Coin Change: The First-Principles Journey
From intuitive greedy attempts to discovering optimal substructure - how a problem-solver uncovers the dynamic programming solution
Coin Change
You are given an integer array coins
representing coins of different denominations and an integer amount
representing a total amount of money.
Return the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return -1.
You may assume that you have an infinite number of each kind of coin.
Example 1:
Input: coins = [1, 2, 5], amount = 11
Output: 3
Explanation: 11 = 5 + 5 + 1
Example 2:
Input: coins = [2], amount = 3
Output: -1
Explanation: Cannot make 3 with only 2-value coins
Example 3:
Input: coins = [1], amount = 0
Output: 0
Explanation: No coins needed for amount 0
Constraints:
- 1 ≤ coins.length ≤ 12
- 1 ≤ coins[i] ≤ 2³¹ - 1
- 0 ≤ amount ≤ 10⁴
DESIGNER'S BRAIN: The Discovery Process
Step 1: What are we REALLY being asked?
Before touching code, let's understand the problem deeply:
coins = [1, 2, 5], amount = 11
Question: What's the MINIMUM number of coins to make 11?
The messy initial thoughts:
"I could use greedy: take largest coin first?"
Let's try:
- Use 5 → remaining = 6
- Use 5 → remaining = 1
- Use 1 → remaining = 0
- Result: 5, 5, 1 → 3 coins ✓
Looks good! But wait... let's test with another example:
coins = [1, 3, 4], amount = 6
Greedy approach:
- Use 4 → remaining = 2
- Use 1 → remaining = 1
- Use 1 → remaining = 0
- Result: 4, 1, 1 → 3 coins
Optimal approach:
- Use 3 → remaining = 3
- Use 3 → remaining = 0
- Result: 3, 3 → 2 coins ✓
Greedy fails! This isn't a greedy problem.
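To make the failure concrete, here is a minimal greedy sketch, not part of the original solution (the class and method names are mine): it always takes the largest coin that still fits.

import java.util.Arrays;

public class GreedyDemo {
    // Greedy: repeatedly take the largest coin that still fits.
    // Returns a coin count, or -1 if greedy gets stuck.
    static int greedyCoinCount(int[] coins, int amount) {
        int[] sorted = coins.clone();
        Arrays.sort(sorted); // ascending order
        int count = 0;
        int remaining = amount;
        while (remaining > 0) {
            boolean used = false;
            for (int i = sorted.length - 1; i >= 0; i--) { // largest coin first
                if (sorted[i] <= remaining) {
                    remaining -= sorted[i];
                    count++;
                    used = true;
                    break;
                }
            }
            if (!used) return -1; // no coin fits
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(greedyCoinCount(new int[]{1, 2, 5}, 11)); // 3 (happens to be optimal)
        System.out.println(greedyCoinCount(new int[]{1, 3, 4}, 6));  // 3, but the true optimum is 2
    }
}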
"Maybe try all combinations?"
Yes! But how do we organize this search efficiently?
Step 2: The KEY INSIGHT (Optimal Substructure)
What property makes this solvable with dynamic programming?
If I know the minimum coins for amounts n-1, n-2, and n-5 (one for each coin), I can find the minimum for n.
Let's discover this through example:
amount = 11, coins = [1, 2, 5]
To make 11, the LAST coin I add MUST be one of 1, 2, or 5.
The key realization:
- If I use coin 1 → I need the min coins for 11 - 1 = 10, plus 1 for this coin
- If I use coin 2 → I need the min coins for 11 - 2 = 9, plus 1 for this coin
- If I use coin 5 → I need the min coins for 11 - 5 = 6, plus 1 for this coin
Answer = the minimum of these three choices!
The "aha moment": The problem breaks into smaller identical problems!
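Written as a recurrence, generalizing the three choices above to any coin set (this is just the insight restated, not new machinery):

minCoins(0) = 0
minCoins(n) = 1 + min{ minCoins(n - c) : c in coins, c ≤ n }   for n > 0
              (treated as ∞ when no combination of coins can reach exactly n)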
Step 3: What's the natural way to explore this?
Recursive thinking (how humans naturally think):
"To solve 11, I need to solve 10, 9, and 6 first"
"To solve 10, I need to solve 9, 8, and 5 first"
And so on...
Base case: amount = 0 needs 0 coins.
The progression of insights:
- First: Recognize we need to try all coins
- Second: Realize each choice leads to a smaller problem
- Third: Identify we're solving the same subproblems repeatedly
- Fourth: Realize we can cache results (memoization)
- Fifth: Notice we can build bottom-up instead
CODER'S BRAIN: Implementation Journey
Version 1: Pure Recursion (The Natural Discovery)
Why start here? Because this mirrors how you'd solve it by hand: no optimization, just raw logic.
class Solution {
public int coinChange(int[] coins, int amount) {
// Edge case: if amount is 0, we need 0 coins
if (amount == 0) return 0;
int result = coinChangeRecursive(coins, amount);
return result == Integer.MAX_VALUE ? -1 : result;
}
private int coinChangeRecursive(int[] coins, int amount) {
// Base case: no amount left = 0 coins needed
if (amount == 0) return 0;
// Invalid case: negative amount = impossible
if (amount < 0) return Integer.MAX_VALUE;
int minCoins = Integer.MAX_VALUE;
// Try using each coin as the "next" coin
for (int coin : coins) {
int subResult = coinChangeRecursive(coins, amount - coin);
// If subproblem was solvable, consider it
if (subResult != Integer.MAX_VALUE) {
minCoins = Math.min(minCoins, subResult + 1);
}
}
return minCoins;
}
}
Let's Trace an Example
coins = [1, 2, 5], amount = 11
coinChange(11)
├── try coin 1 → coinChange(10)
│   ├── try coin 1 → coinChange(9)
│   │   ├── try coin 1 → coinChange(8)
│   │   │   └── ... keeps going
│   │   ├── try coin 2 → coinChange(7)
│   │   └── try coin 5 → coinChange(4)
│   ├── try coin 2 → coinChange(8)
│   └── try coin 5 → coinChange(5)
├── try coin 2 → coinChange(9)
└── try coin 5 → coinChange(6)
    ├── try coin 1 → coinChange(5)   ← LOOK! We solve this AGAIN
    ├── try coin 2 → coinChange(4)
    └── try coin 5 → coinChange(1)
We're recalculating coinChange(5), coinChange(4), etc. MANY times!
Time Complexity: O(coins^amount) - exponential!
Version 2: Top-Down with Memoization (The First Optimization)
"Wait... I'm solving coinChange(5) multiple times. What if I remember the answer?"
Why this data structure?
- Array memo[]: the index is an amount, the value is the minimum coins for that amount. Why an array? Fast O(1) lookup by amount.
- Size: amount + 1, so that amount 0 has an index too.
import java.util.Arrays;

class Solution {
public int coinChange(int[] coins, int amount) {
// memo[i] = minimum coins needed for amount i
// -1 means "not computed yet"
int[] memo = new int[amount + 1];
Arrays.fill(memo, -1);
int result = coinChangeTopDown(coins, amount, memo);
return result == Integer.MAX_VALUE ? -1 : result;
}
private int coinChangeTopDown(int[] coins, int amount, int[] memo) {
// Base cases
if (amount == 0) return 0;
if (amount < 0) return Integer.MAX_VALUE;
// Check if already computed
if (memo[amount] != -1) {
return memo[amount];
}
int minCoins = Integer.MAX_VALUE;
// Try each coin
for (int coin : coins) {
int subResult = coinChangeTopDown(coins, amount - coin, memo);
if (subResult != Integer.MAX_VALUE) {
minCoins = Math.min(minCoins, subResult + 1);
}
}
// Store and return
memo[amount] = minCoins;
return minCoins;
}
}
What Changed?
Before memoization:
coinChange(5) called 10 times → recalculated from scratch each time
After memoization:
coinChange(5) called 10 times → computed once, served from the cache the other 9 times
Time Complexity: O(amount × coins)
- Each of the amount states is computed once
- Each computation tries all coins
Space Complexity: O(amount) for the memo + O(amount) for the recursion stack
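If you want to see this difference rather than take it on faith, here is a small instrumentation sketch (the counters and class name are mine, purely for illustration) that counts recursive calls with and without the memo:

import java.util.Arrays;

public class CallCounter {
    static long naiveCalls = 0;
    static long memoCalls = 0;

    // Plain recursion: counts every invocation.
    static int naive(int[] coins, int amount) {
        naiveCalls++;
        if (amount == 0) return 0;
        if (amount < 0) return Integer.MAX_VALUE;
        int best = Integer.MAX_VALUE;
        for (int coin : coins) {
            int sub = naive(coins, amount - coin);
            if (sub != Integer.MAX_VALUE) best = Math.min(best, sub + 1);
        }
        return best;
    }

    // Memoized recursion: counts every invocation, including cache hits.
    static int memoized(int[] coins, int amount, int[] memo) {
        memoCalls++;
        if (amount == 0) return 0;
        if (amount < 0) return Integer.MAX_VALUE;
        if (memo[amount] != -1) return memo[amount];
        int best = Integer.MAX_VALUE;
        for (int coin : coins) {
            int sub = memoized(coins, amount - coin, memo);
            if (sub != Integer.MAX_VALUE) best = Math.min(best, sub + 1);
        }
        memo[amount] = best;
        return best;
    }

    public static void main(String[] args) {
        int[] coins = {1, 2, 5};
        int amount = 25;
        naive(coins, amount);
        int[] memo = new int[amount + 1];
        Arrays.fill(memo, -1);
        memoized(coins, amount, memo);
        System.out.println("naive calls:    " + naiveCalls); // grows exponentially with amount
        System.out.println("memoized calls: " + memoCalls);  // roughly amount * coins
    }
}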
Version 3: Bottom-Up DP (The Ultimate Insight)
"I'm building from smaller amounts to larger. Why not just... start from smallest?"
Chronological discovery:
- "To find answer for 11, I need answers for smaller amounts"
- "What if I compute 1, then 2, then 3... up to 11?"
- "Then when I reach 11, all smaller answers are ready!"
- "No recursion needed!"
Why this approach?
Prerequisite insight: If we solve problems in increasing order of amount, each problem's dependencies are already solved.
import java.util.Arrays;

class Solution {
public int coinChange(int[] coins, int amount) {
// dp[i] = minimum coins to make amount i
int[] dp = new int[amount + 1];
// Initialize: all amounts are impossible by default
Arrays.fill(dp, amount + 1); // Using amount+1 as "infinity"
// Base case: 0 amount needs 0 coins
dp[0] = 0;
// Build up from amount 1 to target amount
for (int currentAmount = 1; currentAmount <= amount; currentAmount++) {
// Try using each coin for current amount
for (int coin : coins) {
// Can we use this coin?
if (currentAmount >= coin) {
// dp[currentAmount] = min of:
// - current best
// - using this coin + best for remaining amount
dp[currentAmount] = Math.min(
dp[currentAmount],
dp[currentAmount - coin] + 1
);
}
}
}
// If still "infinity", it's impossible
return dp[amount] > amount ? -1 : dp[amount];
}
}
Detailed Walkthrough
coins = [1, 2, 5], amount = 11
Initial state:
dp = [0, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞]
      ↑
   amount = 0
Building amount = 1:
- try coin 1: dp[1] = min(∞, dp[0]+1) = 1
- try coin 2: can't use (1 < 2)
- try coin 5: can't use (1 < 5)
dp = [0, 1, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞]
Building amount = 2:
- try coin 1: dp[2] = min(∞, dp[1]+1) = 2
- try coin 2: dp[2] = min(2, dp[0]+1) = 1 ← Better!
- try coin 5: can't use
dp = [0, 1, 1, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞]
Building amount = 3:
- try coin 1: dp[3] = min(∞, dp[2]+1) = 2
- try coin 2: dp[3] = min(2, dp[1]+1) = 2
- try coin 5: can't use
dp = [0, 1, 1, 2, ∞, ∞, ∞, ∞, ∞, ∞, ∞, ∞]
Building amount = 5 (amount 4 is built the same way and ends up at dp[4] = 2):
- try coin 1: dp[5] = min(∞, dp[4]+1) = 3
- try coin 2: dp[5] = min(3, dp[3]+1) = 3
- try coin 5: dp[5] = min(3, dp[0]+1) = 1 ← Best!
dp = [0, 1, 1, 2, 2, 1, ∞, ∞, ∞, ∞, ∞, ∞]
... continuing ...
Final state:
dp = [0, 1, 1, 2, 2, 1, 2, 2, 3, 3, 2, 3]
                                        ↑
                            answer = dp[11] = 3
Complexity Analysis:
- Time Complexity: O(amount × coins)
- Space Complexity: O(amount) - no recursion stack!
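To reproduce the walkthrough above yourself, a throwaway sketch like this (the class name is mine) prints the dp array after each amount is built:

import java.util.Arrays;

public class DpTrace {
    public static void main(String[] args) {
        int[] coins = {1, 2, 5};
        int amount = 11;
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, amount + 1); // amount + 1 acts as "infinity"
        dp[0] = 0;
        for (int cur = 1; cur <= amount; cur++) {
            for (int coin : coins) {
                if (cur >= coin) {
                    dp[cur] = Math.min(dp[cur], dp[cur - coin] + 1);
                }
            }
            // Values still at amount + 1 are unreached so far.
            System.out.println("after amount " + cur + ": " + Arrays.toString(dp));
        }
        System.out.println("answer: " + (dp[amount] > amount ? -1 : dp[amount])); // 3
    }
}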
EDGE CASES: Validate Your Understanding
Edge Case 1: Impossible Amount
coins = [2], amount = 3
All coins are even, can't make odd amount.
How does our code handle it?
dp = [0, ∞, 1, ∞]
         ↑      ↑
     can't    still ∞
     make 1   → return -1
Edge Case 2: Amount = 0
coins = [1], amount = 0
Answer: 0 coins (base case)
Edge Case 3: Only One Coin Type
coins = [5], amount = 11
Not possible since 11 is not divisible by 5.
dp = [0, ∞, ∞, ∞, ∞, 1, ∞, ∞, ∞, ∞, 2, ∞]
                                        ↑
                                  still ∞ → return -1
Edge Case 4: Coin Larger Than Amount
coins = [10], amount = 5
Our loop skips it because currentAmount >= coin is false.
The condition naturally handles this edge case!
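A quick self-check sketch for these four cases, assuming the bottom-up Solution class above is on the classpath (run with java -ea to enable assertions):

public class EdgeCaseCheck {
    public static void main(String[] args) {
        Solution s = new Solution();
        // Edge case 1: impossible amount
        assert s.coinChange(new int[]{2}, 3) == -1;
        // Edge case 2: amount = 0
        assert s.coinChange(new int[]{1}, 0) == 0;
        // Edge case 3: only one coin type, amount not divisible
        assert s.coinChange(new int[]{5}, 11) == -1;
        // Edge case 4: coin larger than amount
        assert s.coinChange(new int[]{10}, 5) == -1;
        System.out.println("All edge cases pass.");
    }
}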
META-PROCESS: How to Approach ANY DP Problem
1. Identify if it's DP:
- Are you finding optimal value? (min/max/count)
- Can you break into smaller identical subproblems?
- Do subproblems overlap?
2. Define the state:
- "What information do I need to solve this?"
- Here: Just the amount remaining
3. Find the recurrence:
- "How does this state relate to smaller states?"
- Here: dp[i] = min(dp[i - coin] + 1) over all coins
4. Identify base case:
- "What's the simplest case I know the answer to?"
- Here: dp[0] = 0
5. Decide direction:
- Top-down (recursion + memo): Natural thinking
- Bottom-up (iteration): Optimal implementation
CONNECTING TO PRIOR KNOWLEDGE
This is like:
- Fibonacci: Building answers from smaller problems
- Climbing Stairs: Choose from multiple options at each step
- Graph traversal: Exploring all paths, finding shortest
Distinctive differences:
- Unlike Fibonacci, which looks back exactly 2 steps, here we look back by each coin's value
- Closely related to BFS on a graph of amounts (each coin is an edge of weight 1); the DP fills answers in increasing order instead of exploring level by level
- Unbounded: can use the same coin multiple times (vs 0/1 knapsack)
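Since the graph-search connection comes up, here is a hedged sketch of that alternative view (class and method names are mine, and this is not the article's main solution): treat each amount as a node, each coin as an edge of weight 1, and return the BFS level at which 0 is first reached.

import java.util.ArrayDeque;
import java.util.Queue;

public class CoinChangeBfs {
    // BFS over amounts: the first time 0 is reached, the level equals the minimum coin count.
    static int coinChangeBfs(int[] coins, int amount) {
        if (amount == 0) return 0;
        boolean[] visited = new boolean[amount + 1];
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(amount);
        visited[amount] = true;
        int level = 0;
        while (!queue.isEmpty()) {
            level++;
            for (int size = queue.size(); size > 0; size--) {
                int cur = queue.poll();
                for (int coin : coins) {
                    int next = cur - coin;
                    if (next == 0) return level;      // reached 0 using `level` coins
                    if (next > 0 && !visited[next]) {
                        visited[next] = true;
                        queue.add(next);
                    }
                }
            }
        }
        return -1; // 0 was never reached
    }

    public static void main(String[] args) {
        System.out.println(coinChangeBfs(new int[]{1, 2, 5}, 11)); // 3
        System.out.println(coinChangeBfs(new int[]{2}, 3));        // -1
    }
}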
Complete Implementations
JavaScript Implementation
/**
* @param {number[]} coins
* @param {number} amount
* @return {number}
*/
var coinChange = function(coins, amount) {
// dp[i] = minimum coins to make amount i
const dp = new Array(amount + 1).fill(amount + 1);
// Base case: 0 amount needs 0 coins
dp[0] = 0;
// Build up from amount 1 to target
for (let currentAmount = 1; currentAmount <= amount; currentAmount++) {
// Try each coin
for (const coin of coins) {
if (currentAmount >= coin) {
dp[currentAmount] = Math.min(
dp[currentAmount],
dp[currentAmount - coin] + 1
);
}
}
}
// If still "infinity", impossible
return dp[amount] > amount ? -1 : dp[amount];
};
Java Implementation
import java.util.Arrays;

class Solution {
public int coinChange(int[] coins, int amount) {
// dp[i] = minimum coins to make amount i
int[] dp = new int[amount + 1];
// Initialize with "infinity" (amount + 1 works)
Arrays.fill(dp, amount + 1);
// Base case: 0 amount needs 0 coins
dp[0] = 0;
// Build up from amount 1 to target
for (int currentAmount = 1; currentAmount <= amount; currentAmount++) {
// Try each coin
for (int coin : coins) {
if (currentAmount >= coin) {
dp[currentAmount] = Math.min(
dp[currentAmount],
dp[currentAmount - coin] + 1
);
}
}
}
// If still "infinity", impossible
return dp[amount] > amount ? -1 : dp[amount];
}
}
Deep Understanding Questions
Question 1: Why can't we sort coins and use greedy?
Answer:
Greedy algorithms make locally optimal choices, but coin change requires globally optimal solutions.
Example: coins = [1, 3, 4], amount = 6
- Greedy: 4 + 1 + 1 = 3 coins
- Optimal: 3 + 3 = 2 coins
The largest coin isn't always part of the optimal solution!
Question 2: What if coins = [1]? Does our solution simplify?
Answer:
Yes! If we only have coin value 1:
dp[i] = dp[i-1] + 1 = i
- We need exactly amount coins
- The DP simplifies to a direct calculation
But our general solution still works correctly!
Question 3: How would you modify this to return the actual coins used?
Answer:
Track the path:
int[] dp = new int[amount + 1];
int[] parent = new int[amount + 1]; // Track which coin was used
// During DP:
if (dp[currentAmount - coin] + 1 < dp[currentAmount]) {
dp[currentAmount] = dp[currentAmount - coin] + 1;
parent[currentAmount] = coin; // Remember the coin
}
// Backtrack to find coins:
List<Integer> coins = new ArrayList<>();
int curr = amount;
while (curr > 0) {
coins.add(parent[curr]);
curr -= parent[curr];
}
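Putting those pieces together into a runnable sketch (the method name coinChangeWithCoins is mine; it returns an empty list when the amount is impossible):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CoinReconstruction {
    // Returns one optimal multiset of coins, or an empty list if the amount is impossible.
    static List<Integer> coinChangeWithCoins(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        int[] parent = new int[amount + 1]; // which coin produced the optimum at each amount
        Arrays.fill(dp, amount + 1);
        Arrays.fill(parent, -1);
        dp[0] = 0;
        for (int cur = 1; cur <= amount; cur++) {
            for (int coin : coins) {
                if (cur >= coin && dp[cur - coin] + 1 < dp[cur]) {
                    dp[cur] = dp[cur - coin] + 1;
                    parent[cur] = coin; // remember the coin
                }
            }
        }
        List<Integer> used = new ArrayList<>();
        if (dp[amount] > amount) return used; // impossible
        for (int cur = amount; cur > 0; cur -= parent[cur]) {
            used.add(parent[cur]);
        }
        return used;
    }

    public static void main(String[] args) {
        System.out.println(coinChangeWithCoins(new int[]{1, 2, 5}, 11)); // e.g. [1, 5, 5]
        System.out.println(coinChangeWithCoins(new int[]{2}, 3));        // []
    }
}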
Question 4: What if we wanted the MAXIMUM number of coins?
Answer:
Change Math.min to Math.max, and also change the "infinity" handling: initialize unreachable amounts to -1 and only extend from amounts that are reachable, since the amount + 1 sentinel from the minimization version would be treated as a huge (wrong) maximum.
This finds the way to make the amount using the most coins possible; see the sketch after the example below.
For coins = [1, 2, 5], amount = 11:
- Min: 3 coins (5 + 5 + 1)
- Max: 11 coins (1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1)
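A sketch of that variant (names are mine; note the -1 sentinel for unreachable amounts, which replaces the amount + 1 "infinity" used in the minimization version):

public class MaxCoins {
    // Maximum number of coins that can sum to `amount`, or -1 if impossible.
    static int maxCoinChange(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        java.util.Arrays.fill(dp, -1); // -1 = unreachable
        dp[0] = 0;
        for (int cur = 1; cur <= amount; cur++) {
            for (int coin : coins) {
                // Only extend states that are actually reachable.
                if (cur >= coin && dp[cur - coin] != -1) {
                    dp[cur] = Math.max(dp[cur], dp[cur - coin] + 1);
                }
            }
        }
        return dp[amount];
    }

    public static void main(String[] args) {
        System.out.println(maxCoinChange(new int[]{1, 2, 5}, 11)); // 11 (all 1-value coins)
        System.out.println(maxCoinChange(new int[]{2}, 3));        // -1
    }
}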
Key Takeaways
1. Pattern Recognition:
- Greedy doesn't work โ need to explore all possibilities
- Overlapping subproblems โ perfect for DP
2. State Definition:
- dp[i] = minimum coins needed for amount i
- A simple 1D array suffices
3. Recurrence Relation:
- For each coin, try using it and take the minimum
dp[i] = min(dp[i], dp[i - coin] + 1)
4. Base Case:
- dp[0] = 0 (no coins needed for zero amount)
5. Direction:
- Build from bottom-up (1 to amount)
- Each state depends only on smaller amounts
Related Problems
Same pattern, different questions:
- Coin Change ← You are here
- Question: Minimum coins to make amount?
- Coin Change II (LC 518)
- Question: How many ways to make amount?
- Change: Count combinations instead of minimum
- Minimum Cost For Tickets (LC 983)
- Question: Minimum cost for travel days?
- Similar: Choose from multiple options repeatedly
- Perfect Squares (LC 279)
- Question: Minimum perfect squares to sum to n?
- Same: Unbounded choice, minimize count
Where Might Confusion Still Lurk?
Reflect on these questions:
- Why does dp[i - coin] + 1 represent using a coin?
- Why do we initialize with amount + 1 instead of Integer.MAX_VALUE?
- How does the condition currentAmount >= coin prevent errors?
- Why can we use the same coin multiple times?
Understanding your confusion points is how you deepen your mastery.
The questions you ask reveal the insights you're about to discover.
The Meta-Lesson
From confusion to clarity:
- "Try greedy" → Fails on some inputs
- "Try all possibilities" → Too slow
- "Cache repeated work" → Memoization works
- "Build bottom-up" → Optimal solution
The pattern emerges:
- Recognize optimal substructure
- Identify overlapping subproblems
- Define state and recurrence
- Choose implementation strategy
Master this process, not just this problem.
The thinking framework transfers to hundreds of DP problems.
The problem transformation is the breakthrough.
Once you see "minimum coins" as "minimize over all coin choices at each step," the solution becomes a natural application of dynamic programming.
Master the transformation, and the implementation follows naturally.