
Why Greedy Fails: Discovering Dynamic Programming Through Coins

· 18 min read
Mahmut Salman
Software Developer

A conversation about solving the Minimum Coin Combination problem that reveals why greedy algorithms sometimes fail—and how recognizing overlapping subproblems is the foundation of dynamic programming thinking.


The Problem

Teacher: Today we're tackling one of the most classic dynamic programming problems in interviews. Let me show you the Minimum Coin Combination problem:

Problem: Minimum Coin Combination

You are given an array of coin values and a target amount of money. Return the minimum number of coins needed to total the target amount. If this isn't possible, return -1. You may assume there's an unlimited supply of each coin.

Example 1:

Input: coins = [1, 2, 3], target = 5
Output: 2
Explanation: Use one 2-dollar coin and one 3-dollar coin to make 5 dollars.

Example 2:

Input: coins = [2, 4], target = 5
Output: -1
Explanation: Cannot make 5 dollars with only 2 and 4 dollar coins.

Understanding the Problem First

Teacher: Before we jump into coding, let's make sure we understand what we're looking for.

You have:

  • An array of coin denominations (like [1, 2, 3])
  • A target amount to make (like 5 dollars)
  • Unlimited supply of each coin type

You need to find: The minimum number of coins to make that exact target.

Question for you: If you had coins [1, 2, 3] and needed to make 5 dollars, what are ALL the possible ways you could make it?

Think about:

  • Could you use only 1-dollar coins?
  • Could you mix different coins?
  • Which combination uses the fewest coins?

Me: Hmm, let me think...

Possible ways:

  1. Five 1-dollar coins: 1+1+1+1+1 = 5 coins
  2. One 2-dollar + three 1-dollar: 2+1+1+1 = 4 coins
  3. Two 2-dollar + one 1-dollar: 2+2+1 = 3 coins
  4. One 3-dollar + two 1-dollar: 3+1+1 = 3 coins
  5. One 3-dollar + one 2-dollar: 3+2 = 2 coins

The minimum is 2 coins!


My Initial Intuition: Go Greedy!

Me: I think I see a pattern. We use fewer coins when we use bigger denominations, right?

So maybe I should always pick the biggest coin first? That would give me fewer coins!

Teacher: Exactly! Your instinct is right - using bigger coins usually means fewer coins total. That's greedy thinking, which is great!

But here's where it gets interesting... Let me challenge your thinking a bit.


The Counterexample That Breaks Greedy

Teacher: Consider this scenario:

Coins: [1, 3, 4]
Target: 6

If we always pick the biggest coin first (greedy approach):

Greedy attempt:

  1. Take 4 → remaining = 2
  2. Can't use 4 or 3 → take 1 → remaining = 1
  3. Take 1 → remaining = 0

Total: 3 coins (4 + 1 + 1)

But wait... what if we used two 3-dollar coins instead?

Better solution:

  1. Take 3 → remaining = 3
  2. Take 3 → remaining = 0

Total: 2 coins (3 + 3) ✨

Me: Oh no! So the greedy approach doesn't always work!

Teacher: Exactly! This is a perfect example of why greedy algorithms don't always find the optimal solution.
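
To make the failure concrete, here is a minimal greedy sketch you could run (the class and helper names are just illustrative, not part of the original discussion). It always takes the largest coin that still fits, and on coins=[1,3,4] with target 6 it returns 3 even though the optimal answer is 2:

import java.util.Arrays;

public class GreedyDemo {
    // Hypothetical helper: always take the largest coin that still fits.
    // Returns a coin count, or -1 if some amount is left over.
    static int greedyCount(int[] coins, int target) {
        int[] sorted = coins.clone();
        Arrays.sort(sorted); // ascending; we walk it from the largest end
        int count = 0;
        for (int i = sorted.length - 1; i >= 0 && target > 0; i--) {
            while (target >= sorted[i]) {
                target -= sorted[i];
                count++;
            }
        }
        return target == 0 ? count : -1;
    }

    public static void main(String[] args) {
        System.out.println(greedyCount(new int[]{1, 3, 4}, 6)); // 3, but the optimum is 2 (3 + 3)
        System.out.println(greedyCount(new int[]{1, 2, 3}, 5)); // 2, here greedy happens to be optimal
    }
}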


The Key Insight: Overlapping Subproblems

Teacher: Now, let's think differently. Here's the key question to crack this problem:

If you knew the minimum coins needed for amounts 0, 1, 2, 3, and 4... could you figure out the minimum coins for amount 5?

Think about it this way - to make 5 dollars, you could:

  1. Add a 1-dollar coin to the solution for 4 dollars
  2. Add a 2-dollar coin to the solution for 3 dollars
  3. Add a 3-dollar coin to the solution for 2 dollars

What pattern do you see here? How would you pick which option gives the minimum?

Me: Wait... I think you're trying to show me the recurring subproblems!

I get it! If I know the answer for target x-1, I can add 1 to it, since there's only one way to bridge that gap: using a coin with value 1.

But the thing you're showing me is different and more important...


My Learning Breakthrough

Me: You said:

  • You could add a 1-dollar coin to the solution for 4 dollars
  • You could add a 2-dollar coin to the solution for 3 dollars
  • You could add a 3-dollar coin to the solution for 2 dollars

You're showing me the different ways for reaching target 5.

I know that for DP problems, I first need to find an unoptimized recursive solution, then an optimized recursive solution using memoization; by gathering the core logic from those two, it becomes clearer how to build a bottom-up solution.

At least this is what I do.

But what you're showing me is far more important, because it captures the main idea: recognizing this problem as one made of recurring subproblems. That is the foundation of DP and recursion problems - recognizing subproblems.

Now I realize that even though I know that approach sequence, my brain tends to answer your question by "changing the target" (computing all the answers), and that habit doesn't help when I see a question for the first time and need to decide which algorithmic approach will work.

During real interviews, memorization won't help. Instead, I need to grasp the core logic and reason intuitively in the moment. To do that, I need a strong foundation and real understanding.

Thank you for asking me that question. I realize my previous approach wasn't exactly wrong, but it wasn't helpful for finding unoptimized recursive solutions. What I need to do is find the recurring subproblems. I already knew that, but I had never actually used that thought process before.

I am learning. This is good.

Teacher: Wow, that's a brilliant reflection! You just had a major breakthrough. 🎯

You're absolutely right - memorizing the tabulation approach won't help in interviews. You need to see the subproblems first. Let's build that intuition together!


Building the Recursive Intuition

Teacher: Let's start fresh with recursive thinking. Forget tables for now.

The Core Question

At any point, when you're trying to make amount target, you have a decision to make:

Which coin should I use?

Once you pick a coin, say coin value c, you've reduced your problem to:

"Now make amount target - c with minimum coins"

So here's the recursive structure emerging:

To find minimum coins for amount 5:
→ Try using coin 1: 1 + (minimum coins for amount 4)
→ Try using coin 2: 1 + (minimum coins for amount 3)
→ Try using coin 3: 1 + (minimum coins for amount 2)
→ Pick the minimum of all these options

Question 1: What would be the base cases? Think about:

  • What if target is 0?
  • What if target is negative?

Question 2: Can you write the recursive function signature and pseudocode?

public int func(int[] arr, int target) {
    // What goes here?
}

Understanding Two Perspectives

Me: But somehow these two seem different:

First version:

  • You could add a 1-dollar coin to the solution for 4 dollars
  • You could add a 2-dollar coin to the solution for 3 dollars
  • You could add a 3-dollar coin to the solution for 2 dollars

Second version:

To find minimum coins for amount 5:
→ Try using coin 1: 1 + (minimum coins for amount 4)
→ Try using coin 2: 1 + (minimum coins for amount 3)
→ Try using coin 3: 1 + (minimum coins for amount 2)

In the latter, I feel like I'm just trying all possibilities - which is actually what I do. But in the former I see something different: the subproblems are more apparent there.

Teacher: Excellent observation! You're picking up on something subtle but important.

The first framing ("add a coin to a solution") makes you think:

  • "I already have the optimal solution for 4, now I just add one more coin"
  • This feels like building up from smaller solutions

The second framing ("try using coin X") makes you think:

  • "I'm trying all possibilities exhaustively"
  • This feels like breaking down the problem

Both are the same recursive structure, just different mental models!

The key insight is: when you "try using coin 1", you're essentially saying "I commit to using this coin, now solve the smaller problem."


My First Attempt at Recursion

Me: About recursive calls I'd make:

func() {
    x = func(using denomination 1);
    y = func(denomination 2);
    z = func(denomination 3);

    return Math.min(x, y, z);
}

Teacher: You're on the right track! But let me push your thinking:

func() {
    x = func(using denomination 1); // But what parameters?
    y = func(denomination 2);       // How do you "use" a coin?
    z = func(denomination 3);

    return Math.min(x, y, z);
}

Key questions:

  1. When you "use" a coin of value 3, what happens to your target?
  2. If target was 5, after using coin 3, your new target is...?
  3. What if func returns -1 (impossible to make that amount)?
  4. Can you still do Math.min(x, y, z) if some are -1?

Base cases:

if (target == 0) return ??;  // No more money needed
if (target < 0) return ??; // Overshot - impossible path

The Representation Insight

Me: Amazing question!

I learned that knowing how to represent the real world in code is just as important as understanding DP and recursion.

Using a coin can be represented as reducing the target by that coin's value, and we can carry that information in the function's parameters (the state).

So if we use coin 1, the function call would be func(target - 1).

It's great that you brought up "// How do you 'use' a coin?" because I've struggled with exactly that before!

About handling -1: could I treat impossible amounts as a special value like Integer.MAX_VALUE? I remember Java has that constant, so Math.min wouldn't accidentally pick an impossible path and produce a wrong result.

Teacher: You're absolutely right - representing the real world in code is half the battle! And yes, Integer.MAX_VALUE is a clever way to handle impossible cases.
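
As a quick aside, here is one small sketch of that sentinel idea (the class and helper names are illustrative, not from the discussion): map an impossible subresult (-1) to Integer.MAX_VALUE so Math.min can never accidentally prefer an impossible path.

public class SentinelSketch {
    // Hypothetical helper: turn a subproblem result into a candidate coin count.
    // -1 (impossible) maps to Integer.MAX_VALUE, so it loses every Math.min comparison;
    // otherwise we add 1 for the coin we just used.
    static int asCandidate(int subResult) {
        return (subResult == -1) ? Integer.MAX_VALUE : 1 + subResult;
    }

    public static void main(String[] args) {
        int minCoins = Integer.MAX_VALUE;
        // Pretend two subproblems returned these results: one impossible, one needing 1 coin.
        minCoins = Math.min(minCoins, asCandidate(-1)); // stays MAX_VALUE
        minCoins = Math.min(minCoins, asCandidate(1));  // becomes 2
        System.out.println(minCoins); // 2
    }
}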


Completing the Recursive Structure

Teacher: Since you're getting tired, let me give you the structure, and you just need to fill in a few blanks:

public int func(int[] arr, int target) {
    // Base case 1: We made exact change!
    if (target == 0) {
        return ____; // How many coins to make 0?
    }

    // Base case 2: We overshot
    if (target < 0) {
        return ____; // What signals "impossible"?
    }

    int minCoins = Integer.MAX_VALUE;

    // Try each coin
    for (int i = 0; i < arr.length; i++) {
        int result = func(arr, target - arr[i]);

        // If this path was possible...
        if (result != -1) {
            minCoins = Math.min(minCoins, ____ + result);
            //                            ^^^^
            //                            We used one coin!
        }
    }

    // Did we find any valid combination?
    if (minCoins == Integer.MAX_VALUE) {
        return ____;
    }
    return minCoins;
}

Fill in these 4 blanks:

  1. Coins needed for 0?
  2. What to return for impossible?
  3. What to add to result?
  4. No valid path found?

My Answers

Me:

  1. When we hit the target, I mean if target is 0, then we should return 0
  2. If path is impossible, we need to return -1
  3. About what to add to result: we need to increase it by one
  4. I don't see any difference between "no valid path found" and "return for impossible," so again we return -1?

Teacher: Perfect! You got ALL of them right! 🎉

Let me clarify the one confusion you have:

"Path impossible" (target < 0):

  • We went too far! Like trying to make 5 with a 7-dollar coin.

"No valid path found" (minCoins == Integer.MAX_VALUE):

  • We tried ALL coins, none worked. Like trying to make 5 with only [2, 4] coins.

The Complete Recursive Solution

Here's your complete solution:

public int func(int[] arr, int target) {
    // Base case 1: We made exact change!
    if (target == 0) {
        return 0; // ✅ Zero coins needed!
    }

    // Base case 2: We overshot
    if (target < 0) {
        return -1; // ✅ This path is impossible
    }

    int minCoins = Integer.MAX_VALUE;

    // Try each coin
    for (int i = 0; i < arr.length; i++) {
        int result = func(arr, target - arr[i]);

        // If this path was possible...
        if (result != -1) {
            minCoins = Math.min(minCoins, 1 + result); // ✅ We used ONE coin!
        }
    }

    // Did we find any valid combination?
    if (minCoins == Integer.MAX_VALUE) {
        return -1; // ✅ No way to make this amount
    }
    return minCoins;
}

Complexity:

  • Time: O(coins^target) - exponential! Many repeated subproblems
  • Space: O(target) - recursion stack depth

Tracing Through an Example

Let's trace coins=[1, 2, 3], target=5:

func(5)
├─ Try coin 1: 1 + func(4)
│   ├─ Try coin 1: 1 + func(3)
│   │   ├─ Try coin 3: 1 + func(0)
│   │   │   └─ func(0) = 0
│   │   │   Result: 1 + 0 = 1
│   │   Return: 1
│   │   Result: 1 + 1 = 2
│   Return: 2
│   Result: 1 + 2 = 3

├─ Try coin 2: 1 + func(3)
│   ├─ Try coin 3: 1 + func(0)
│   │   └─ func(0) = 0
│   │   Result: 1 + 0 = 1
│   Return: 1
│   Result: 1 + 1 = 2

├─ Try coin 3: 1 + func(2)
│   ├─ Try coin 2: 1 + func(0)
│   │   └─ func(0) = 0
│   │   Result: 1 + 0 = 1
│   Return: 1
│   Result: 1 + 1 = 2

Return: min(3, 2, 2) = 2 ✅

Notice how func(0) gets called multiple times? That's the overlapping subproblems!


Visualizing the Recursion Tree

For coins=[1, 2, 3], target=5:

                 func(5)
               /    |    \
          coin 1  coin 2  coin 3
            ↓       ↓       ↓
        func(4)  func(3)  func(2)
        / | \    / | \    / | \
       1  2  3  1  2  3  1  2  3
       ↓  ↓  ↓  ↓  ↓  ↓  ↓  ↓  ↓
      (3)(2)(1)(2)(1)(0)(1)(0)(-1)

See all the repeated work? func(3), func(2), func(1), func(0) are calculated multiple times!

This is why we need memoization - to cache these repeated calculations.


Testing the Solution

public class MinimumCoinCombination {
    public static int func(int[] coins, int target) {
        // Base case 1: Exact change
        if (target == 0) return 0;

        // Base case 2: Overshot
        if (target < 0) return -1;

        int minCoins = Integer.MAX_VALUE;

        // Try each coin
        for (int i = 0; i < coins.length; i++) {
            int result = func(coins, target - coins[i]);

            if (result != -1) {
                minCoins = Math.min(minCoins, 1 + result);
            }
        }

        return (minCoins == Integer.MAX_VALUE) ? -1 : minCoins;
    }

    public static void main(String[] args) {
        // Test case 1: Example 1
        int[] coins1 = {1, 2, 3};
        int target1 = 5;
        System.out.println(func(coins1, target1)); // Expected: 2

        // Test case 2: Example 2 (impossible)
        int[] coins2 = {2, 4};
        int target2 = 5;
        System.out.println(func(coins2, target2)); // Expected: -1

        // Test case 3: Greedy fails
        int[] coins3 = {1, 3, 4};
        int target3 = 6;
        System.out.println(func(coins3, target3)); // Expected: 2 (3+3, not 4+1+1)

        // Test case 4: Single coin
        int[] coins4 = {5};
        int target4 = 10;
        System.out.println(func(coins4, target4)); // Expected: 2

        // Test case 5: Target is 0
        int[] coins5 = {1, 2, 3};
        int target5 = 0;
        System.out.println(func(coins5, target5)); // Expected: 0
    }
}

Understanding the Inefficiency

Let's count how many times we call func() for target=5:

func(5) → 1 call
func(4), func(3), func(2) → 3 calls
func(3), func(2), func(1), func(2), func(1), func(0), func(1), func(0), func(-1) → 9 calls
...

For target=5, we might call func() dozens of times!

For target=100, we'd call it millions of times!

This is exponential time complexity: O(coins^target)

Teacher: Can you see how we're solving the same subproblems multiple times (like func(0) gets called repeatedly)?

Me: Yes! I can see it clearly now in the trace. We keep recalculating func(3), func(2), func(1), func(0) over and over!
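
If you'd rather see that repetition than take it on faith, here is a small instrumented sketch (the class name and the calls map are assumptions for illustration, not part of the original code) that tallies how many times the plain recursion visits each target value:

import java.util.Map;
import java.util.TreeMap;

public class CallCounter {
    // Records how many times the naive recursion is called for each target value.
    static final Map<Integer, Integer> calls = new TreeMap<>();

    static int func(int[] coins, int target) {
        calls.merge(target, 1, Integer::sum); // count this visit
        if (target == 0) return 0;
        if (target < 0) return -1;

        int minCoins = Integer.MAX_VALUE;
        for (int coin : coins) {
            int result = func(coins, target - coin);
            if (result != -1) minCoins = Math.min(minCoins, 1 + result);
        }
        return (minCoins == Integer.MAX_VALUE) ? -1 : minCoins;
    }

    public static void main(String[] args) {
        func(new int[]{1, 2, 3}, 5);
        System.out.println(calls); // small targets like 0, 1, and 2 show up several times each
    }
}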


The Path Forward: Memoization

Teacher: This is where memoization comes in! If we store the result of func(3) the first time we calculate it, we can reuse it instead of recalculating.

Memoization approach:

public int func(int[] coins, int target, int[] memo) {
    if (target == 0) return 0;
    if (target < 0) return -1;

    // Check if we've already calculated this
    // (the caller fills memo with -2, meaning "not calculated yet")
    if (memo[target] != -2) {
        return memo[target];
    }

    int minCoins = Integer.MAX_VALUE;

    for (int coin : coins) {
        int result = func(coins, target - coin, memo);
        if (result != -1) {
            minCoins = Math.min(minCoins, 1 + result);
        }
    }

    memo[target] = (minCoins == Integer.MAX_VALUE) ? -1 : minCoins;
    return memo[target];
}

Complexity with memoization:

  • Time: O(coins × target) - each subproblem solved once
  • Space: O(target) - for memo array

Huge improvement! From exponential to polynomial time!
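
One detail the snippet above leaves to the caller is initializing the memo array. A minimal driver sketch (the wrapper name minCoinsMemo and the class name are illustrative, not from the original post) could look like this:

import java.util.Arrays;

public class MemoDriver {
    // Hypothetical wrapper: fill the memo with -2 ("not calculated yet")
    // before making the first recursive call.
    public static int minCoinsMemo(int[] coins, int target) {
        int[] memo = new int[target + 1];
        Arrays.fill(memo, -2); // -2 = not computed, -1 = impossible, >= 0 = answer
        return func(coins, target, memo);
    }

    // The same memoized recursion as above, repeated so this sketch compiles on its own.
    static int func(int[] coins, int target, int[] memo) {
        if (target == 0) return 0;
        if (target < 0) return -1;
        if (memo[target] != -2) return memo[target];

        int minCoins = Integer.MAX_VALUE;
        for (int coin : coins) {
            int result = func(coins, target - coin, memo);
            if (result != -1) minCoins = Math.min(minCoins, 1 + result);
        }
        memo[target] = (minCoins == Integer.MAX_VALUE) ? -1 : minCoins;
        return memo[target];
    }

    public static void main(String[] args) {
        System.out.println(minCoinsMemo(new int[]{1, 2, 3}, 5)); // 2
        System.out.println(minCoinsMemo(new int[]{2, 4}, 5));    // -1
    }
}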


Bottom-Up (Tabulation) Approach

Once you understand the recursive pattern, the bottom-up approach becomes clear:

public int coinChange(int[] coins, int amount) {
    int[] dp = new int[amount + 1];
    Arrays.fill(dp, Integer.MAX_VALUE);
    dp[0] = 0; // Base case: 0 coins to make 0

    // Build up from 1 to amount
    for (int i = 1; i <= amount; i++) {
        for (int coin : coins) {
            if (coin <= i && dp[i - coin] != Integer.MAX_VALUE) {
                dp[i] = Math.min(dp[i], 1 + dp[i - coin]);
            }
        }
    }

    return dp[amount] == Integer.MAX_VALUE ? -1 : dp[amount];
}

Trace for coins=[1,2,3], target=5:

dp[0] = 0  (base case)

i=1: Try coins [1,2,3]
coin=1: dp[1] = min(∞, 1 + dp[0]) = min(∞, 1) = 1
dp[1] = 1

i=2: Try coins [1,2,3]
coin=1: dp[2] = min(∞, 1 + dp[1]) = min(∞, 2) = 2
coin=2: dp[2] = min(2, 1 + dp[0]) = min(2, 1) = 1
dp[2] = 1

i=3: Try coins [1,2,3]
coin=1: dp[3] = min(∞, 1 + dp[2]) = min(∞, 2) = 2
coin=2: dp[3] = min(2, 1 + dp[1]) = min(2, 2) = 2
coin=3: dp[3] = min(2, 1 + dp[0]) = min(2, 1) = 1
dp[3] = 1

i=4: Try coins [1,2,3]
coin=1: dp[4] = min(∞, 1 + dp[3]) = min(∞, 2) = 2
coin=2: dp[4] = min(2, 1 + dp[2]) = min(2, 2) = 2
coin=3: dp[4] = min(2, 1 + dp[1]) = min(2, 2) = 2
dp[4] = 2

i=5: Try coins [1,2,3]
coin=1: dp[5] = min(∞, 1 + dp[4]) = min(∞, 3) = 3
coin=2: dp[5] = min(3, 1 + dp[3]) = min(3, 2) = 2
coin=3: dp[5] = min(2, 1 + dp[2]) = min(2, 2) = 2
dp[5] = 2 ✅

Final dp array: [0, 1, 1, 1, 2, 2]
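
If you want to double-check the trace, here is a small runnable harness (the class name TabulationTest is illustrative) that reuses the bottom-up method above against the earlier test cases:

import java.util.Arrays;

public class TabulationTest {
    // The bottom-up coinChange from above, repeated so this sketch runs on its own.
    static int coinChange(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, Integer.MAX_VALUE);
        dp[0] = 0;
        for (int i = 1; i <= amount; i++) {
            for (int coin : coins) {
                if (coin <= i && dp[i - coin] != Integer.MAX_VALUE) {
                    dp[i] = Math.min(dp[i], 1 + dp[i - coin]);
                }
            }
        }
        return dp[amount] == Integer.MAX_VALUE ? -1 : dp[amount];
    }

    public static void main(String[] args) {
        System.out.println(coinChange(new int[]{1, 2, 3}, 5)); // 2, matches the trace
        System.out.println(coinChange(new int[]{2, 4}, 5));    // -1
        System.out.println(coinChange(new int[]{1, 3, 4}, 6)); // 2, the case where greedy fails
    }
}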


Common Mistakes

❌ Mistake 1: Forgetting the -1 Check

// Wrong: Doesn't check if result is -1
minCoins = Math.min(minCoins, 1 + result);

// Correct: Check first
if (result != -1) {
    minCoins = Math.min(minCoins, 1 + result);
}

❌ Mistake 2: Wrong Base Case for target < 0

// Wrong: Returns 0 for negative
if (target <= 0) return 0;

// Correct: Distinguish between 0 and negative
if (target == 0) return 0;
if (target < 0) return -1;

❌ Mistake 3: Using 0 as "Invalid" Marker

// Wrong: 0 is a valid answer (for target=0)
if (minCoins == 0) return -1;

// Correct: Use Integer.MAX_VALUE as invalid marker
if (minCoins == Integer.MAX_VALUE) return -1;

❌ Mistake 4: Not Handling Impossible Cases in DP

// Wrong: Doesn't check if dp[i-coin] is possible
dp[i] = Math.min(dp[i], 1 + dp[i - coin]);

// Correct: Check for validity
if (dp[i - coin] != Integer.MAX_VALUE) {
    dp[i] = Math.min(dp[i], 1 + dp[i - coin]);
}

❌ Mistake 5: Greedy Assumption

// Wrong: Assuming greedy (always take the largest coin first) works
Arrays.sort(coins); // ascending order
int count = 0;
for (int i = coins.length - 1; i >= 0; i--) { // walk from the largest coin down
    while (target >= coins[i]) {
        target -= coins[i];
        count++;
    }
}

// This fails for coins=[1,3,4], target=6
// Greedy gives: 4+1+1 = 3 coins
// Optimal is: 3+3 = 2 coins

Key Takeaways

What We Learned
  1. Greedy Doesn't Always Work:

    • Always picking the largest coin can miss optimal solutions
    • Example: coins=[1,3,4], target=6 → greedy gives 3, optimal is 2
  2. Recognizing Overlapping Subproblems:

    • To make amount 5, you can:
      • Use coin 1 + solve for 4
      • Use coin 2 + solve for 3
      • Use coin 3 + solve for 2
    • These subproblems (4, 3, 2) get reused!
  3. Recursive Structure:

    minCoins(target) = min of:
    - 1 + minCoins(target - coin1)
    - 1 + minCoins(target - coin2)
    - ...
  4. Base Cases Matter:

    • target == 0 → return 0 (success!)
    • target < 0 → return -1 (impossible)
    • All coins tried, none worked → return -1
  5. Representing Actions in Code:

    • "Using a coin" = reducing target by coin value
    • func(target - coin) represents the subproblem
  6. Two Forms of "Impossible":

    • Overshot (target < 0): This specific path failed
    • No valid path (minCoins == MAX_VALUE): All paths failed
  7. Optimization Path:

    • Unoptimized recursion: O(coins^target) - exponential
    • Memoization: O(coins × target) - polynomial
    • Tabulation: O(coins × target) - polynomial, no recursion overhead
  8. Building Intuition First:

    • Don't memorize the tabulation pattern
    • First, understand the recursive subproblem structure
    • Then, memoization becomes obvious
    • Finally, tabulation follows naturally

Similar Problems

Try these to solidify your understanding:

  1. Coin Change 2 - Count number of ways (not minimum coins)
  2. Perfect Squares - Minimum squares that sum to n
  3. Climbing Stairs - Ways to climb with different step sizes
  4. Combination Sum IV - Count combinations (order matters)

Tags: #algorithms #dynamic-programming #recursion #coin-change #greedy-vs-dp #learning-journey #interview-prep


The Learning Journey

Teacher: So what did we discover today?

Me: That recognizing overlapping subproblems is the real foundation of DP!

I learned:

  • Greedy doesn't always work (the [1,3,4] → 6 example broke my assumption)
  • How to represent actions in code ("using a coin" = reducing target)
  • The importance of seeing subproblems before jumping to tabulation
  • Two types of "impossible" states and how to handle them
  • That during real interviews, I need to think recursively first, not memorize patterns

Teacher: Exactly! And here's the beautiful part:

Once you see this pattern of trying all options and picking the minimum, you'll recognize it in dozens of other problems:

  • Climbing stairs with variable steps
  • Minimum path sum in a grid
  • Edit distance between strings
  • Longest common subsequence

They all follow the same "try all options, pick the best" recursive structure!

The key is: Always start with the recursive thinking. The DP optimizations will follow naturally once you understand the subproblems.


This conversation happened during a real learning session. Sometimes the best breakthroughs come from understanding why greedy fails and how to recognize the recursive structure that leads to dynamic programming solutions!