
🎯 LC 746: Min Cost Climbing Stairs - First Principles Breakdown

The Designer's Journey

This document walks you through the authentic discovery process - the confusion, false starts, and breakthrough moments that lead to the solution.

We'll start with the raw problem and build understanding from first principles.


1. Discovery & Problem Understanding​

1.1 Problem Fundamentals​

What problem are we REALLY solving?​

Let me start with the raw confusion you might feel:

Initial Confusion
  • "I can start at index 0 OR 1... what does that mean?"
  • "When do I pay the cost? Before stepping or after?"
  • "What is 'the top'? Is it the last step or beyond it?"

The Aha Moment: Let's trace an example to eliminate ambiguity​

```text
cost:    [10, 15, 20]
indices:   0   1   2
```

The array has 3 steps, but "the top of the floor" is position 3 (beyond the array).

Think of it like:

```text
[10] → [15] → [20] → [TOP]
  0      1      2      3
```
The Key Insight

You pay the cost when you step ON that stair, then you can jump 1 or 2 steps forward.

1.2 The Discovery Process (Chronological)​

FIRST THOUGHT: "Can I just be greedy? Pick the cheaper step each time?"​

cost = [1, 100, 1, 1, 1, 100, 1, 1, 100, 1]

Greedy says: Start at index 0 (cost=1), jump to index 2 (cost=1), then index 4 (cost=1)...

But Wait!

What if starting at index 1 (cost=100) lets me skip worse costs later?

Greedy fails because future consequences matter.
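To make the failure concrete, here is a sketch comparing a myopic greedy against the exhaustive recursion. The tiny array {1, 2, 3} is my own example: greedy's cheap-looking start at index 0 forces a second payment, while the optimum starts at index 1 and jumps straight past the rest.

```java
// A sketch (not from the original problem): myopic greedy vs. exhaustive search.
class GreedyVsOptimal {
    // Greedy: start on the cheaper of steps 0/1, then always land on the
    // cheaper of the next two positions (the top counts as cost 0).
    // Assumes cost.length >= 2, as LC 746 guarantees.
    static int greedy(int[] cost) {
        int n = cost.length;
        int p = (cost[0] <= cost[1]) ? 0 : 1;
        int total = 0;
        while (p < n) {
            total += cost[p];
            int one = (p + 1 < n) ? cost[p + 1] : 0;
            int two = (p + 2 < n) ? cost[p + 2] : 0;
            p += (two <= one) ? 2 : 1; // on ties, take the bigger jump
        }
        return total;
    }

    // Exhaustive recursion: true minimum cost to reach position i.
    static int best(int i, int[] cost) {
        if (i <= 1) return 0;
        return Math.min(best(i - 1, cost) + cost[i - 1],
                        best(i - 2, cost) + cost[i - 2]);
    }

    public static void main(String[] args) {
        int[] cost = {1, 2, 3};
        System.out.println(greedy(cost));            // 3: starts cheap at 0, then must pay again
        System.out.println(best(cost.length, cost)); // 2: start at 1, jump 2, done
    }
}
```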

SECOND THOUGHT: "This feels like a decision tree"​

"At each step, I choose: pay this cost and jump 1, or pay this cost and jump 2."

```text
Start options: step 0 OR step 1
From step 0:   can go to step 1 or step 2
From step 1:   can go to step 2 or step 3
...
```
Discovery

This is an overlapping subproblem structure.

If I reach step 5 from step 3 or step 4, I'll solve "min cost from step 5 to top" multiple times.

THIRD THOUGHT (Key Insight): "What if I think BACKWARDS?"​

Instead of:

"I'm at step X, where can I go?"

Think:

"I want to reach position X, where could I have come from?"

The Breakthrough! 💡

This inversion is the key!

To reach position i, I could have:

  1. Jumped from position i-1 (and paid cost[i-1])
  2. Jumped from position i-2 (and paid cost[i-2])

1.3 Prerequisites & Foundations​

Prerequisite 1: Optimal Substructure​

Property

The minimum cost to reach position i depends only on:

  • Min cost to reach i-1, plus cost[i-1] (the price of stepping on stair i-1)
  • Min cost to reach i-2, plus cost[i-2]

This property means: solving smaller subproblems optimally gives the optimal solution to larger problems.

Prerequisite 2: The Boundary Condition Insight​

Why is dp[0] = 0 and dp[1] = 0?

This is subtle!

The Critical Understanding

You START at position 0 or 1 without paying.

You pay cost[0] when you step ON it, then jump away.

So:

  • To "be at position 0 ready to jump": cost = 0 (you're already there)
  • To "be at position 1 ready to jump": cost = 0 (you're already there)
  • To reach position 2: you must have stepped on 0 or 1, so you pay their costs.

This is the "you can start from index 0 or 1" meaning decoded!


2. Implementation & Analysis​

2.1 Recursive State & Code​

Why Recursion First?​

Recursion mirrors how humans think about the problem naturally:

"To find min cost to reach position i, I need to know min cost to reach i-1 and i-2"

"I'll just ask those subproblems! Recursive calls handle it."

The Recursive State Definition​

minCost(i) = minimum cost to reach position i

Base cases:

```text
minCost(0) = 0  // start here for free
minCost(1) = 0  // or start here for free
```

Recursive case:

```text
minCost(i) = min(
    minCost(i-1) + cost[i-1],  // came from step i-1, pay cost[i-1]
    minCost(i-2) + cost[i-2]   // came from step i-2, pay cost[i-2]
)
```
Why cost[i-1] and not cost[i]?

Because cost[i-1] is the cost to step on position i-1, and after stepping there, you jump to position i.

The cost array is indexed by the step you're standing on, not the position you're jumping to.

The Unoptimized Recursive Code​

```java
class Solution {
    public int minCostClimbingStairs(int[] cost) {
        int n = cost.length;
        // We want to reach position n (the top, one past the last step)
        return minCost(n, cost);
    }

    /**
     * Returns the minimum cost to reach position i.
     *
     * @param i    The target position (can be 0 to n)
     * @param cost The cost array (length n)
     */
    private int minCost(int i, int[] cost) {
        // BASE CASES: starting positions are free
        if (i == 0) return 0;
        if (i == 1) return 0;

        // RECURSIVE CASE: came from i-1 or i-2
        int fromPrev = minCost(i - 1, cost) + cost[i - 1];
        int fromTwoPrev = minCost(i - 2, cost) + cost[i - 2];

        return Math.min(fromPrev, fromTwoPrev);
    }
}
```
Reading the Code
  1. Main function: Asks "what's the min cost to reach position n (the top)?"
  2. Base cases: Positions 0 and 1 are free starting points
  3. Recursive logic:
    • Try coming from i-1: get min cost to reach i-1, add cost of stepping on i-1
    • Try coming from i-2: get min cost to reach i-2, add cost of stepping on i-2
    • Return the cheaper option
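As a quick sanity check, the recursion above can be exercised on the arrays used in this document (restated as static methods so the snippet is self-contained):

```java
// The same recursion as Solution above, restated statically for a quick check.
class RecursiveCheck {
    static int minCost(int i, int[] cost) {
        if (i <= 1) return 0; // positions 0 and 1 are free starting points
        return Math.min(minCost(i - 1, cost) + cost[i - 1],
                        minCost(i - 2, cost) + cost[i - 2]);
    }

    static int minCostClimbingStairs(int[] cost) {
        return minCost(cost.length, cost); // the top is position n
    }

    public static void main(String[] args) {
        System.out.println(minCostClimbingStairs(new int[]{10, 15, 20}));  // 15
        System.out.println(minCostClimbingStairs(
                new int[]{1, 100, 1, 1, 1, 100, 1, 1, 100, 1}));           // 6
        System.out.println(minCostClimbingStairs(new int[]{10, 15}));      // 10 (edge case 1)
        System.out.println(minCostClimbingStairs(new int[]{10}));          // 0  (edge case 3)
    }
}
```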

2.2 Edge Case Validation​

Edge Case 1: cost = [10, 15]​

Positions:

```text
[10] → [15] → [TOP]
  0      1      2
```

To reach position 2:

Option 1: Start at 0, pay 10, jump 2 steps → cost = 10

Option 2: Start at 1, pay 15, jump 1 step → cost = 15

Answer: 10 ✓

Validation

The recursion evaluates both starting points naturally.

minCost(2) checks both minCost(1) + cost[1] and minCost(0) + cost[0].

Edge Case 2: cost = [1, 100, 1, 1, 1, 100, 1, 1, 100, 1]​

Why does the recursion work here but greedy fails?

At position 5:

  • Greedy might force a path through high-cost steps
  • Recursion explores all possible paths and picks minimum
Validation

Recursion's exhaustive search guarantees optimality (though inefficiently).

It tries every possible combination and naturally finds the best path.

Edge Case 3: Single element cost = [10]​

Positions:

```text
[10] → [TOP]
  0      1
```

To reach position 1 (top):

Option 1: Start at 0, pay 10, jump 1 → cost = 10

Option 2: Start at 1 directly → cost = 0

Answer: 0 ✓

Why Does This Work?

minCost(1) = 0 base case handles it.

Starting at position 1 means you're already past the only step!

2.3 Complexity & Redundancy Analysis​

Why is this exponential?​

Let's trace minCost(5):

```text
minCost(5)
├── minCost(4)
│   ├── minCost(3)
│   │   ├── minCost(2)
│   │   │   ├── minCost(1) ✓
│   │   │   └── minCost(0) ✓
│   │   └── minCost(1) ✓  (RECALCULATED!)
│   └── minCost(2)
│       ├── minCost(1) ✓  (RECALCULATED AGAIN!)
│       └── minCost(0) ✓
└── minCost(3)
    ├── minCost(2)        (ENTIRE SUBTREE RECALCULATED!)
    └── minCost(1) ✓
```
The Pattern

Each call spawns 2 subcalls, leading to O(2^n) time complexity.

The Waste: minCost(2) is calculated multiple times with the same answer every time.

Visualizing the Redundancy​

```text
Calls to minCost(3): 2 times
Calls to minCost(2): 3 times
Calls to minCost(1): 5 times
Calls to minCost(0): 3 times
```
The Exponential Growth

For n=10, minCost(5) alone is recomputed 8 times.

For n=50, the call tree contains on the order of ten billion calls, even though there are only 51 distinct subproblems!
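The call counts listed above can be verified by instrumenting the recursion with a counter array (a sketch; the `calls` array is my addition, and the cost values are arbitrary since only the call pattern matters):

```java
// Instrumented recursion: calls[i] records how often minCost(i) is invoked.
class CallCounter {
    static int[] calls;

    static int minCost(int i, int[] cost) {
        calls[i]++;
        if (i <= 1) return 0;
        return Math.min(minCost(i - 1, cost) + cost[i - 1],
                        minCost(i - 2, cost) + cost[i - 2]);
    }

    public static void main(String[] args) {
        int[] cost = {9, 9, 9, 9, 9}; // values are irrelevant to the call pattern
        calls = new int[cost.length + 1];
        minCost(5, cost);
        for (int i = 0; i <= 5; i++) {
            System.out.println("minCost(" + i + ") called " + calls[i] + " times");
        }
        // Positions 3, 2, 1, 0 are called 2, 3, 5, and 3 times respectively,
        // matching the counts listed above.
    }
}
```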


3. Pattern Recognition & Mastery​

3.1 Meta-Process & Framework​

The DP Recognition Checklist:​

  1. ✅ Decision at each step? (Which step to take next)
  2. ✅ Optimal substructure? (Best way to reach position i depends on best way to reach i-1, i-2)
  3. ✅ Overlapping subproblems? (Same positions visited multiple times)
If All Three → Dynamic Programming Candidate

This problem perfectly exhibits all three properties of DP problems!

The DP Problem-Solving Framework:​

Step 1: Define the state

What does dp[i] represent?

Step 2: Find the recurrence

How does dp[i] relate to smaller subproblems?

Step 3: Identify base cases

What states can be answered directly?

Step 4: Determine evaluation order

Bottom-up or top-down?

Pattern Recognition Table​

| Problem | Decision | Optimal Substructure | Overlapping |
|---|---|---|---|
| Climbing Stairs | 1 or 2 steps | Yes - depends on n-1, n-2 | Yes |
| Min Cost Stairs | Which step to use | Yes - depends on i-1, i-2 | Yes |
| Fibonacci | N/A (pure math) | Yes - depends on n-1, n-2 | Yes |
| House Robber | Rob or skip | Yes - depends on i-1, i-2 | Yes |
See The Pattern?

All these problems have the same recursive structure!

Once you recognize the pattern, you can apply the same solution approach.

3.2 Optimization Preview & Key Takeaways​

The recursive version shows what we're computing. But it's wasteful.

Next: We'll add memoization to cache results (top-down DP), then convert to bottom-up iterative DP.

Question for You

Before I show the optimized versions, can you identify:

  1. Which subproblems are recalculated?
  2. What data structure would eliminate redundant work?

This pause is intentional: discovering the optimization yourself teaches more than being told!

Hint 1: What would you cache?
Answer

We need to cache the result of minCost(i) for each i.

Since i ranges from 0 to n, we need an array of size n+1.

Hint 2: When to check the cache?
Answer

Before computing - check if we've already solved this subproblem.

After computing - store the result for future lookups.
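Putting the two hints together, the caching idea can be sketched as follows. This is only a preview; the next document develops the memoized version fully:

```java
import java.util.Arrays;

// Preview sketch of the memoized (top-down DP) version described by the hints.
class MemoSketch {
    static int minCostClimbingStairs(int[] cost) {
        int[] memo = new int[cost.length + 1]; // one slot per position 0..n
        Arrays.fill(memo, -1);                 // -1 marks "not yet computed"
        return minCost(cost.length, cost, memo);
    }

    static int minCost(int i, int[] cost, int[] memo) {
        if (i <= 1) return 0;
        if (memo[i] != -1) return memo[i]; // check the cache BEFORE computing
        memo[i] = Math.min(minCost(i - 1, cost, memo) + cost[i - 1],
                           minCost(i - 2, cost, memo) + cost[i - 2]);
        return memo[i];                    // ...and store AFTER computing
    }

    public static void main(String[] args) {
        System.out.println(minCostClimbingStairs(
                new int[]{1, 100, 1, 1, 1, 100, 1, 1, 100, 1})); // 6
    }
}
```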

Key Takeaways​

  1. ✅ Backwards thinking is powerful - Instead of "where can I go?", ask "where did I come from?"
  2. ✅ Greedy fails when future matters - Need to consider all paths, not just local optima
  3. ✅ Base cases represent free starting points - dp[0]=0 and dp[1]=0 encode the problem constraint
  4. ✅ Recursion reveals structure - The pure recursive solution shows the problem's natural structure
  5. ✅ Exponential waste needs optimization - Overlapping subproblems cry out for memoization
The Core Insight

The problem structure is inherently recursive.

The optimization (DP) comes later, but understanding why the recursion works is the foundation for everything else.

3.3 Self-Assessment & Learning Journey​

Connection to Other Concepts​

This Problem Teaches:

  • State space reduction - We only need position, not the path taken
  • Optimal substructure - Local optimal choices lead to global optimum
  • Overlapping subproblems - Same states visited from different paths
  • Base case design - How to encode problem constraints

Applications of This Pattern:

  • Pathfinding with costs - Minimum cost path problems
  • Resource optimization - Minimize cost/time with constraints
  • Game strategy - Optimal play with decision trees
  • Financial planning - Minimize costs over time periods

The Learning Journey​

```text
1. Try greedy → Fails ❌
        ↓
2. Recognize it's a decision tree → Getting warmer 🌡️
        ↓
3. Think backwards → Breakthrough! 💡
        ↓
4. Write recursion → Correct but slow ⏰
        ↓
5. Add memoization → Fast! (Next doc)
        ↓
6. Convert to iteration → Optimal! (Next doc)
```
You Are Here

You now understand why the problem has the structure it does.

The next documents will show you how to optimize this structure, but you've already done the hard part - understanding the problem deeply.

Self-Test Questions​

Before moving to optimization, ensure you can answer:

Q1: Why do we need min cost to reach i-1 AND i-2?

Answer

Because from our current position i, we could have arrived from either of these positions!

We need to check both paths and choose the cheaper one. This is the essence of exploring all possibilities.

Q2: What would happen if we only checked i-1?

Answer

We'd miss potentially cheaper paths that skip a step!

Example: cost = [10, 1, 100]

  • Only checking i-1: forced onto every stair after the start, best case 1 + 100 = 101 (start at index 1, step on stairs 1 and 2)
  • Checking both: start at index 1, pay 1, jump 2 straight to the top = 1 (far better!)

Q3: Why is the recursion tree shaped like Fibonacci?

Answer

Because the recurrence relation is the same!

```text
minCost(i)   = min(minCost(i-1) + x, minCost(i-2) + y)
fibonacci(i) = fibonacci(i-1) + fibonacci(i-2)
```

Both split into 2 subproblems at each step, creating the same exponential tree structure.

You've Completed First Principles!​

You now understand:

  • ✅ What the problem is really asking
  • ✅ Why greedy fails
  • ✅ Why recursion works
  • ✅ Where the overlapping subproblems come from
  • ✅ Why we need optimization
Ready for the Next Level?

The optimization journey continues in the next documents:

  1. Memoization - Add caching to the recursion
  2. Bottom-Up DP - Eliminate recursion entirely
  3. Space Optimization - Reduce memory usage

Each builds on this foundation! 🚀