🎯 LC 746: Min Cost Climbing Stairs - First Principles Breakdown
This document walks you through the authentic discovery process - the confusion, false starts, and breakthrough moments that lead to the solution.
We'll start with the raw problem and build understanding from first principles.
1. Discovery & Problem Understanding
1.1 Problem Fundamentals
What problem are we REALLY solving?
Let me start with the raw confusion you might feel:
- "I can start at index 0 OR 1... what does that mean?"
- "When do I pay the cost? Before stepping or after?"
- "What is 'the top'? Is it the last step or beyond it?"
The Aha Moment: Let's trace an example to eliminate ambiguity
cost = [10, 15, 20]
Indices: 0 1 2
The array has 3 steps, but "the top of the floor" is position 3 (beyond the array).
Think of it like:
[10] → [15] → [20] → [TOP]
0 1 2 3
You pay the cost when you step ON that stair, then you can jump 1 or 2 steps forward.
1.2 The Discovery Process (Chronological)
FIRST THOUGHT: "Can I just be greedy? Pick the cheaper step each time?"
cost = [1, 100, 1, 1, 1, 100, 1, 1, 100, 1]
Greedy says: Start at index 0 (cost=1), jump to index 2 (cost=1), then index 4 (cost=1)...
What if starting at index 1 (cost=100) lets me skip worse costs later?
Greedy fails because future consequences matter.
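To make the failure concrete, here is a sketch of one natural greedy strategy (the class name `GreedyDemo` and the tie-breaking rule are my own illustrative choices, not part of the final solution): start on the cheaper of steps 0 and 1, then always jump toward the cheaper upcoming stair, preferring the 1-step jump on ties.

```java
public class GreedyDemo {
    // One plausible greedy: always step toward the cheaper upcoming stair.
    // Ties prefer the 1-step jump; stepping past the top costs nothing.
    // Assumes cost.length >= 2, as the problem guarantees.
    static int greedyCost(int[] cost) {
        int n = cost.length;
        int i = cost[0] <= cost[1] ? 0 : 1; // start on the cheaper stair
        int total = 0;
        while (i < n) {
            total += cost[i]; // pay when we step ON the stair
            int oneAhead = (i + 1 < n) ? cost[i + 1] : 0;
            int twoAhead = (i + 2 < n) ? cost[i + 2] : 0;
            i += (oneAhead <= twoAhead) ? 1 : 2; // purely local choice
        }
        return total;
    }

    public static void main(String[] args) {
        int[] cost = {1, 100, 1, 1, 1, 100, 1, 1, 100, 1};
        // This greedy pays 7, but the true minimum for this input is 6.
        System.out.println(greedyCost(cost)); // 7
    }
}
```

A different tie-break happens to do better on this input, but that is the point: no fixed local rule can see that paying more now sometimes avoids a 100 later.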
SECOND THOUGHT: "This feels like a decision tree"
"At each step, I choose: pay this cost and jump 1, or pay this cost and jump 2."
Start options: step 0 OR step 1
From step 0: can go to step 1 or step 2
From step 1: can go to step 2 or step 3
...
This is an overlapping subproblem structure.
If I reach step 5 from step 3 or step 4, I'll solve "min cost from step 5 to top" multiple times.
THIRD THOUGHT (Key Insight): "What if I think BACKWARDS?"
Instead of:
"I'm at step X, where can I go?"
Think:
"I want to reach position X, where could I have come from?"
This inversion is the key!
To reach position i, I could have:
- Jumped from position i-1 (and paid cost[i-1])
- Jumped from position i-2 (and paid cost[i-2])
1.3 Prerequisites & Foundations
Prerequisite 1: Optimal Substructure
The minimum cost to reach position i depends only on:
- Min cost to reach i-1 + the cost to step there (cost[i-1])
- Min cost to reach i-2 + the cost to step there (cost[i-2])
This property means: solving smaller subproblems optimally gives the optimal solution to larger problems.
Prerequisite 2: The Boundary Condition Insight
Why is dp[0] = 0 and dp[1] = 0?
This is subtle!
You START at position 0 or 1 without paying.
You pay cost[0] when you step ON it, then jump away.
So:
- To "be at position 0 ready to jump": cost = 0 (you're already there)
- To "be at position 1 ready to jump": cost = 0 (you're already there)
- To reach position 2: you must have stepped on 0 or 1, so you pay their costs.
This is what "you can start from index 0 or 1" really means!
2. Implementation & Analysis
2.1 Recursive State & Code
Why Recursion First?
Recursion mirrors how humans think about the problem naturally:
"To find min cost to reach position i, I need to know min cost to reach i-1 and i-2."
"I'll just ask those subproblems! Recursive calls handle it."
The Recursive State Definition
minCost(i) = minimum cost to reach position i
Base cases:
minCost(0) = 0 // Start here free
minCost(1) = 0 // Or start here free
Recursive case:
minCost(i) = min(
minCost(i-1) + cost[i-1], // Came from step i-1, pay cost[i-1]
minCost(i-2) + cost[i-2] // Came from step i-2, pay cost[i-2]
)
Why cost[i-1] and not cost[i]? Because cost[i-1] is the cost to step on position i-1, and after stepping there, you jump to position i.
The cost array is indexed by the step you're standing on, not the position you're jumping to.
The Unoptimized Recursive Code
class Solution {
    public int minCostClimbingStairs(int[] cost) {
        int n = cost.length;
        // We want to reach position n (top, beyond last step)
        return minCost(n, cost);
    }

    /**
     * Returns: minimum cost to reach position i
     *
     * @param i The target position (can be 0 to n)
     * @param cost The cost array (length n)
     */
    private int minCost(int i, int[] cost) {
        // BASE CASES: starting positions are free
        if (i == 0) return 0;
        if (i == 1) return 0;
        // RECURSIVE CASE: came from i-1 or i-2
        int fromPrev = minCost(i - 1, cost) + cost[i - 1];
        int fromTwoPrev = minCost(i - 2, cost) + cost[i - 2];
        return Math.min(fromPrev, fromTwoPrev);
    }
}
- Main function: Asks "what's the min cost to reach position n (the top)?"
- Base cases: Positions 0 and 1 are free starting points
- Recursive logic:
  - Try coming from i-1: get min cost to reach i-1, add cost of stepping on i-1
  - Try coming from i-2: get min cost to reach i-2, add cost of stepping on i-2
  - Return the cheaper option
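A quick sanity check of the recursive code against the examples used in this document (the driver class `Demo` is my naming; `Solution` is repeated inside the snippet only so it compiles on its own):

```java
// Compact copy of the article's recursive solution, so this snippet is self-contained.
class Solution {
    public int minCostClimbingStairs(int[] cost) {
        return minCost(cost.length, cost); // reach position n, the top
    }

    private int minCost(int i, int[] cost) {
        if (i <= 1) return 0; // positions 0 and 1 are free starting points
        return Math.min(minCost(i - 1, cost) + cost[i - 1],
                        minCost(i - 2, cost) + cost[i - 2]);
    }
}

public class Demo {
    public static void main(String[] args) {
        Solution s = new Solution();
        System.out.println(s.minCostClimbingStairs(new int[]{10, 15, 20})); // 15
        System.out.println(s.minCostClimbingStairs(new int[]{10, 15}));     // 10
        System.out.println(s.minCostClimbingStairs(new int[]{10}));         // 0
    }
}
```

The last two calls are exactly the edge cases traced in the next section: two steps (start at 0, pay 10, jump over the top) and a single step (start at position 1, which is already the top).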
2.2 Edge Case Validation
Edge Case 1: cost = [10, 15]
Positions:
[10] → [15] → [TOP]
0 1 2
To reach position 2:
Option 1: Start at 0, pay 10, jump 2 steps → cost = 10
Option 2: Start at 1, pay 15, jump 1 step → cost = 15
Answer: 10 ✅
The recursion evaluates both starting points naturally.
minCost(2) checks both minCost(1) + cost[1] and minCost(0) + cost[0].
Edge Case 2: cost = [1, 100, 1, 1, 1, 100, 1, 1, 100, 1]
Why does the recursion work here but greedy fails?
At position 5:
- Greedy might force a path through high-cost steps
- Recursion explores all possible paths and picks minimum
Recursion's exhaustive search guarantees optimality (though inefficiently).
It tries every possible combination and naturally finds the best path.
Edge Case 3: Single element cost = [10]
Positions:
[10] → [TOP]
0 1
To reach position 1 (top):
Option 1: Start at 0, pay 10, jump 1 → cost = 10
Option 2: Start at 1 directly → cost = 0
Answer: 0 ✅
minCost(1) = 0 base case handles it.
Starting at position 1 means you're already past the only step!
2.3 Complexity & Redundancy Analysis
Why is this exponential?
Let's trace minCost(5):
minCost(5)
├── minCost(4)
│   ├── minCost(3)
│   │   ├── minCost(2)
│   │   │   ├── minCost(1) ✓
│   │   │   └── minCost(0) ✓
│   │   └── minCost(1) ✓ (RECALCULATED!)
│   └── minCost(2)
│       ├── minCost(1) ✓ (RECALCULATED AGAIN!)
│       └── minCost(0) ✓
└── minCost(3)
    ├── minCost(2) (ENTIRE SUBTREE RECALCULATED!)
    └── minCost(1) ✓
Each call spawns 2 subcalls, leading to O(2^n) time complexity.
The Waste: minCost(2) is calculated multiple times with the same answer every time.
Visualizing the Redundancy
Calls to minCost(3): 2 times
Calls to minCost(2): 3 times
Calls to minCost(1): 5 times
Calls to minCost(0): 3 times
For n=10, we compute minCost(5) eight times, and the duplication compounds at every level.
For n=50, the number of calls grows like the Fibonacci numbers into the tens of billions, so the plain recursion is hopelessly slow in practice.
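The call counts above can be checked with a tiny instrumented copy of the recursion (the class name `CallCounter` and the counter array are illustrative additions, not part of the solution):

```java
public class CallCounter {
    static int[] calls; // calls[i] = how many times minCost(i) is invoked

    // Same recursion as before, plus one counter increment per call.
    static int minCost(int i, int[] cost) {
        calls[i]++;
        if (i <= 1) return 0;
        return Math.min(minCost(i - 1, cost) + cost[i - 1],
                        minCost(i - 2, cost) + cost[i - 2]);
    }

    public static void main(String[] args) {
        int[] cost = {1, 1, 1, 1, 1}; // any 5-element array; counts depend only on n
        calls = new int[cost.length + 1];
        minCost(5, cost);
        // Index:   0  1  2  3  4  5 -- matches the counts listed above.
        System.out.println(java.util.Arrays.toString(calls)); // [3, 5, 3, 2, 1, 1]
    }
}
```

Notice the counts themselves (3, 5, 3, 2, 1, 1) read like a Fibonacci sequence, which foreshadows the Q3 discussion at the end of this document.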
3. Pattern Recognition & Mastery
3.1 Meta-Process & Framework
The DP Recognition Checklist:
- ✅ Decision at each step? (Which step to take next)
- ✅ Optimal substructure? (Best way to reach position i depends on best way to reach i-1, i-2)
- ✅ Overlapping subproblems? (Same positions visited multiple times)
This problem perfectly exhibits all three properties of DP problems!
The DP Problem-Solving Framework:
Step 1: Define the state. What does dp[i] represent?
Step 2: Find the recurrence. How does dp[i] relate to smaller subproblems?
Step 3: Identify base cases. What states can be answered directly?
Step 4: Determine evaluation order. Bottom-up or top-down?
Pattern Recognition Table
| Problem | Decision | Optimal Substructure | Overlapping |
|---|---|---|---|
| Climbing Stairs | 1 or 2 steps | Yes - depends on n-1, n-2 | Yes |
| Min Cost Stairs | Which step to use | Yes - depends on i-1, i-2 | Yes |
| Fibonacci | N/A (pure math) | Yes - depends on n-1, n-2 | Yes |
| House Robber | Rob or skip | Yes - depends on i-1, i-2 | Yes |
All these problems have the same recursive structure!
Once you recognize the pattern, you can apply the same solution approach.
3.2 Optimization Preview & Key Takeaways
The recursive version shows what we're computing. But it's wasteful.
Next: We'll add memoization to cache results (top-down DP), then convert to bottom-up iterative DP.
Before I show the optimized versions, can you identify:
- Which subproblems are recalculated?
- What data structure would eliminate redundant work?
This pause is intentional: you discovering the optimization is deeper learning than me telling you!
Hint 1: What would you cache?
We need to cache the result of minCost(i) for each i.
Since i ranges from 0 to n, we need an array of size n+1.
Hint 2: When to check the cache?
Before computing - check if we've already solved this subproblem.
After computing - store the result for future lookups.
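If you want to check your answer to the hints, here is a minimal sketch of exactly that caching pattern (the next document develops it properly; the name `memo` and the -1 "not computed" sentinel are my own choices, valid here because all real answers are non-negative):

```java
public class MemoSketch {
    static int[] memo; // memo[i] caches minCost(i); -1 means "not computed yet"

    static int minCost(int i, int[] cost) {
        if (i <= 1) return 0;
        if (memo[i] != -1) return memo[i]; // check the cache BEFORE computing
        memo[i] = Math.min(minCost(i - 1, cost) + cost[i - 1],
                           minCost(i - 2, cost) + cost[i - 2]);
        return memo[i];                    // store AFTER computing
    }

    public static void main(String[] args) {
        int[] cost = {1, 100, 1, 1, 1, 100, 1, 1, 100, 1};
        memo = new int[cost.length + 1];   // one slot per position, 0 to n
        java.util.Arrays.fill(memo, -1);
        System.out.println(minCost(cost.length, cost)); // 6, now with O(n) calls
    }
}
```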
Key Takeaways
- ✅ Backwards thinking is powerful - Instead of "where can I go?", ask "where did I come from?"
- ✅ Greedy fails when future matters - Need to consider all paths, not just local optima
- ✅ Base cases represent free starting points - dp[0]=0 and dp[1]=0 encode the problem constraint
- ✅ Recursion reveals structure - The pure recursive solution shows the problem's natural structure
- ✅ Exponential waste needs optimization - Overlapping subproblems cry out for memoization
The problem structure is inherently recursive.
The optimization (DP) comes later, but understanding why the recursion works is the foundation for everything else.
3.3 Self-Assessment & Learning Journey
Connection to Other Concepts
This Problem Teaches:
- State space reduction - We only need position, not the path taken
- Optimal substructure - Local optimal choices lead to global optimum
- Overlapping subproblems - Same states visited from different paths
- Base case design - How to encode problem constraints
Applications of This Pattern:
- Pathfinding with costs - Minimum cost path problems
- Resource optimization - Minimize cost/time with constraints
- Game strategy - Optimal play with decision trees
- Financial planning - Minimize costs over time periods
The Learning Journey
1. Try greedy → Fails ❌
   ↓
2. Recognize it's a decision tree → Getting warmer 🌡️
   ↓
3. Think backwards → Breakthrough! 💡
   ↓
4. Write recursion → Correct but slow ⏰
   ↓
5. Add memoization → Fast! (Next doc)
   ↓
6. Convert to iteration → Optimal! (Next doc)
You now understand why the problem has the structure it does.
The next documents will show you how to optimize this structure, but you've already done the hard part - understanding the problem deeply.
Self-Test Questions
Before moving to optimization, ensure you can answer:
Q1: Why do we need min cost to reach i-1 AND i-2?
Answer
Because from our current position i, we could have arrived from either of these positions!
We need to check both paths and choose the cheaper one. This is the essence of exploring all possibilities.
Q2: What would happen if we only checked i-1?
Answer
We'd miss potentially cheaper paths that skip a step!
Example: cost = [10, 1, 100]
- Only checking i-1 forces us to step on every stair: 10 + 1 + 100 = 111
- Checking both allows skips: start at index 1, pay 1, jump 2 straight to the top → total 1 (far better!)
Q3: Why is the recursion tree shaped like Fibonacci?
Answer
Because the recurrence relation is the same!
minCost(i) = min(minCost(i-1) + x, minCost(i-2) + y)
fibonacci(i) = fibonacci(i-1) + fibonacci(i-2)
Both split into 2 subproblems at each step, creating the same exponential tree structure.
You've Completed First Principles!
You now understand:
- ✅ What the problem is really asking
- ✅ Why greedy fails
- ✅ Why recursion works
- ✅ Where the overlapping subproblems come from
- ✅ Why we need optimization
The optimization journey continues in the next documents:
- Memoization - Add caching to the recursion
- Bottom-Up DP - Eliminate recursion entirely
- Space Optimization - Reduce memory usage
Each builds on this foundation! 🚀