
🏠 House Robber - The Discovery Journey

A first-principles exploration of Dynamic Programming, from greedy attempts to optimal solutions


1. Understanding the Problem & the DP Insight

The Raw Problem

You're a robber on a street with houses [2, 7, 9, 3, 1]. Each number is money in that house.

Constraint: Rob two adjacent houses → alarm triggers.

Question: What's the maximum money you can steal?


Why Greedy Fails

Designer's first thought: "Just take the biggest houses!"

  1. Pick 9 (index 2)
  2. Can't take 7 or 3 (adjacent)
  3. Take 2 and 1
  4. Total: 9 + 2 + 1 = 12

But wait... here greedy actually got lucky: 12 is the optimal haul for this street. Try [3, 5, 3] instead: greedy grabs the 5, which blocks both neighbors, for a total of 5. Taking the two 3s gives 6. That's better!

KEY REALIZATION #1

Greedy doesn't work because local optimal ≠ global optimal

The "best choice right now" might block better choices later.
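One way the biggest-first greedy can be written, as a sketch (the class and method names here are ours, not from the original): repeatedly rob the richest house that is still legal. On [3, 5, 3] it grabs the 5, blocking both 3s, and ends with 5 even though the two 3s together are worth 6.

```java
public class GreedyRobber {
    // Greedy sketch: always rob the richest remaining legal house.
    public static int greedyRob(int[] houses) {
        boolean[] blocked = new boolean[houses.length];
        boolean[] robbed = new boolean[houses.length];
        int total = 0;
        while (true) {
            int best = -1;
            for (int i = 0; i < houses.length; i++) {
                if (!blocked[i] && !robbed[i]
                        && (best < 0 || houses[i] > houses[best])) {
                    best = i;
                }
            }
            if (best < 0) break; // nothing legal left to rob
            robbed[best] = true;
            total += houses[best];
            if (best > 0) blocked[best - 1] = true;                 // left neighbor off-limits
            if (best < houses.length - 1) blocked[best + 1] = true; // right neighbor off-limits
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(greedyRob(new int[]{3, 5, 3}));       // prints 5 (optimal is 6)
        System.out.println(greedyRob(new int[]{2, 7, 9, 3, 1})); // prints 12 (greedy gets lucky here)
    }
}
```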


The Recursive Insight

Designer thinks: "At each house, I have a CHOICE..."

Standing at house i (with houses 0 through i available), I can:

  1. Rob it → get money[i] + the best I can do with houses 0..i-2 (can't touch i-1)
  2. Skip it → the best I can do with houses 0..i-1

In math:

max_money(i) = max(
    money[i] + max_money(i-2), // rob this house
    max_money(i-1)             // skip this house
)

Let's trace this on [2, 7, 9, 3, 1]:

max_money(4) = max(
1 + max_money(2),
max_money(3)
)

But to know max_money(3), we need:

max_money(3) = max(
3 + max_money(1),
max_money(2)
)

To know max_money(2), we need:

max_money(2) = max(
9 + max_money(0),
max_money(1)
)

...and so on

KEY REALIZATION #2

This breaks down into OVERLAPPING SUBPROBLEMS

Notice how max_money(2) appears in multiple places? That's the DP opportunity!


Bellman's Core Insight: "Optimal Substructure"

The Principle of Optimality (Bellman, 1957):

"An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."

Translation to human:

If you know the optimal solution from position i, and you want the optimal solution from position i+1, you don't need to reconsider everything before position i.

The best path to the end from i+1 can be built using the best path from i.


Let's Break This Down With House Robber

Question: What's the max money from house 0 to house 4?

Bellman's approach:

  1. If I knew the max money from houses 0→3, and
  2. If I knew the max money from houses 0→2,
  3. Then I could instantly compute max from 0→4:

max(0→4) = max(
    money[4] + max(0→2), // rob house 4
    max(0→3)             // skip house 4
)

KEY REALIZATION #3

The solution to a problem can be constructed from solutions to smaller subproblems

This is optimal substructure - the foundation of DP.


Why Pure Recursion is Too Slow

The naive recursive approach recalculates the same subproblems exponentially many times:

                     rob(4)
                    /      \
               rob(3)       rob(2)
              /     \       /    \
         rob(2)   rob(1) rob(1) rob(0)
         /    \
    rob(1)  rob(0)    ...

See how rob(2) appears multiple times? And rob(1)?

For array size n, this branches exponentially: O(2^n) calls!


The DP Solution: "Just Remember What You Calculated"

The insight: If rob(2) is being calculated multiple times, and it always gives the same answer, why recalculate?

Store it.

This is memoization - or we can solve bottom-up.

Time complexity drops from O(2^n) to O(n).


2. Three Approaches to Solution

Approach 1: Pure Recursion (Top-Down, No Memory)

public int rob(int[] houses, int i) {
    // Base cases
    if (i < 0) {
        return 0;
    }
    if (i == 0) {
        return houses[0];
    }

    // Option 1: Rob this house + max from i-2
    int robCurrent = houses[i] + rob(houses, i - 2);

    // Option 2: Skip this house, take max from i-1
    int skipCurrent = rob(houses, i - 1);

    // Return the better option
    return Math.max(robCurrent, skipCurrent);
}

// How to call: rob(houses, houses.length - 1);

Characteristics:

  • Starts from the END (house 4) and works backward
  • Recalculates the same subproblems multiple times
  • Time complexity: O(2^n) - Exponential!
  • This is NOT DP - it's just recursion

Let's Trace This on [2, 7, 9, 3, 1]

Call: rob(houses, 4)

i = 4
robCurrent = houses[4] + rob(houses, 2) // 1 + rob(houses, 2)
// Need to calculate rob(houses, 2) first!
skipCurrent = rob(houses, 3) // Need to calculate rob(houses, 3) first!

This recursion tree shows the problem:

rob(4)
├── rob(2)
│   ├── rob(0) = 2
│   └── rob(1)
│       ├── rob(-1) = 0
│       └── rob(0) = 2   ← CALCULATED AGAIN!
│
└── rob(3)
    ├── rob(1)           ← CALCULATED AGAIN!
    │   ├── rob(-1) = 0
    │   └── rob(0) = 2   ← CALCULATED AGAIN!
    └── rob(2)           ← CALCULATED AGAIN!
        ├── rob(0) = 2   ← CALCULATED AGAIN!
        └── rob(1)       ← CALCULATED AGAIN!
  • rob(0) is calculated 5 times!
  • rob(1) is calculated 3 times!
  • rob(2) is calculated 2 times!

For houses.length = 30, you'd calculate rob(0) over a million times!
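These counts can be verified with a small instrumented sketch of the pure-recursion code (the calls counter array is our addition for illustration):

```java
public class CallCounter {
    static int[] calls; // calls[i] = how many times rob(i) ran

    static int rob(int[] houses, int i) {
        if (i < 0) return 0;
        calls[i]++; // record this call before computing anything
        if (i == 0) return houses[0];
        return Math.max(houses[i] + rob(houses, i - 2), rob(houses, i - 1));
    }

    public static void main(String[] args) {
        int[] houses = {2, 7, 9, 3, 1};
        calls = new int[houses.length];
        rob(houses, houses.length - 1);
        // Prints: rob(0) ran 5 times, rob(1) 3 times, rob(2) 2 times,
        //         rob(3) 1 time, rob(4) 1 time — 12 calls in total.
        for (int i = 0; i < calls.length; i++) {
            System.out.println("rob(" + i + ") calculated " + calls[i] + " times");
        }
    }
}
```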


Approach 2: Memoization (Top-Down DP)

The Core Concept

Memoization = Recursion + Memory

That's it. That's the whole concept.

public int robWithMemo(int[] houses, int i, Map<Integer, Integer> memo) {
    // Base cases
    if (i < 0) {
        return 0;
    }
    if (i == 0) {
        return houses[0];
    }

    // CHECK: Have we already solved this?
    if (memo.containsKey(i)) {
        return memo.get(i); // Just return it, don't recalculate!
    }

    // Option 1: Rob this house + max from i-2
    int robCurrent = houses[i] + robWithMemo(houses, i - 2, memo);

    // Option 2: Skip this house, take max from i-1
    int skipCurrent = robWithMemo(houses, i - 1, memo);

    // Calculate the result
    int result = Math.max(robCurrent, skipCurrent);

    // STORE it for future use
    memo.put(i, result);

    // Return it
    return result;
}

// How to call:
// Map<Integer, Integer> memo = new HashMap<>();
// int result = robWithMemo(houses, houses.length - 1, memo);

Characteristics:

  • Still recursive (top-down)
  • But now REMEMBERS what it calculated
  • Time complexity: O(n) - Each subproblem solved exactly once!
  • This IS DP - specifically "top-down DP"

The ONLY Difference from Pure Recursion

Key Point

Two lines of code turn recursion into DP:

  1. Check if result exists in memo
  2. Store result in memo before returning

That's literally it.

Pure Recursion:

public int rob(int[] houses, int i) {
    if (i < 0) return 0;
    if (i == 0) return houses[0];

    int robCurrent = houses[i] + rob(houses, i - 2);
    int skipCurrent = rob(houses, i - 1);
    return Math.max(robCurrent, skipCurrent);
}

Memoization (Recursion + Memo):

public int robWithMemo(int[] houses, int i, Map<Integer, Integer> memo) {
    if (i < 0) return 0;
    if (i == 0) return houses[0];

    if (memo.containsKey(i)) return memo.get(i); // ← ADDED THIS

    int robCurrent = houses[i] + robWithMemo(houses, i - 2, memo);
    int skipCurrent = robWithMemo(houses, i - 1, memo);

    int result = Math.max(robCurrent, skipCurrent);
    memo.put(i, result); // ← ADDED THIS

    return result;
}

Approach 3: Tabulation (Bottom-Up DP) - The "Pure DP"

Designer realizes: "If I'm going to solve all subproblems anyway, why use recursion? Why not solve them in ORDER?"

public int robDP(int[] houses) {
    if (houses.length == 0) return 0;
    if (houses.length == 1) return houses[0];

    // Create a table
    int[] dp = new int[houses.length];

    // Base cases
    dp[0] = houses[0];
    dp[1] = Math.max(houses[0], houses[1]);

    // Fill table from BOTTOM to TOP (left to right)
    for (int i = 2; i < houses.length; i++) {
        dp[i] = Math.max(
            houses[i] + dp[i - 2], // rob this house
            dp[i - 1]              // skip this house
        );
    }

    return dp[houses.length - 1];
}

Characteristics:

  • NO recursion at all - just a loop
  • Starts from the BEGINNING (house 0) and works forward
  • Fills a table systematically
  • Time complexity: O(n)
  • This IS DP - specifically "bottom-up DP"
  • This is what Bellman originally described!

The Bottom-Up Trace on [2, 7, 9, 3, 1]

houses = [2, 7, 9, 3, 1]

# Start small:
dp[0] = 2 # only house 0, rob it
dp[1] = max(2, 7) = 7 # either house 0 or 1

# Build up:
dp[2] = max(9+dp[0], dp[1]) = max(9+2, 7) = 11
dp[3] = max(3+dp[1], dp[2]) = max(3+7, 11) = 11
dp[4] = max(1+dp[2], dp[3]) = max(1+11, 11) = 12

KEY REALIZATION #4

You can solve subproblems in a specific order (usually smallest to largest) and build up to the answer

This is tabulation - the table-based DP.
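The hand trace above can be checked by printing the table as it fills. Here is a minimal driver around the table-filling loop from Approach 3 (the class and method names are ours):

```java
import java.util.Arrays;

public class TraceDP {
    // Builds the dp table for a street of houses.
    // Assumes at least two houses; the full robDP in Approach 3 handles 0 and 1.
    public static int[] buildTable(int[] houses) {
        int[] dp = new int[houses.length];
        dp[0] = houses[0];
        dp[1] = Math.max(houses[0], houses[1]);
        for (int i = 2; i < houses.length; i++) {
            dp[i] = Math.max(houses[i] + dp[i - 2], dp[i - 1]);
        }
        return dp;
    }

    public static void main(String[] args) {
        // Expect [2, 7, 11, 11, 12], matching the hand trace.
        System.out.println(Arrays.toString(buildTable(new int[]{2, 7, 9, 3, 1})));
    }
}
```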


Comparing the Three Approaches

Aspect             | Pure Recursion          | Memoization (Top-Down DP)    | Tabulation (Bottom-Up DP)
-------------------|-------------------------|------------------------------|--------------------------
Direction          | Top-down (end to start) | Top-down with caching        | Bottom-up (start to end)
Implementation     | Recursive function      | Recursive function + HashMap | Iterative loop + array
Time Complexity    | O(2^n)                  | O(n)                         | O(n)
Space Complexity   | O(n) call stack         | O(n) memo + O(n) stack       | O(n) array only
When to use        | Never (too slow)        | When recursion feels natural | When solving all subproblems anyway
Bellman's original |                         |                              | ✅ This is "classic DP"

Work Comparison on [2, 7, 9, 3, 1]

Pure Recursion:

  • rob(0) calculated: 5 times
  • rob(1) calculated: 3 times
  • rob(2) calculated: 2 times
  • rob(3) calculated: 1 time
  • rob(4) calculated: 1 time
  • Total function calls: 12 (not counting the trivial i < 0 base-case calls)

Memoization:

  • rob(0) calculated: 1 time (then reused)
  • rob(1) calculated: 1 time (then reused)
  • rob(2) calculated: 1 time (then reused)
  • rob(3) calculated: 1 time (then reused)
  • rob(4) calculated: 1 time
  • Total function calls: 5

Tabulation:

  • dp[0] calculated: 1 time
  • dp[1] calculated: 1 time
  • dp[2] calculated: 1 time
  • dp[3] calculated: 1 time
  • dp[4] calculated: 1 time
  • Total iterations: 5

3. Complete Implementation & Key Insights

Edge Cases as Concept Validators

Let's test our understanding with edge cases:

Edge Case 1: [5] (one house)

What should happen? Rob it, get 5

Does DP handle this? Yes, base case: dp[0] = houses[0]


Edge Case 2: [5, 1] (two houses)

What should happen? Rob house 0, get 5

Does DP handle this? dp[1] = max(5, 1) = 5


Edge Case 3: [5, 10, 6]

Naive thinking: Rob the middle house (10)

Correct: Rob first + last (5+6=11)

DP: dp[2] = max(6+dp[0], dp[1]) = max(6+5, 10) = 11

Why it works: The recurrence automatically considers both options


Edge Case 4: [2, 1, 1, 2]

Tempting: 2 + 1 + 2 = 5 by robbing houses 0, 2, and 3. But houses 2 and 3 are adjacent, so that plan triggers the alarm!

Optimal: 2 + 2 = 4 (rob houses 0 and 3)

Wait, can you rob house 0 and 3? YES! They're not adjacent

DP finds this: dp[3] = max(2+dp[1], dp[2]) = max(2+2, 3) = 4 ✅


Complete Code Reference

// ========================================
// APPROACH 1: Pure Recursion (Slow - O(2^n))
// ========================================
public int rob(int[] houses, int i) {
    if (i < 0) return 0;
    if (i == 0) return houses[0];

    int robCurrent = houses[i] + rob(houses, i - 2);
    int skipCurrent = rob(houses, i - 1);
    return Math.max(robCurrent, skipCurrent);
}


// ========================================
// APPROACH 2: Top-Down DP (Memoization - O(n))
// ========================================
public int robMemo(int[] houses, int i, Map<Integer, Integer> memo) {
    if (i < 0) return 0;
    if (i == 0) return houses[0];
    if (memo.containsKey(i)) return memo.get(i);

    int robCurrent = houses[i] + robMemo(houses, i - 2, memo);
    int skipCurrent = robMemo(houses, i - 1, memo);
    int result = Math.max(robCurrent, skipCurrent);

    memo.put(i, result);
    return result;
}


// ========================================
// APPROACH 3: Bottom-Up DP (Tabulation - O(n))
// ========================================
public int robDP(int[] houses) {
    if (houses.length == 0) return 0;
    if (houses.length == 1) return houses[0];

    int[] dp = new int[houses.length];
    dp[0] = houses[0];
    dp[1] = Math.max(houses[0], houses[1]);

    for (int i = 2; i < houses.length; i++) {
        dp[i] = Math.max(houses[i] + dp[i - 2], dp[i - 1]);
    }

    return dp[houses.length - 1];
}


// ========================================
// SPACE-OPTIMIZED: O(1) Space
// ========================================
public int robOptimized(int[] houses) {
    if (houses.length == 0) return 0;
    if (houses.length == 1) return houses[0];

    int prev2 = houses[0];
    int prev1 = Math.max(houses[0], houses[1]);

    for (int i = 2; i < houses.length; i++) {
        int current = Math.max(houses[i] + prev2, prev1);
        prev2 = prev1;
        prev1 = current;
    }

    return prev1;
}
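All four versions should agree on every input. A quick cross-check harness (our own sketch, compacting the methods above into one class) runs them side by side on a few streets, including the edge cases:

```java
import java.util.HashMap;
import java.util.Map;

public class CrossCheck {
    static int rob(int[] h, int i) {
        if (i < 0) return 0;
        if (i == 0) return h[0];
        return Math.max(h[i] + rob(h, i - 2), rob(h, i - 1));
    }

    static int robMemo(int[] h, int i, Map<Integer, Integer> memo) {
        if (i < 0) return 0;
        if (i == 0) return h[0];
        if (memo.containsKey(i)) return memo.get(i);
        int result = Math.max(h[i] + robMemo(h, i - 2, memo), robMemo(h, i - 1, memo));
        memo.put(i, result);
        return result;
    }

    static int robDP(int[] h) {
        if (h.length == 0) return 0;
        if (h.length == 1) return h[0];
        int[] dp = new int[h.length];
        dp[0] = h[0];
        dp[1] = Math.max(h[0], h[1]);
        for (int i = 2; i < h.length; i++) dp[i] = Math.max(h[i] + dp[i - 2], dp[i - 1]);
        return dp[h.length - 1];
    }

    static int robOptimized(int[] h) {
        if (h.length == 0) return 0;
        if (h.length == 1) return h[0];
        int prev2 = h[0], prev1 = Math.max(h[0], h[1]);
        for (int i = 2; i < h.length; i++) {
            int current = Math.max(h[i] + prev2, prev1);
            prev2 = prev1;
            prev1 = current;
        }
        return prev1;
    }

    public static void main(String[] args) {
        int[][] streets = {{2, 7, 9, 3, 1}, {5}, {5, 1}, {2, 1, 1, 2}};
        for (int[] h : streets) {
            int expected = robDP(h);
            // All four approaches must agree on each input.
            if (rob(h, h.length - 1) != expected) throw new AssertionError();
            if (robMemo(h, h.length - 1, new HashMap<>()) != expected) throw new AssertionError();
            if (robOptimized(h) != expected) throw new AssertionError();
            System.out.println(java.util.Arrays.toString(h) + " -> " + expected);
        }
    }
}
```

Pure recursion is only safe in this harness because the test streets are tiny; on long streets it would dominate the runtime.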

Bellman's Mental Checklist for DP

When approaching a new problem, ask:

Step 1: Can this be broken into decisions at stages?

  • House robber: At each house, decide rob/skip ✅
  • Fibonacci: At each position, sum of previous two ✅
  • Knapsack: At each item, decide include/exclude ✅

Step 2: Does each decision affect future options?

  • House robber: Robbing house i means can't rob i+1 ✅
  • Stock prices: Buying stock means can't buy again until sell ✅

Step 3: Can you write the solution as a recurrence relation?

f(i) = some_function(f(i-1), f(i-2), ..., current_choice)

For House Robber: dp[i] = max(dp[i-1], dp[i-2] + houses[i])

Step 4: Are subproblems reused?

  • If yes → DP is worth it ✅
  • If no → regular recursion is fine

Step 5: What's the order of dependencies?

  • House robber: dp[i] depends on dp[i-1] and dp[i-2]
  • So solve left to right

Why "Dynamic Programming"? (The Name's Origin)

Fun Fact

Bellman chose the name partly to shield his work from bosses in Washington who were hostile to words like "mathematics" and "research."

His own words:

"I spent the Fall quarter (of 1950) at RAND. My first task was to find a name... I wanted to get across the idea that this was dynamic... multistage... I thought dynamic programming was a good name. It was something not even a Congressman could object to."

The word "programming" here means "planning" (like a military program), not coding!

"Dynamic" refers to the time-varying aspect (multi-stage decisions).


Key Takeaways

Core Insights

Dynamic Programming isn't magic - it's recognizing that:

  1. Problems can be broken into stages with choices
  2. Each choice affects future options
  3. Optimal solutions can be built from optimal subproblems
  4. We keep recalculating the same things

Two ways to implement DP:

  1. Top-down (Memoization): Recursion + cache = 2 lines of code
  2. Bottom-up (Tabulation): Loop + table = Bellman's original

The meta-skill: Pattern recognition - when you see overlapping subproblems and optimal substructure, think DP.


What Experienced Engineers See Instantly

  1. Problem has stages/positions
  2. Each stage has choices
  3. Choices affect future stages
  4. Want optimal cumulative result
  5. Same subproblems get recalculated ← This is the trigger

Once you see these patterns, your brain automatically thinks: "This needs DP."


The Journey We Took

  1. Started with greedy (failed)
  2. Discovered recursive structure
  3. Realized overlapping subproblems
  4. Added memoization (top-down DP)
  5. Converted to tabulation (bottom-up DP)
  6. Understood Bellman's original thinking

This is the messy discovery journey - not the polished solution you find in textbooks.


Final Thought

The Core Philosophy

Dynamic Programming is not about memorizing patterns. It's about seeing the structure of decisions over time, recognizing when you're solving the same subproblems repeatedly, and being clever enough to remember the answers.

Once you internalize this way of thinking, DP problems stop being mysterious and start being obvious.

You now think like an engineer who understands DP from first principles. 🎯