⚖️ Partition Equal Subset Sum: The Designer's Discovery Journey
From vague partitioning to concrete subset sum - how problem transformation unlocks the solution
Partition Equal Subset Sum
Given an integer array nums, return true if you can partition the array into two subsets such that the sum of elements in both subsets is equal.
Example:
Input: nums = [1, 5, 11, 5]
Output: true
Explanation: Can partition into [1, 5, 5] and [11]
Both subsets sum to 11
Constraints:
- 1 ≤ nums.length ≤ 200
- 1 ≤ nums[i] ≤ 100
Section 1: Discovery & Problem Understanding
1.1 Problem Analysis & Initial Thinking
The Designer's Discovery Journey
Let me show you how a designer would stumble toward the key insight by playing with examples and making messy realizations.
The First Messy Realization: What is this problem actually asking?
When you first read "partition into two equal subsets," your brain might jump to complicated partitioning algorithms. Arrays, splits, combinations... it feels overwhelming.
But let me show you how the designer simplifies by playing with the example:
Given: [1, 5, 11, 5]
The designer calculates: Total sum = 22
Now the crucial question: "If I split this into two equal subsets, what must each subset sum to?"
Answer: 22 ÷ 2 = 11
Suddenly the problem transforms:
Instead of thinking about "two equal subsets," you're thinking:
"Can I find a subset that sums to half the total?"
This realization is huge because it converts a vague partitioning problem into something concrete: the subset sum problem.
Why does this work?
Once you find one subset that sums to 11, the remaining elements automatically form the other subset with sum 22 - 11 = 11. You only need to solve for one subset, not worry about both simultaneously!
The designer's breakthrough: You don't need to track both subsets. Find one that equals half, and the other takes care of itself!
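The arithmetic behind this breakthrough is tiny. Here is a minimal sketch (the class and method names are illustrative, not part of the solutions developed later):

```java
// Illustrative sketch of the problem transformation.
public class TransformSketch {

    // Returns the subset-sum target (half the total),
    // or -1 if the total is odd and no equal partition can exist.
    public static int halfTarget(int[] nums) {
        int total = 0;
        for (int num : nums) {
            total += num;
        }
        if (total % 2 != 0) {
            return -1; // odd total: impossible to split equally
        }
        return total / 2; // find one subset summing to this value
    }

    public static void main(String[] args) {
        System.out.println(halfTarget(new int[]{1, 5, 11, 5})); // 11
        System.out.println(halfTarget(new int[]{1, 2, 4}));     // -1 (sum 7 is odd)
    }
}
```

Once `halfTarget` returns a non-negative value, the remaining work is a standard subset-sum search for that value.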
1.2 Thought Process & Strategy
The Second Insight: When is it impossible from the start?
Before even attempting a solution, the designer asks:
"When can I immediately say 'no'?"
If the total sum is odd, you cannot possibly split it into two equal integer parts.
Example: nums = [1, 2, 3] → sum = 6 → target = 3 ✓ Possible
Example: nums = [1, 2, 4] → sum = 7 → ❌ Cannot divide 7 equally!
This gives you an instant early-exit condition.
The designer learns to always look for these mathematical impossibilities first because they save enormous amounts of computation.
The Third Realization: This feels familiar...
Now you have a clearer problem:
Given: An array of numbers and a target sum (half the total)
Question: Can you select a subset that hits exactly that target?
The designer's brain pattern-matches:
This is structurally identical to the 0/1 knapsack problem!
The twist:
- In knapsack: weights and values are different
- Here: each number serves as both its weight and its value
The question changes:
- Knapsack: "What's the maximum value?"
- This problem: "Can I achieve exactly this sum?"
The recursive structure emerges the same way as knapsack:
For each number in the array, you face two futures:
- Include this number → try to reach (target - nums[i]) with remaining numbers
- Exclude this number → try to reach target with remaining numbers
If either future succeeds, you've found your partition!
The Fourth Insight: We're asking a yes/no question, not maximizing
The designer recognizes a crucial difference:
Classic Knapsack:
- Question: "What's the maximum value I can achieve?"
- DP stores: integers (maximum values)
Partition Equal Subset Sum:
- Question: "Can I achieve exactly this sum?"
- DP stores: booleans (is this sum achievable?)
The data type change reflects the change in what you're computing!
1.3 Prerequisites & Core Insights
The Prerequisite Insight You Might Be Missing
What makes the transformation "two subsets → one subset" valid?
Key property: Array elements partition the total sum completely.
If total sum is S and you find a subset with sum S/2:
- Your subset: sums to S/2
- Remaining elements: sum to S - S/2 = S/2
- No overlap, no elements left out!
This seems obvious once stated, but it's the foundation that makes the entire approach work.
Why doesn't order matter?
Another prerequisite: Order of elements doesn't matter for subset sum problems.
Whether you consider [1, 5, 11, 5] or [11, 5, 1, 5], the possible subset sums are identical.
This property allows you to think about the problem as sequential decisions (consider each element once) rather than combinatorial selection (which subsets exist?).
The Coder's Brain: Translating to Implementation
What defines each subproblem?
Following the same reasoning as knapsack, the coder identifies two parameters:
- Which numbers are we still considering? → index i (numbers from i onwards)
- What target sum are we trying to achieve? → sum
dp[i][sum] = true if we can achieve exactly sum
using nums[i] onwards, false otherwise
Why does the data structure need these exact dimensions?
- Index i: ranges from 0 to n (need base case when all numbers considered)
- Sum: ranges from 0 to target (all possible remaining targets)
This gives a table of size (n+1) × (target+1)
2D boolean array because:
- Each cell answers one yes/no question
- Two dimensions vary independently
- Bounded ranges make array more efficient than dictionary
What are the base cases, and why?
Base Case 1: All numbers considered (i == n)
- If remaining target is 0 → ✓ true (successfully built the subset!)
- If remaining target is > 0 → ❌ false (failed to hit target)
Base Case 2: Target becomes 0 early
- If remaining target is 0 at any point → ✓ true (found valid subset!)
- Provides an early exit optimization
This reveals a critical initialization: we can always achieve sum 0 by taking nothing, so dp[i][0] = true for all i.
What's the recurrence relation?
For each number at index i, you have two choices:
Choice 1: Skip this number
- Subproblem: Can you achieve the same target with remaining numbers?
- Expression: dp[i+1][sum]
Choice 2: Include this number
- Subproblem: Can you achieve reduced target (sum - nums[i]) with remaining numbers?
- Expression: dp[i+1][sum - nums[i]]
- Boundary: Only valid if nums[i] ≤ sum
The final recurrence:
dp[i][sum] = skip || (nums[i] <= sum && take)
This is a logical OR operation - you only need one way to succeed!
You can only include nums[i] if it doesn't exceed your current target; otherwise, including it would make the target negative, which is meaningless, so your only option is to skip.
Section 2: Implementation & Deep Understanding
2.1 Solution Implementation
This section presents three progressively optimized solutions that represent the natural evolution of thought from brute force to dynamic programming.
Approach 1: Recursive Unoptimized (Brute Force)
This is the first thing that comes to mind - a direct translation of the subset sum insight.
public class PartitionRecursive {
public boolean canPartition(int[] nums) {
int totalSum = 0;
for (int num : nums) {
totalSum += num;
}
// Early exit: odd sum cannot be split equally
if (totalSum % 2 != 0) {
return false;
}
int target = totalSum / 2;
return canSum(nums, 0, target);
}
private boolean canSum(int[] nums, int i, int s) {
// Base case 1: Found valid subset
if (s == 0) {
return true;
}
// Base case 2: No elements left or target became negative
if (i >= nums.length || s < 0) {
return false;
}
// Option 1: Include current number
boolean include = canSum(nums, i + 1, s - nums[i]);
// Option 2: Exclude current number
boolean exclude = canSum(nums, i + 1, s);
// Succeed if either option works
return include || exclude;
}
}
Problem: We solve the same subproblems repeatedly!
Example with 4 numbers [1, 2, 3, 4], target = 5:
- Path 1: exclude 1, include 2 → subproblem: can we make 3 from [3, 4]?
- Path 2: include 1, exclude 2, include 3 → subproblem: can we make 1 from [4]?
- Path 3: exclude 1, exclude 2 → subproblem: can we make 5 from [3, 4]?
If we explore all paths, we might solve "can we make X from [3, 4]" multiple times.
Result: Exponential time complexity O(2^n) - too slow for n > 20
Approach 2: Recursive Optimized (Top-Down with Memoization)
The critical optimization: Remember solutions to avoid recomputing them.
public class PartitionMemoized {
private Boolean[][] memo;
public boolean canPartition(int[] nums) {
int totalSum = 0;
for (int num : nums) {
totalSum += num;
}
// Early exit: odd sum cannot be split equally
if (totalSum % 2 != 0) {
return false;
}
int target = totalSum / 2;
int n = nums.length;
// Initialize memoization table
// Use Boolean (object) to distinguish uncomputed (null) from false
memo = new Boolean[n][target + 1];
return canSum(nums, 0, target);
}
private boolean canSum(int[] nums, int i, int s) {
// Base case 1: Found valid subset
if (s == 0) {
return true;
}
// Base case 2: No elements left or target became negative
if (i >= nums.length || s < 0) {
return false;
}
// Check memoization
if (memo[i][s] != null) {
return memo[i][s];
}
// Option 1: Include current number
boolean include = canSum(nums, i + 1, s - nums[i]);
// Option 2: Exclude current number
boolean exclude = canSum(nums, i + 1, s);
// Store and return result
memo[i][s] = include || exclude;
return memo[i][s];
}
}
What changed?
- Added a memoization structure (a Boolean[][] table; the object type Boolean distinguishes uncomputed null from a computed false)
- Before computing, check if we've seen this subproblem
- After computing, store the result
Impact:
- Time complexity: O(2^n) → O(n × sum)
- Each unique subproblem is solved exactly once
- Subsequent calls return cached results in O(1)
Why this works:
- Parameter i ranges from 0 to n-1 (n values)
- Parameter s ranges from 0 to target (target+1 values)
- Total unique subproblems: n × (target+1)
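The gap between the two approaches can be seen concretely by instrumenting the recursion with a call counter (a hypothetical experiment, not part of the solutions above). On an unsolvable all-2s instance, the uncached version explores the full decision tree while the memoized version stays in the hundreds of calls:

```java
// Hypothetical experiment: count recursive calls with and without memoization.
public class CallCounter {
    static long calls;
    static Boolean[][] memo;

    static boolean canSum(int[] nums, int i, int s, boolean useMemo) {
        calls++;
        if (s == 0) return true;
        if (i >= nums.length || s < 0) return false;
        if (useMemo && memo[i][s] != null) return memo[i][s];
        boolean result = canSum(nums, i + 1, s - nums[i], useMemo)  // include
                      || canSum(nums, i + 1, s, useMemo);           // exclude
        if (useMemo) memo[i][s] = result;
        return result;
    }

    static long count(int[] nums, int target, boolean useMemo) {
        calls = 0;
        memo = new Boolean[nums.length][target + 1];
        canSum(nums, 0, target, useMemo);
        return calls;
    }

    public static void main(String[] args) {
        int[] twos = new int[20];
        java.util.Arrays.fill(twos, 2);
        // Target 39 is odd, hence unreachable with 2s: the whole tree is explored.
        System.out.println("brute force calls: " + count(twos, 39, false));
        System.out.println("memoized calls:    " + count(twos, 39, true));
    }
}
```

The brute-force count grows exponentially in the array length, while the memoized count is bounded by the number of distinct (i, s) states.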
Approach 3: Bottom-Up Dynamic Programming (Iterative)
The final evolution: Eliminate recursion entirely by computing iteratively.
public class PartitionBottomUp {
public boolean canPartition(int[] nums) {
int totalSum = 0;
for (int num : nums) {
totalSum += num;
}
// Mathematical impossibility: odd sum cannot be split equally
if (totalSum % 2 != 0) {
return false;
}
int target = totalSum / 2;
int n = nums.length;
// dp[i][s] = can we achieve sum s using nums from index i to end?
boolean[][] dp = new boolean[n + 1][target + 1];
// Base case: We can always achieve sum 0 by selecting nothing
for (int i = 0; i <= n; i++) {
dp[i][0] = true;
}
// Fill table backwards because dp[i] depends on dp[i+1]
for (int i = n - 1; i >= 0; i--) {
for (int s = 1; s <= target; s++) {
// Option 1: Skip nums[i]
boolean skip = dp[i + 1][s];
// Option 2: Include nums[i] (only if valid)
boolean take = false;
if (nums[i] <= s) {
take = dp[i + 1][s - nums[i]];
}
// We succeed if either option works
dp[i][s] = skip || take;
}
}
// Answer: Can we achieve target using all elements?
return dp[0][target];
}
}
What changed?
- Replaced recursion with iteration
- No call stack overhead
- More predictable memory access patterns (cache-friendly)
- Easier to optimize space (can use only 2 rows or even 1 row)
Same time complexity O(n × sum), but:
- No recursion overhead
- Better cache performance
- Easier to optimize further
Complexity Comparison
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Recursive Brute Force | O(2^n) | O(n) | ❌ Never in production - educational only |
| Top-Down Memoization | O(n × sum) | O(n × sum) + O(n) | ✓ When recursion is more intuitive |
| Bottom-Up Tabulation | O(n × sum) | O(n × sum) | ✓ Production code - most efficient |
2.2 Key Concepts & Mechanics
Understanding the Key Transformation
Can you visualize what dp[i + 1][s - nums[i]] represents?
- You've decided to include nums[i]
- This "consumes" nums[i] from your target
- Remaining problem: achieve (s - nums[i]) with subsequent elements
- i+1 represents moving to the next element
This is the mechanical representation of the designer's insight about "reduced subproblems"!
Why Boolean OR Instead of Max?
Question: Why do we use skip || take instead of Math.max(skip, take)?
Answer: The question type determines the operation!
Knapsack (optimization):
- Question: "What's the maximum value?"
- Operation: Math.max(skip, take)
- Type: Integer comparison
Subset Sum (existence):
- Question: "Can we achieve this sum?"
- Operation: skip || take
- Type: Boolean OR
We only need ONE way to succeed, not the best way!
Space Optimization
Does it feel odd that we're computing answers for all possible sums from 0 to target when we only care about dp[0][target]?
That tension reveals optimizations:
- Memoization with Map - only compute states you actually visit (top-down advantage)
- 1D DP array - only need previous row to compute current row
- Rolling array - use two rows alternately
But the table-filling approach gives you a complete picture of what's possible at each stage, which is enlightening for understanding!
Space Complexity: O(sum) instead of O(n × sum) can be achieved using a 1D array by processing from right to left to avoid overwriting needed values.
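The 1D version described above can be sketched as follows (a sketch with illustrative names, not the canonical implementation):

```java
// Space-optimized sketch: dp[s] = "some subset of the numbers seen so far sums to s".
public class PartitionSpaceOptimized {
    public static boolean canPartition(int[] nums) {
        int total = 0;
        for (int num : nums) {
            total += num;
        }
        if (total % 2 != 0) {
            return false; // odd sum: no equal partition possible
        }
        int target = total / 2;
        boolean[] dp = new boolean[target + 1];
        dp[0] = true; // the empty subset achieves sum 0
        for (int num : nums) {
            // Iterate right to left so dp[s - num] still holds the value from
            // before this number was considered (each number used at most once).
            for (int s = target; s >= num; s--) {
                dp[s] = dp[s] || dp[s - num];
            }
        }
        return dp[target];
    }

    public static void main(String[] args) {
        System.out.println(canPartition(new int[]{1, 5, 11, 5})); // true
        System.out.println(canPartition(new int[]{1, 2, 4}));     // false
    }
}
```

Note that this builds reachable sums over prefixes rather than the suffix orientation of the 2D table; because element order doesn't matter for subset sums, both directions give the same answer.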
2.3 Edge Cases & Validation
Edge Cases as Concept Validators
Let's test boundaries of understanding:
Edge Case 1: Array has one element
Scenario: nums = [5]
Analysis:
- Total sum = 5 (odd)
- Early check catches this → return false ✓
What if: nums = [4]
- Total = 4, target = 2
- Cannot make 2 from a single element 4
- dp[1][2] = false (no elements left, target still 2)
- dp[0][2] = true only if nums[0] == 2
- Correctly returns false ✓
Validates that base cases handle minimal arrays correctly.
Edge Case 2: Target is zero
Scenario: nums = [0, 0]
Analysis:
- Total sum = 0, target = 0
- Can you make subset summing to 0? Yes, the empty subset!
Base case dp[i][0] should always be true because you can achieve zero sum by taking nothing.
This wasn't obvious from the problem statement but emerges from thinking about what zero target means!
Edge Case 3: Numbers larger than target
Scenario: nums = [1, 2, 5]
Analysis:
- Total = 8, target = 4, yet nums[2] = 5 exceeds the target
- Including 5 is never valid here: the condition nums[i] <= s correctly prevents including numbers that exceed the current target
- Essential to prevent accessing negative indices!
✓ Boundary check prevents array out-of-bounds errors (and the answer is correctly false).
Edge Case 4: All elements are the same
Scenario: nums = [2, 2, 2, 2]
Analysis:
- Sum = 8, target = 4
- Need exactly two 2s in subset
dp[0][4] checks:
- Skip first 2: can you make 4 from rest? (eventually succeeds)
- Take first 2: can you make 2 from rest? (eventually succeeds)
✓ Validates that recurrence doesn't get confused by duplicates.
Edge Case 5: Two elements
Scenario: nums = [1, 1]
Analysis:
- Total = 2, target = 1
- Can select one element to get sum 1
- Should return true ✓
All three approaches handle this correctly:
- Recursive: base case triggers when s=0 after including one 1
- Memoized: caches the successful path
- Bottom-up: dp[0][1] = true after processing
Edge Case 6: Large array with small target
Scenario: nums = [1, 1, 1, 1, ..., 1] (100 ones)
Analysis:
- Total = 100, target = 50
- Many ways to select 50 ones
- All approaches find a solution, but:
- Brute force: explores exponentially many paths (too slow!)
- Memoized: finds first valid path, caches intermediate results
- Bottom-up: systematically fills table
✓ Demonstrates why DP optimization is essential for larger inputs.
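These edge cases can be exercised directly. The harness below reuses the bottom-up recurrence from Section 2.1 in compact form so the sketch stays self-contained:

```java
// Edge-case harness; canPartition mirrors the bottom-up solution from Section 2.1.
public class PartitionEdgeCases {
    static boolean canPartition(int[] nums) {
        int total = 0;
        for (int num : nums) total += num;
        if (total % 2 != 0) return false;
        int target = total / 2;
        int n = nums.length;
        boolean[][] dp = new boolean[n + 1][target + 1];
        for (int i = 0; i <= n; i++) dp[i][0] = true; // sum 0 is always achievable
        for (int i = n - 1; i >= 0; i--) {
            for (int s = 1; s <= target; s++) {
                boolean take = nums[i] <= s && dp[i + 1][s - nums[i]];
                dp[i][s] = dp[i + 1][s] || take;
            }
        }
        return dp[0][target];
    }

    public static void main(String[] args) {
        System.out.println(canPartition(new int[]{5}));          // false: odd sum
        System.out.println(canPartition(new int[]{4}));          // false: single element
        System.out.println(canPartition(new int[]{0, 0}));       // true: zero target
        System.out.println(canPartition(new int[]{2, 2, 2, 2})); // true: duplicates
        System.out.println(canPartition(new int[]{1, 1}));       // true: two elements
        int[] ones = new int[100];
        java.util.Arrays.fill(ones, 1);
        System.out.println(canPartition(ones));                  // true: 100 ones
    }
}
```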
Section 3: Mastery & Visualization
3.1 Practical Examples/Visualization
Worked Example: Trace Through All Three Approaches
Let's trace a small example to see how each approach works:
Input:
nums = [1, 5, 11, 5]
totalSum = 22
target = 11
Approach 1: Recursive Brute Force - Decision Tree (Partial)
canSum(0, 11)
├─ include nums[0]=1
│ └─ canSum(1, 10)
│ ├─ include nums[1]=5
│ │ └─ canSum(2, 5)
│ │ ├─ include nums[2]=11: canSum(3, -6) → false (negative)
│ │ └─ exclude nums[2]=11
│ │ └─ canSum(3, 5)
│ │ ├─ include nums[3]=5: canSum(4, 0) → TRUE! ✓
│ │ └─ exclude nums[3]=5: canSum(4, 5) → false
│ │ → TRUE
│ │ → TRUE (exclude path succeeded)
│ │ → TRUE
│ └─ exclude nums[1]=5
│ └─ canSum(2, 10)
│ ├─ include nums[2]=11: canSum(3, -1) → false
│ └─ exclude nums[2]=11: canSum(3, 10) → ...
│ → ... (eventually finds solution)
│ → TRUE (include path succeeded)
└─ exclude nums[0]=1
└─ canSum(1, 11)
├─ include nums[1]=5
│ └─ canSum(2, 6)
│ ├─ include nums[2]=11: canSum(3, -5) → false
│ └─ exclude nums[2]=11
│ └─ canSum(3, 6) → false (can't make 6 from [5])
│ → false
└─ exclude nums[1]=5
└─ canSum(2, 11)
├─ include nums[2]=11: canSum(3, 0) → TRUE! ✓
└─ ... (don't need to explore)
→ TRUE
→ TRUE (exclude path succeeded)
→ TRUE (either path works!)
Notice: Multiple paths lead to the solution, and we might solve subproblems multiple times (e.g., "can we make X from [5]" appears in different branches).
Approach 2: Top-Down Memoization - Computation Order
Same recursive calls, but with caching:
Call canSum(0, 11)
→ Not in memo, compute
→ Call canSum(1, 10) [include nums[0]=1]
→ Not in memo, compute
→ Call canSum(2, 5) [include nums[1]=5]
→ Not in memo, compute
→ Call canSum(3, 5) [exclude nums[2]=11]
→ Not in memo, compute
→ Call canSum(4, 0) [include nums[3]=5]
→ Base case: s=0 → TRUE
→ memo[3][5] = TRUE
→ memo[2][5] = TRUE
→ memo[1][10] = TRUE
→ memo[0][11] = TRUE ✓
// If we later need canSum(3, 5) again, return memo[3][5] = TRUE immediately!
Each subproblem computed exactly once and cached!
Approach 3: Bottom-Up - DP Table
Build the table systematically:
Initial state (after base case initialization):
dp[i][s] s=0 s=1 s=2 s=3 s=4 s=5 s=6 s=7 s=8 s=9 s=10 s=11
i=4 T F F F F F F F F F F F (base case)
i=3 T ? ? ? ? ? ? ? ? ? ? ?
i=2 T ? ? ? ? ? ? ? ? ? ? ?
i=1 T ? ? ? ? ? ? ? ? ? ? ?
i=0 T ? ? ? ? ? ? ? ? ? ? ?
After processing i=3 (nums[3]=5):
dp[i][s] s=0 s=1 s=2 s=3 s=4 s=5 s=6 s=7 s=8 s=9 s=10 s=11
i=4 T F F F F F F F F F F F
i=3 T F F F F T F F F F F F (can make 5)
i=2 T ? ? ? ? ? ? ? ? ? ? ?
i=1 T ? ? ? ? ? ? ? ? ? ? ?
i=0 T ? ? ? ? ? ? ? ? ? ? ?
After processing i=2 (nums[2]=11):
dp[i][s] s=0 s=1 s=2 s=3 s=4 s=5 s=6 s=7 s=8 s=9 s=10 s=11
i=4 T F F F F F F F F F F F
i=3 T F F F F T F F F F F F
i=2 T F F F F T F F F F F T (can make 11, or 5 from below)
i=1 T ? ? ? ? ? ? ? ? ? ? ?
i=0 T ? ? ? ? ? ? ? ? ? ? ?
After processing i=1 (nums[1]=5):
dp[i][s] s=0 s=1 s=2 s=3 s=4 s=5 s=6 s=7 s=8 s=9 s=10 s=11
i=4 T F F F F F F F F F F F
i=3 T F F F F T F F F F F F
i=2 T F F F F T F F F F F T
i=1 T F F F F T F F F F T T (5, 10=5+5, 11)
i=0 T ? ? ? ? ? ? ? ? ? ? ?
After processing i=0 (nums[0]=1):
dp[i][s] s=0 s=1 s=2 s=3 s=4 s=5 s=6 s=7 s=8 s=9 s=10 s=11
i=4 T F F F F F F F F F F F
i=3 T F F F F T F F F F F F
i=2 T F F F F T F F F F F T
i=1 T F F F F T F F F F T T
i=0 T T F F F T T F F F T T (1, 5, 6=1+5, 10, 11) ✓
Answer: dp[0][11] = true ✓
Critical Questions for Deep Understanding
Question 1: The Index Arithmetic
When you see dp[i+1][s - nums[i]], can you visualize what's happening?
Answer:
- You've decided to include nums[i], which "consumes" that value from your target
- The remaining problem: can you make (s - nums[i]) using subsequent elements?
- i+1 means moving to the next element
- This is how "solve the smaller problem" translates to array indexing
Question 2: Why Boolean OR?
Why do we use skip || take instead of some other operation?
Answer:
- We only need one way to succeed, not all ways
- If either skipping OR taking works, the answer is true
- This is different from knapsack, where we use max() to find the best value
- The question type (yes/no vs. optimization) determines the operation!
Question 3: The Zero Target
Why must dp[i][0] always be true?
Answer:
- Target 0 means "we need sum 0"
- You can always achieve sum 0 by selecting no elements (empty subset)
- This is true regardless of which elements you've considered
- It's a mathematical truth, not a computed result!
Question 4: Recursion vs Iteration
When should you use top-down vs bottom-up for this problem?
Answer:
- Top-down (memoization): Better when you don't need all subproblems (sparse target space)
- Bottom-up (tabulation): More efficient when target is small and you'll compute most subproblems anyway
- Both have same worst-case complexity: O(n × sum)
- Practical consideration: Bottom-up is easier to optimize to O(sum) space
3.2 Framework & Pattern Recognition
The Meta-Process Framework Applied
Notice the pattern emerging across problems:
- ✅ Transform the problem into something more concrete and recognizable
- ✅ Identify mathematical impossibilities that allow early exits
- ✅ Recognize the recursive structure by thinking about decisions and consequences
- ✅ Determine what parameters uniquely identify each subproblem
- ✅ Define base cases by thinking about smallest/terminal instances
- ✅ Start with brute force - Get the logic right first
- ✅ Add memoization - Optimize by caching
- ✅ Convert to bottom-up - Eliminate recursion if needed
- ✅ Optimize space - Reduce dimensions where possible
- ✅ Validate with edge cases to find hidden assumptions
This is the designer's journey - messy, iterative, full of "aha moments" that feel obvious in retrospect but revolutionary when discovered.
Pattern Recognition
This problem is a variant of the Bounded Knapsack Pattern:
Common characteristics:
- Binary choice per element (include or exclude)
- Shared resource constraint (target sum)
- Boolean question (achievable or not?)
- State defined by (position, remaining target)
Related problems:
- 0/1 Knapsack (maximize value)
- Partition Equal Subset Sum ← You are here
- Target Sum (count ways to reach target)
- Subset Sum Problem (can we make target?)
- Minimum Subset Sum Difference
The pattern template:
Recursive structure: dp[i][resource] = skip || (valid(i) && take)
Where:
- skip = dp[i+1][resource]
- take = dp[i+1][resource - cost[i]]
- valid(i) = cost[i] <= resource
Three implementation choices:
- Brute force recursion - O(2^n)
- Memoized recursion - O(n × resource)
- Bottom-up iteration - O(n × resource)
Evolution of Solution Approaches
Brute Force Recursion
↓ (add caching)
Top-Down Memoization
↓ (eliminate recursion)
Bottom-Up Tabulation
↓ (reduce dimensions)
Space-Optimized DP (1D array)
Each step preserves correctness while improving efficiency!
3.3 Key Takeaways & Summary
Conclusion
The path from confusion to clarity in Partition Equal Subset Sum teaches us:
- Problem transformation - "Two equal subsets" → "One subset = half"
- Early exits - Odd sum check saves massive computation
- Pattern recognition - Structural similarity to 0/1 knapsack
- Brute force first - Get the recursive logic right before optimizing
- Memoization transforms - Caching turns exponential into polynomial
- Bottom-up optimizes - Iteration eliminates recursion overhead
- Question type matters - Yes/no question → boolean DP with OR, not max
- Base cases reveal truth - Zero target is always achievable
- Boundary conditions - Can't include elements exceeding target
The Three Pillars of Subset Sum Mastery
Understand the key insight:
"Partition into two equal subsets"
↓
"Find one subset summing to totalSum / 2"
This is the conceptual breakthrough. Everything else follows from this.
Understand the recurrence relation:
canSum(i, target) =
include: canSum(i+1, target - nums[i]) OR
exclude: canSum(i+1, target)
This is the logical foundation. The OR operation reflects the yes/no nature.
Understand the implementation evolution:
- Brute force: O(2^n) - explore all combinations
- Memoization: O(n × sum) - cache subproblem results
- Bottom-up: O(n × sum) - eliminate recursion
- Space-optimized: O(sum) - use 1D array
This is the implementation mastery.
This isn't just about solving subset sum. It's about developing the transformation intuition that recognizes how to reframe vague problems into concrete, solvable structures.
The problem transformation is the breakthrough.
Once you see "partition into equal subsets" as "find subset summing to half," the solution becomes a straightforward application of the knapsack pattern.
Master the transformation first (brute force), add memoization (top-down), then optimize to iteration (bottom-up), and complex becomes manageable.
Three solutions. One insight. Infinite applications.
Where Might Confusion Still Lurk?
Reflect on these questions:
- Does the transformation from "two subsets" to "one subset = half" feel obvious or magical?
- Why does memoization work so well for this problem?
- When should you use top-down vs. bottom-up?
- Can you visualize why s - nums[i] represents the reduced target?
- Does it make sense that we use boolean OR instead of max()?
- Why does order of elements not matter for this problem?
Understanding your confusion points is how you deepen your mastery. The questions you ask reveal the insights you're about to discover.