Amazon vs Google vs Meta: The DSA Patterns Each Company Tests
Most candidates overtrain familiar patterns and miss the ones that quietly show up everywhere. Here’s what Amazon, Google, Meta, Microsoft, and Apple actually test.

You've drilled sliding window. You've drilled binary search. You've solved Two Sum, Three Sum, and Group Anagrams enough times to write them blindfolded.
The next medium that lands in front of you is a subarray sum problem, and your first instinct is two nested loops.
That's the moment Prefix Sum was supposed to be in your toolkit and isn't.
Here are the key takeaways from this post:
Prefix Sum appears at 8 major tech companies in the data, yet most prep plans treat it as a footnote.
Seven patterns are universal across 6+ FAANG-tier companies and form the baseline before any specialisation.
Amazon tests the broadest pattern footprint (11+ families). Google tests narrower but emphasises predicate search.
Meta wants hash table depth and design fluency. Microsoft is distinct on 2D and staircase search.
The per-company DSA map should drive how you allocate prep hours, not which problems are “trending.”
The cross-company tag data in this post comes from the platform's tagged problem set, which is why per-company prep allocation is a theme I keep returning to.
The seven patterns nobody can skip
Across 450+ handpicked interview problems with company tags attached, seven patterns appear at six or more major companies.
These are the universal layer.
Skipping any of them is a structural hole in your prep, regardless of where you're interviewing.
If you only have time to master seven patterns, make it these.
| Pattern | Companies tagged | Where it shows up |
|---|---|---|
| LRU Cache | 19 | Every FAANG plus DoorDash, Oracle, Zoom, PayPal, TikTok, eBay, LinkedIn, Zillow, Intuit, Cloudera |
| Counting (hash table) | 9 | Every FAANG plus Uber, Spotify, Yahoo |
| Backtracking | 8 | Generate Parentheses, N-Queens, Sudoku |
| Prefix Sum | 8 | Range queries, equilibrium points, subarray sums |
| Binary Search | 7 | Classic sorted array variants, rotated arrays |
| Fixed Sliding Window | 7 | Frequency tracking inside a fixed window |
| Variable Sliding Window | 6 | Contracting on a condition rather than a counter |
Two of those seven deserve a closer look because they're consistently miscalibrated in prep plans.
LRU Cache: the universal stress test
LRU Cache shows up everywhere because it tests three skills in one problem:
hash table mechanics
doubly linked list manipulation
design composition
No alternative problem covers the same combination at the same difficulty.
If you can't write it from scratch under time pressure, including the helper methods, that's one of the highest-priority gaps to close before any FAANG round.
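To make that concrete, here's a minimal sketch of the classic structure: a dict for O(1) lookup plus a doubly linked list with sentinel nodes for O(1) recency updates. The helper names are my own; interviewers expect something in this shape, not these exact methods.

```python
class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}            # key -> Node, O(1) lookup
        self.head = Node(0, 0)   # sentinel: most-recent end
        self.tail = Node(0, 0)   # sentinel: least-recent end
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)       # touching a key refreshes its recency
        self._add_front(node)
        return node.val

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self.map[key] = node
        self._add_front(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev          # least recently used
            self._remove(lru)
            del self.map[lru.key]
```

The sentinels are the design-composition part: they remove every "is this the head/tail?" special case from the list surgery, which is where most candidates lose time under pressure.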
Prefix Sum: the underrated unlock
Prefix Sum is the underdog.
Eight companies test it, but it has no flagship problem the way LRU Cache or Two Sum does.
Most plans treat it like a minor utility pattern.
That's the mismatch worth correcting.
Why Prefix Sum punches above its weight
Prefix Sum's mechanic is small.
You walk through an array once, building a running total, and store it in a parallel array.
From then on, the sum of any subrange is one subtraction.
```python
def build_prefix_sum(arr):
    # prefix[i] holds the sum of arr[:i], so prefix[0] == 0
    prefix = [0] * (len(arr) + 1)
    for i, x in enumerate(arr):
        prefix[i + 1] = prefix[i] + x
    return prefix

def range_sum(prefix, lo, hi):
    # sum of arr[lo..hi] inclusive: O(1) after the O(n) build
    return prefix[hi + 1] - prefix[lo]
```
That's the whole pattern.
The tactical move is recognising when it applies.
Prefix Sum triggers
The problem asks about subarray sums or range queries on an array.
Multiple subranges need the same answer (so amortising the build cost pays off).
The values are immutable for the duration of the queries.
Where it actually pays:
- Subarray Sum Equals K. Combine Prefix Sum with a hash map of "sum seen so far." Look up `prefix - K` to find matching subarrays in `O(n)` instead of `O(n²)`.
- Equilibrium Index. Find where the left sum equals the right sum.
- Count Subarrays Divisible by K. Same Prefix Sum + hash map trick, slightly different aggregate.
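The Equilibrium Index variant shows how thin the pattern can get: you don't even need the stored prefix array, just a running total compared against the grand total. A sketch, assuming the usual convention that the pivot element belongs to neither side:

```python
def equilibrium_index(arr):
    # Index i where sum(arr[:i]) == sum(arr[i+1:]); -1 if none exists.
    total = sum(arr)
    left = 0
    for i, x in enumerate(arr):
        # right sum = total - left sum - the pivot itself
        if left == total - left - x:
            return i
        left += x
    return -1
```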
What gets this pattern skipped is its surface plainness.
Sliding window has a satisfying contraction mechanism.
Binary search has clean halving.
Prefix Sum is just a running total.
The drilling instinct skips it for shinier patterns.
The interview frequency does not.
Where the company maps actually diverge
The universals get you to the table.
The company-specific patterns determine whether you're prepared for the round you're walking into.
Amazon: breadth beats depth
Amazon covers:
Counting
Fixed and Variable Sliding Window
Prefix Sum
LRU Cache
Randomized Set
Binary Search
2D Binary Search
Staircase Search
Maximum Predicate Search
Queue Design
Backtracking
That's 11+ pattern families.
The “study three patterns deeply” strategy has a lower hit rate here than anywhere else.
The Bar Raiser round amplifies this.
A Bar Raiser is an interviewer from outside the hiring team and can sample from any of those families.
Breadth matters more at Amazon than at narrower companies.
Google: predicate search matters
Google shares the universals but adds a strong emphasis on predicate search.
This is binary search over the answer space, not the input.
You define a feasibility predicate and binary search possible answers.
Problems like:
minimum shipping capacity
punctual arrival speed
trip completion rate
all share the same shape.
If your model of binary search is only "find a value in a sorted array," predicate search will freeze you.
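A sketch of the shipping-capacity version, assuming the common formulation (packages must ship in order; find the smallest per-day capacity that meets a deadline). The predicate is a greedy simulation, and its monotonicity is what licenses the binary search:

```python
def min_ship_capacity(weights, days):
    # Feasibility predicate: can everything ship within `days`
    # if the ship holds at most `cap` per day? Monotone in cap.
    def feasible(cap):
        used_days, load = 1, 0
        for w in weights:
            if load + w > cap:
                used_days += 1  # start a new day
                load = 0
            load += w
        return used_days <= days

    # Binary search over the answer space, not over the input array.
    lo, hi = max(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid        # mid works; try a smaller capacity
        else:
            lo = mid + 1    # mid fails; need more capacity
    return lo
```

Swap in a different simulator for `feasible` and the same skeleton solves the arrival-speed and trip-completion variants.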
Meta: hash tables and design depth
Meta concentrates on:
Sliding Window
Prefix Sum
Counting
LRU Cache
Randomized Set
Maximum Predicate Search
Meta places less weight on search variants than Google.
Instead, it emphasises:
hash table fluency
design implementation
fast recognition on familiar patterns
Microsoft: 2D search shows up more than expected
Microsoft stands out for:
2D Binary Search
Staircase Search
Most prep plans underweight these because they don't appear heavily at Google or Meta.
If Microsoft is your target, increase time on matrix search and reduce time on lower-frequency categories.
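Staircase Search is worth a sketch, since it's the variant most plans skip: in a matrix whose rows and columns are both sorted, start at the top-right corner and eliminate a full row or column on every step.

```python
def staircase_search(matrix, target):
    # Rows sorted left-to-right, columns sorted top-to-bottom.
    # Each comparison discards a row or a column, so the walk
    # takes O(rows + cols) steps instead of scanning every cell.
    row, col = 0, len(matrix[0]) - 1
    while row < len(matrix) and col >= 0:
        val = matrix[row][col]
        if val == target:
            return True
        if val > target:
            col -= 1   # everything below in this column is even larger
        else:
            row += 1   # everything left in this row is even smaller
    return False
```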
Apple: fundamentals, deeply tested
Apple leans heavily toward fundamentals:
Counting
Binary Search
Prefix Sum
Backtracking
The advanced design problems Amazon loves appear less often.
Apple's signal is depth in fundamentals over breadth in obscure patterns.
Subarray Sum Equals K: Prefix Sum + hash map
The most testable form of Prefix Sum isn't the basic version.
It's the composition:
Prefix Sum + hash map of seen sums
The setup:
Given an integer array and a target `K`, count the contiguous subarrays summing to `K`.
```python
def subarray_sum_equals_k(nums, k):
    count = 0
    running = 0
    seen = {0: 1}  # base case: the empty prefix
    for x in nums:
        running += x
        count += seen.get(running - k, 0)
        seen[running] = seen.get(running, 0) + 1
    return count
```
The key line:
`count += seen.get(running - k, 0)`
If an earlier prefix sum equals running - k, then the subarray between that point and now sums to exactly k.
The hash map turns "find an earlier matching prefix" from an `O(n)` scan into an `O(1)` lookup.
Total complexity: `O(n)` time, `O(n)` space.
This pattern generalises surprisingly far:
- subarrays divisible by `K`
- equal `0`s and `1`s
- balancing constraints over contiguous regions
Same shape.
Different aggregate.
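As one example of swapping the aggregate, here's a sketch of the divisible-by-K variant: key the hash map on the running sum modulo `k` instead of the raw sum.

```python
from collections import defaultdict

def subarrays_div_by_k(nums, k):
    # Two prefixes with the same remainder mod k bound a subarray
    # whose sum is divisible by k.
    seen = defaultdict(int)
    seen[0] = 1                      # base case: the empty prefix
    count = running = 0
    for x in nums:
        running = (running + x) % k  # Python's % stays non-negative
        count += seen[running]
        seen[running] += 1
    return count
```

Note the one language-specific trap: in languages where `%` can return negatives, the remainder needs normalising before the lookup.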
Allocating prep time once you know your target
Three rules fall out of the data.
1. Cover the seven universals first
Regardless of target company.
Skip none.
2. Specialise second
Google: Predicate search
Amazon: Breadth + Bar Raiser adaptation
Meta: Hash table depth + design
Microsoft: 2D and staircase search
Apple: Fundamentals depth
3. Cut false-priority work
Prep time is finite.
The data tells you where over-preparing hides.
Targeting Google but not Amazon?
Queue Design can wait.
Targeting Meta but not Microsoft?
Staircase Search can wait.
The pattern almost everyone undertrains, again, is Prefix Sum.
Eight companies tag it.
Almost no prep plan emphasises it.
If you're four weeks out and want one underrated unlock, that's one of the highest hour-for-hour returns in the dataset.
Originally posted on my blog, with per-company FAQs, additional code walkthroughs, and the full tagged pattern breakdown.
If you've sat for FAANG rounds at two or more of these companies, which pattern came up that the company's reputation didn't prepare you for?




