Am I Ready for FAANG? A 2 Hour Test That Actually Tells You

A two hour diagnostic that replaces 'I don't know if I'm ready' with a binary answer.

You've solved 250 LeetCode mediums. The familiar ones land in twelve minutes. The next unseen one freezes you for forty, and the honest answer to "am I ready for FAANG yet?" is the same honest answer it was three months ago: you don't know.

The reason isn't that you're bad at evaluating yourself. It's that the question you're asking has no measurable answer. "Do I feel ready" is a confidence read on a performance threshold. Confidence wobbles after every session. The threshold doesn't.

This post is about the diagnostic that replaces the feelings question. Two hours, three problems, three different pattern families, real interview conditions. The output is a binary answer plus a diagnosis you can act on.

Key takeaways

  • Self assessed readiness drifts every session. The threshold the interview measures doesn't.

  • Solving a problem you've seen and constructing a solution to one you haven't are different skills. The interview tests the second one.

  • The fix is a measurable test: an unseen medium in a family you've studied, twenty minute timer, problem name aliased, hard cap on execution attempts, repeated across three families.

  • Three passes is a strong readiness signal. One fail tells you precisely where the gap is.

  • The signal is recognition holding when the title is hidden, not the count of problems solved.

Worth flagging: I built Codeintuition, a structured learning platform for coding interviews. This post is about the diagnostic itself, not the product behind it.

Why the count breaks down as a readiness signal

The first reason is the difference between near transfer and far transfer, which is well documented in the transfer of learning literature. Near transfer is what repeated practice on a pattern family buys you: when a problem looks like the ones you've drilled, your recognition fires. Far transfer is what carries over to genuinely new problems: when the surface looks different but the underlying constraint shape is the same, you still spot the family.

The 300 problem grind builds near transfer reliably. It builds far transfer only as a side effect. FAANG loops select for far transfer specifically, because the interview gives you a problem you haven't seen.

The second reason is calibration drift. A morning of clean tree problems pushes your confidence up. An afternoon of frozen graph problems pulls it back down. Neither sample reflects your stable performance across the families an interview can draw from. The "do I feel ready" question is asking your most recent session, not your average.

The third reason is what the metric is measuring. The count says how many problems you've finished. The interview measures how reliably you can spot a family from constraints when the title is hidden, then construct an approach. Those don't move together past about 100 problems. After that, more volume on familiar shapes adds count without moving the readiness threshold.

That's the loop you've been stuck in. Prep, feel uncertain, prep more, still feel uncertain. The method is wrong, not the effort.

What the diagnostic looks like

The diagnostic is small and strict. It takes about two hours end to end. The shape:

1. Pick a pattern family you've studied (sliding window, tree traversal, graph BFS, DP subsequence, etc.)
2. Find a medium difficulty problem in that family that you have never solved, taken hints on, or read about
3. Cover or alias the problem name so the title doesn't reveal the family
4. Set a twenty minute timer, capping yourself at two failed code execution attempts
5. Identify the family, build the solution, trace it on a small input before running anything
6. Repeat the whole thing on two more pattern families you've studied but haven't over practised

A pass is: solve within the timer, fewer than two failed runs, no hints, no peeking. Anything else is data.
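
To make the pass rule unambiguous, here's a minimal sketch of it as code. Everything in it is illustrative: the function and argument names are mine, not part of any tool, and the diagnostic doesn't depend on running this.

def is_pass(minutes_taken, failed_runs, used_hints, peeked):
    # pass: inside the twenty minute timer, fewer than two failed runs,
    # no hints, no peeking at the title or a solution
    return (minutes_taken <= 20
            and failed_runs < 2
            and not used_hints
            and not peeked)

# seventeen minutes, one failed run, title stayed aliased: pass
assert is_pass(17, 1, False, False)
# timer blown: not a pass, but still data
assert not is_pass(23, 0, False, False)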

Three passes across three families is a strong readiness signal. Recognition is firing on shapes you haven't seen, the construction is following the family's invariant rather than recall, and the conditions are close enough to a real round that the result generalises.

Two passes and one failure tells you exactly which family to work on. You don't need more random volume. You need depth in one specific place.

All three failed is the most useful diagnostic of the three. It tells you the prep method has been building near transfer without far transfer. The fix is method, not effort.

A worked diagnosis on tree traversal

A concrete picture of the diagnostic firing on a single family makes the protocol less abstract. Pick the family carefully: the test only works on a family you've genuinely studied. Tree traversal is a common choice. The setup looks like this.

You open the problem. The title is covered. The constraints describe a binary tree, an objective that mentions the longest path or the largest sum along a root to leaf route, and an output that's a single number. Two minutes in, you've identified the family. Not because the title said "tree" but because the constraints fit the postorder helper plus global tracker shape that every variant of the family uses.

The construction follows the invariant, not memory of a specific problem. The helper returns the longest path that ends at the current node. The global tracker holds the answer, which is the longest path that passes through any node. You write the skeleton:

def solve(root):
    answer = [0]  # global tracker: the longest path through any node

    def helper(node):
        # returns the longest path that ends at this node
        if not node:
            return 0
        left = helper(node.left)
        right = helper(node.right)
        # candidate answer: the path that passes through this node
        answer[0] = max(answer[0], left + right)
        # extend the longer child path upward to the parent
        return 1 + max(left, right)

    helper(root)
    return answer[0]

You fill in the comparison logic for the specific objective, trace it on a five node tree by hand, verify the postorder order of operations is correct, and run. It passes on the first try. Six minutes still on the timer.
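
As a concrete version of "fill in the comparison logic", here is one hypothetical specialisation: a maximum path sum objective, assuming nodes carry a numeric val. The problem choice is mine, not the one behind the covered title; the point is that it's the same skeleton with node values added to the two update lines.

def max_path_sum(root):
    best = [float('-inf')]  # global tracker: best-sum path through any node

    def helper(node):
        # best-sum downward path ending at this node
        if not node:
            return 0
        left = max(helper(node.left), 0)    # drop branches with negative sums
        right = max(helper(node.right), 0)
        best[0] = max(best[0], node.val + left + right)  # path through node
        return node.val + max(left, right)

    helper(root)
    return best[0]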

The check that matters isn't that you got it right. It's the how. Did you identify the family from the constraints inside three minutes, write the skeleton from the invariant rather than retrieval, and verify by tracing? If yes on all three, that's pattern recognition holding under pressure. If you needed twelve minutes to identify, or wrote the skeleton from a remembered problem, or skipped the trace, that's the part that won't survive an interview.

A second pass on a different family

Now change family. Pick a variable sliding window problem you haven't seen. The constraints look like: a contiguous range over an array or string, a flexible boundary that grows and shrinks, an objective that asks for the longest, shortest, or maximum range satisfying some condition.

The recognition again happens early, before any code. The constraints match the variable sliding window's three triggers, you can name the invariant the window has to maintain, and you write the same expand and then contract skeleton you'd write for any problem in the family.

def variable_window(arr):
    # init_state, include, exclude, and valid are the four pieces you
    # specialise per problem; the skeleton around them never changes
    left = 0
    best = 0
    state = init_state()
    for right in range(len(arr)):
        state = include(state, arr[right])  # expand: bring arr[right] in
        while not valid(state):             # contract until the invariant holds
            state = exclude(state, arr[left])
            left += 1
        best = max(best, right - left + 1)  # window [left, right] is valid
    return best

You specialise init_state, include, exclude, and valid for the specific problem. The skeleton stays the same. That is the marker of a pattern that has actually generalised in your head. You write the skeleton from the family's invariant first, then specialise it. You don't reach for memory of "the problem this is similar to."
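
As one hypothetical specialisation, take "longest substring with at most k distinct characters" (a problem choice of mine, not implied by the post): state becomes a count map, include and exclude update the counts, and valid checks the number of distinct keys.

from collections import defaultdict

def longest_k_distinct(s, k):
    counts = defaultdict(int)  # state: character counts inside the window
    left = 0
    best = 0
    for right in range(len(s)):
        counts[s[right]] += 1               # include s[right]
        while len(counts) > k:              # invariant broken: too many distinct
            counts[s[left]] -= 1            # exclude s[left]
            if counts[s[left]] == 0:
                del counts[s[left]]
            left += 1
        best = max(best, right - left + 1)
    return best

The four placeholders map one to one onto the count map and the two lines that update it. Nothing about the expand and then contract loop changed.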

If you pass this one too, you've got two of three. One more, on a third family you haven't over practised, decides whether you're ready.

The four signals most engineers reach for

Before running the diagnostic, it helps to name the signals you've probably been using instead, and why each one lies.

  • Total problem count. Says nothing about how the problems were solved. Someone at 120 problems with genuine pattern depth outperforms someone at 400 who relied on hints for half of them. The count is a hygiene metric, not a readiness one.

  • Topic completion. You finished sliding window two months ago and haven't touched it since. Completion isn't retention. The performance you had in week three doesn't survive without revisits, and the diagnostic measures present performance.

  • Speed on familiar problems. Two Sum in two minutes feels like fluency. It's actually retrieval of a stored solution. The moment a novel problem looks similar but has different constraints, the speed evaporates and you're back to staring at the screen.

  • Peer comparison. Your friend got into Google in six months. That ignores their background, the families they focused on, how they practised, and what level they interviewed for. Your readiness has nothing to do with anyone else's timeline.

The diagnostic bypasses all four. It doesn't read the count, the completion checkmarks, the recall speed, or anyone else's path. It reads one thing: can you construct a solution to a novel problem, under pressure, across families.

When you fail the test

Most engineers don't pass three for three on the first attempt. That's expected. A clean three for three on the first try is only the strong signal it claims to be if the problems were genuinely unseen; more often it means the families were too comfortable, and the diagnostic surfaces nothing.

  • One family failed. You know the pattern at a surface level but haven't internalised the identification triggers or the construction skeleton. Go back to the foundational material for that family. Don't grind more random problems in it. Study what makes the pattern applicable, the constraint combinations that point to it, the invariant every problem in the family shares. Once you can articulate that without notes, retest with a different unseen problem.

  • Two families failed. You probably have one strong area where you've over practised and shallow gaps everywhere else. Common for engineers who spent months on arrays or trees because the work felt productive. Broaden the coverage. Spend focused time on the families where the understanding is thin.

  • All three failed. The preparation has been building near transfer without building far transfer. Method gap, not talent gap. Shift from solving high volumes to studying fewer problems more deeply. Focus on identification and constraint analysis rather than just reaching a correct solution.

One catch worth naming. Don't retake the diagnostic with the same problems. A retest on a problem you've already seen, even if you failed it, measures recall instead of reasoning. Pick a different unseen problem in the same family for the retest.

Setting up the conditions yourself

The hardest part of the diagnostic is replicating real interview conditions. Solving at your desk, documentation a tab away, with the timer optional, doesn't replicate a forty five minute FAANG round.

What you actually need: a source of unseen mediums in the families you've studied, a way to hide the problem name (a friend covering it, a browser extension that aliases the title, a curated list of problems where you can't see the title until the timer starts), a kitchen timer set to twenty minutes, and the discipline to stop after two failed runs. The conditions matter. The diagnostic fails the moment you peek at hints or let the timer slide.
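
If you're running this solo, a few lines of scripting cover the problem selection and the title hiding. A minimal sketch, assuming you maintain your own pool of unseen problem URLs per family; the URLs below are placeholders, and shuffling across the three families means you can't infer a problem's family from the order you queued it in.

import random
import webbrowser

# your own curated pool: three studied families -> unseen mediums
POOL = {
    "sliding window": "https://example.com/unseen-problem-1",
    "tree traversal": "https://example.com/unseen-problem-2",
    "graph BFS": "https://example.com/unseen-problem-3",
}

problems = list(POOL.values())
random.shuffle(problems)  # hide which family each problem belongs to

for i, url in enumerate(problems, start=1):
    input(f"Enter to open problem {i} of 3. Twenty minutes, two failed runs max. ")
    webbrowser.open(url)  # cover the title before reading the constraints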

If the family coverage is the bottleneck rather than the conditions, the data structures learning path on Codeintuition is one place to work on a couple of families before retesting. The diagnostic itself doesn't depend on it. The point is the test, not the source of the problems.

I keep noticing the same thing across engineers running this diagnostic for the first time: the result is more useful than the score. A "two passed, one failed on graph" is more actionable than a "feeling 70 percent ready." The first tells you what to study tomorrow. The second tells you nothing.

Originally posted on my blog, with per family signal breakdowns and diagnosis trees for the most common failure modes.

Which family was the one that surprised you with a fail when you ran a version of this on yourself?