Why LeetCode Practice Stops Working (And What to Train Instead)

The recognition skill that volume practice doesn't build, and how to train it deliberately.


You've solved a couple hundred coding problems on LeetCode. The easies clear in ten minutes. Some mediums fall too, the ones that look like problems you've seen. Then you open a problem you haven't seen, the clock starts in a real interview, and twenty minutes later the discuss tab tells you the technique you needed was sliding window. You've solved sliding window problems before. The recognition just didn't fire on this one.

That gap between solving and recognising is the single most common place coding interview prep stalls. The fix isn't another hundred problems. It's training the recognition layer directly.

Key takeaways

  • Practice volume builds memory of specific problems, not the recognition you have to do before you start coding.

  • Recognition has a small set of trigger conditions per technique. Once you know the triggers, unfamiliar problems stop being unfamiliar.

  • The mismatch between practice conditions and interview conditions is a separate gap. The problem name disappears, hints disappear, and trial and error stops being free.

  • A protocol of one technique per week, with explicit trigger writing and a pressure session, replaces hundreds of low value reps.

Where volume practice stops paying off

Coding interview prep on LeetCode tends to follow a curve that flattens around the 150 to 250 problem mark. The first hundred problems teach you syntax, common data structures, and basic implementations. The next hundred extend that into pattern exposure. After that, returns drop sharply.

Why? Because past that point you mostly meet variations of techniques you've already implemented. New problems feel less new. But the variations don't always look obviously like variations on the surface. Two sliding window problems can have completely different framings. One asks for the longest substring with a constraint. Another asks for the smallest subarray summing to at least a target. Same technique, very different problem statements.

Each new problem you solve adds another data point to your memory bank. What it doesn't add is a rule you can apply to a problem you haven't seen yet. The implicit hope behind volume practice is that the rule will emerge from enough data points. For some engineers that happens. For most, it doesn't, and the plateau is where that becomes obvious.

LeetCode's design assumes the rule will emerge. There's no lesson that says "here are the four signals that mean variable sliding window applies." There's the problem, the editorial, the discuss tab, and the next problem. The rule is implicit in the problems, but it's never extracted and named.

Where LeetCode is still the right answer

Worth pausing here. LeetCode being the wrong answer for the recognition gap doesn't make it the wrong answer for everything. It's the strongest problem bank that exists for coding interview prep. The free tier has more problems than most engineers will work through in a year, the discuss forum's top voted replies are often better than the official editorials, and the contest system is a real training mechanism for solving speed under time.

If you've already built solid pattern recognition and you need volume to drill it, LeetCode is the platform. If you're targeting a specific company and want to drill the problems that company actually asks, the company tagging is the most reliable public signal you can get. None of that is a small thing.

The point isn't that LeetCode is broken. The point is that volume practice trains a different skill than recognition. When the bottleneck is recognition, more volume mostly produces more reps without addressing the gap.

What recognition is, mechanically

Recognition is reading a problem statement and matching the visible parts of the problem to the technique that solves it, before you've coded a line. Every technique has a small set of trigger conditions, usually three to five. When all the triggers match, you've identified the technique.

For variable sliding window, the triggers are these:

  1. The input is a contiguous range. A substring, a subarray, a window of consecutive elements.

  2. You're optimising the length of that range. Longest, shortest, smallest valid.

  3. There's a condition you can check incrementally as the window expands or contracts. Distinct character count, sum bounds, set membership.

Once you know the three triggers, the classic K distinct characters problem (longest substring with at most K distinct characters) is trivially recognisable. Contiguous range, longest substring, condition that can be checked as you go. All three apply. Variable sliding window is the technique.

For prefix sum, the triggers look different:

  1. You're computing some aggregate (sum, count, frequency) over many ranges in the same array.

  2. The query type is repeated and the array doesn't change between queries.

  3. The aggregate is associative or invertible (sums and XORs work, max and min usually don't).
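
A minimal sketch of those triggers in action, assuming the aggregate is a range sum over a static integer array (the function names are illustrative, not from the post):

def build_prefix(arr):
    # prefix[i] holds the sum of arr[:i], so prefix[0] == 0.
    prefix = [0] * (len(arr) + 1)
    for i, x in enumerate(arr):
        prefix[i + 1] = prefix[i] + x
    return prefix

def range_sum(prefix, lo, hi):
    # Sum of arr[lo:hi] in O(1) per query. The subtraction is exactly why
    # the aggregate has to be invertible: max and min can't be "undone".
    return prefix[hi] - prefix[lo]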

For two pointers, the triggers are different again. Sorted input or one that becomes useful when sorted. Two ends moving toward each other, or one fast and one slow. Constant extra space. Each technique has its own three to five.
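
For the two-ends variant, a minimal sketch (the pair-sum framing is one illustrative instance of the triggers, not the only one):

def pair_with_sum(sorted_arr, target):
    # Two ends moving toward each other; sorted order tells us which
    # pointer to move, and we use constant extra space.
    lo, hi = 0, len(sorted_arr) - 1
    while lo < hi:
        total = sorted_arr[lo] + sorted_arr[hi]
        if total == target:
            return lo, hi
        if total < target:
            lo += 1   # need a larger sum
        else:
            hi -= 1   # need a smaller sum
    return None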

Writing the triggers down for the techniques you actually use is the key exercise. Cheat sheet versions exist online. They aren't the same as having written them yourself in your own words from problems you've solved. The version you write yourself is the one that holds up when you read a new problem.

A code template that follows from recognition

Here's variable sliding window in Python, with the per-problem pieces passed in as functions. Once recognition has named the technique, the code has very few moving parts.

def variable_window(arr, init_state, expand, shrink, condition):
    # Generic variable-size window. init_state, expand, and shrink are
    # supplied per problem; condition says whether the window is valid.
    left = 0
    state = init_state()
    best = 0

    for right in range(len(arr)):
        state = expand(state, arr[right])     # grow the window rightward
        while not condition(state):
            state = shrink(state, arr[left])  # shrink from the left until valid
            left += 1
        best = max(best, right - left + 1)    # window is valid here

    return best

For K distinct characters, state is a hash map of character counts inside the current window, condition is len(state) <= K, and shrink decrements counts and removes keys that drop to zero. Three lines of customisation. The skeleton stays the same.
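
To make that concrete, here's K distinct characters written against the template above (a sketch; the two helper bodies plus the lambda are the customisation):

from collections import Counter

def longest_k_distinct(s, k):
    # State: counts of the characters inside the current window.
    def expand(state, ch):
        state[ch] += 1
        return state

    def shrink(state, ch):
        state[ch] -= 1
        if state[ch] == 0:
            del state[ch]  # drop zero keys so len(state) counts distinct chars
        return state

    return variable_window(s, Counter, expand, shrink,
                           lambda state: len(state) <= k)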

A second worked example: minimum window substring

To pressure test the template, take a harder problem. Minimum window substring asks for the smallest substring of s that contains all characters of t (with multiplicity). The triggers match the same way. Contiguous substring, optimise length (smallest valid), condition checkable as the window changes (do we cover all required characters yet?).

The adaptation: state tracks how many of each required character the current window holds, condition(state) is "every required count is met," and best is updated inside the shrink loop while the condition still holds (we're shrinking to find the smallest valid window, not expanding to find the largest). Same skeleton, different shrink logic.
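
A sketch of that adaptation, written out directly rather than through the template, since the best-update moves inside the shrink loop:

from collections import Counter

def min_window(s, t):
    need = Counter(t)            # counts still required from t
    missing = len(t)             # how many required characters are unmet
    best = (0, float('inf'))     # bounds of the best window found so far

    left = 0
    for right, ch in enumerate(s):
        if need[ch] > 0:         # this character was still needed
            missing -= 1
        need[ch] -= 1
        while missing == 0:      # window covers all of t: record, then shrink
            if right - left < best[1] - best[0]:
                best = (left, right)
            need[s[left]] += 1
            if need[s[left]] > 0:
                missing += 1     # we dropped a character we actually needed
            left += 1

    return "" if best[1] == float('inf') else s[best[0]:best[1] + 1]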

Two problems. Same template. The recognition step did the heavy lifting. That's the payoff you don't get from solving each problem as a standalone instance.

The science: near and far transfer

There's a learning research term for what's happening here. Wikipedia's article on transfer of learning covers it in depth. Two flavours of transfer matter for interview prep.

Near transfer is the ability to solve problems that look like ones you've already solved. The visible features of the problem are similar. The mapping from problem to technique is short. Volume practice builds near transfer reliably.

Far transfer is the ability to solve problems that look different from anything you've seen but whose underlying structure matches a technique you know. The mapping requires explicit trigger conditions, not memory of how problems looked. Volume practice builds far transfer unreliably, because the rule isn't explicit anywhere.

Coding interviews test far transfer. Interviewers deliberately reframe problems so the wrapper won't match anything you've cached. The reframing is often light (rename the variables, change the data type, swap one constraint), but it's enough to break a recognition that was built on memory of how problems looked rather than triggers. That's the mechanism behind "I've solved 500 problems and still freeze on Google."

Practice conditions that match the test

Recognition is half the gap. The conditions you practise under are the other half.

The default LeetCode practice setup gives you the problem name (often a strong hint), unlimited code executions with no penalty, and a discuss tab one click away. These are great for the early learning phase. They are not the conditions of an interview. In a real interview the problem name disappears, your wrong attempts cost time and credibility, and you have to talk through your reasoning while solving.

Dimension       | Practice (default)                | Interview
Problem name    | Visible, often hints at category  | Not given
Code executions | Unlimited, no penalty             | Limited, every failure costs
Hints / discuss | One click away                    | Not available
Time limit      | Set by you, often skipped         | 20 to 45 minutes hard
Reasoning       | Internal                          | Out loud

Replicating the right column on your own takes one friend and one kitchen timer. The friend covers the title and reads the constraints aloud. You set the timer to twenty or thirty minutes depending on the difficulty band. You explain your reasoning before you open the IDE, and you state the technique you intend to use before writing any code. If you can't state it within five minutes, the problem rewinds to the recognition queue. That feedback loop, run twice a week, will close the pressure gap inside a month.

A protocol you can run starting this week

If grinding mediums has plateaued, swap the loop. The protocol below is the one I'd run if I were back in interview prep mode.

  1. Pick one technique a week. Variable sliding window. Two pointers. Prefix sum. Monotonic stack. Whichever you find yourself missing.

  2. Write down the trigger conditions for that technique in your own words. Three to five features of the problem that make this the right tool when they all apply. Don't copy a cheat sheet.

  3. Read five problems that use this technique without solving them. For each, name the triggers you can see in the problem statement before reading the constraints in detail.

  4. Solve three to five problems on this technique with the title and tag hidden. Force the recognition step.

  5. Run one timed pressure session a week. Friend covers the title, you talk reasoning before the IDE opens, no shortcuts. If you can't name the technique inside five minutes, the problem returns to the recognition queue.

Covering eight to ten patterns this way takes roughly ten weeks of focused work. That replaces several hundred mediums where the signal gets buried under details that don't matter.

(Originally posted on my own blog with the trigger conditions written out for the other patterns and a worked example for prefix sum that uses the same templating idea.)

Which technique was the one whose triggers finally clicked for you, and what was the specific problem that broke the recognition open?


Disclosure: I built Codeintuition, a structured learning platform for coding interviews. The post above is about the recognition skill itself, not the platform.