
AlgoExpert vs NeetCode: Why Watching Walkthroughs Isn't Enough

AlgoExpert and NeetCode both teach techniques cleanly. The recognition that finds the technique on a fresh prompt has to be built somewhere else.


You've watched the AlgoExpert video on Binary Tree Maximum Path Sum. Twice, even. The postorder helper makes sense, the global update makes sense. You can re-derive the code on a sheet of paper if you have the problem name in front of you. Then a fresh interview round opens with "given a binary tree, find the largest sum of any non-empty path," and the same pattern doesn't surface in the first eight minutes.

That gap, between watching the technique cleanly and producing it from a fresh prompt, is the part neither AlgoExpert nor NeetCode sets out to train.

Key takeaways

  • AlgoExpert's 100 videos and NeetCode's 400+ walkthroughs are honest about what they teach: how a technique works once you've named the technique.

  • The naming step (reading a fresh prompt and deciding the technique that fits) is a separate skill the walkthroughs assume you'll pick up on your own through volume.

  • Volume produces near transfer reliably and far transfer unreliably, per the cognitive science.

  • The fix isn't more polished videos. It's a practice shape that puts the recognition before every problem.

  • Once recognition is built, both platforms' video content becomes more useful. You watch fewer walkthroughs and absorb more from each one.

Worth flagging: I built Codeintuition, a structured learning platform for coding interviews. The post below is about the recognition gap that sits between watching a walkthrough and producing the technique on a fresh prompt, not about the platform.

What both platforms do well, and where they stop

The strongest argument for AlgoExpert is the consistency. Clement records every video himself, derives every solution in the same visual style, and the editing makes the reasoning easy to follow. Once you've adapted to his vocabulary in the first two or three videos, every later walkthrough costs you less attention. The depth is real. If you watch the path-sum video and the maximum sum subarray video back to back, the logic of the postorder-helper-with-side-effect pattern lands more clearly than it would from any text-only resource I've used.

The strongest argument for NeetCode is the breadth and the price. The free YouTube walkthroughs cover more problems than any paid platform's curriculum, and the production has improved year over year. NeetCode 150 is the most widely shared curated list in the prep community. The mapping to LeetCode means your practice environment matches the screening environment a recruiter is going to send you, which AlgoExpert's polished IDE doesn't replicate.

Both platforms move you forward on a specific axis: making sure that, once you've identified the technique, you understand how it works. That axis matters. A platform of nothing-but-text articles does worse on it. A platform of solve-everything-on-LeetCode-with-no-explanations does worse on it.

The axis they don't move is the one before "you've identified the technique." Both platforms label their problems by category and difficulty before the walkthrough starts. The label is what tells you it's a sliding window problem before the video begins. In an interview, the label isn't there. Walking through a solution and labelling a fresh prompt are different skills, and the videos teach the first one without explicitly training the second.

A worked example: Binary Tree Maximum Path Sum

Take the problem. The prompt is one sentence: given a binary tree where each node has an integer value, return the largest sum of any non-empty path. A path may start and end at any node.

What happens if you've watched the video for this exact problem? You can write the code in 8 to 10 minutes. The recursion goes postorder, you compute the local maximum gain on each subtree, you update a global maximum that considers the through-the-current-node case. The pattern is named, the implementation is rehearsed, and the time-on-keyboard is small.

What happens if you haven't watched this exact problem but you've watched many like it? On a fresh prompt without the title, you have to do something the video never asked you to do. You read "any path through any node" and decide that this is a postorder-with-global pattern, not a regular DFS, not a level-order traversal, not DP-on-tree with cached subtree results.

Here's the code, once you've identified the technique:

def maxPathSum(root):
    best = float('-inf')

    def gain(node):
        # Postorder helper: best downward path sum starting at node,
        # i.e. the piece a parent is allowed to extend.
        nonlocal best
        if not node:
            return 0
        # A negative subtree contributes nothing; drop it rather than extend it.
        left = max(gain(node.left), 0)
        right = max(gain(node.right), 0)
        # The path that bends through this node: it updates the global
        # but is never returned, because a parent can't extend a bent path.
        through = node.val + left + right
        best = max(best, through)
        # Only one branch can be extended upward.
        return node.val + max(left, right)

    gain(root)
    return best
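To check the function against the classic example, here's a minimal harness. The TreeNode class is an assumption matching LeetCode's standard definition, and maxPathSum is repeated so the snippet runs on its own:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def maxPathSum(root):
    # Same function as above, repeated so this snippet is self-contained.
    best = float('-inf')

    def gain(node):
        nonlocal best
        if not node:
            return 0
        left = max(gain(node.left), 0)
        right = max(gain(node.right), 0)
        best = max(best, node.val + left + right)
        return node.val + max(left, right)

    gain(root)
    return best

# Tree [-10, 9, 20, null, null, 15, 7]: the best path is 15 -> 20 -> 7, sum 42.
root = TreeNode(-10, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(maxPathSum(root))  # 42
```

Note that the best path here skips the root entirely, which is exactly the case the global exists to catch.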

The implementation is short. The decision before the implementation is where the time goes on a fresh prompt: that this is a postorder helper with a side-effecting global, that the helper returns the better of the two children plus the current node's value, and that the global tracks the through-current path, which doesn't get propagated up. The video gives you the implementation. The walkthrough doesn't drill the decision.

What recognition is, mechanically

Recognition is naming visible features of a problem and matching them to a technique before any implementation begins. Each technique has a small set of trigger features. When the features match, the technique is the candidate to try.

For the postorder-helper-with-side-effect pattern, the triggers are:

  1. Tree input. The problem operates on a tree root, not a graph or a flat array.

  2. Any-path semantics. The answer concerns a path through the tree that isn't constrained to root-to-leaf or leaf-to-leaf. The path can bend through any node.

  3. Local-then-global computation. The value at any node depends on the values at its children, plus a "through this node" computation that isn't passed up.

When all three apply, the pattern is the postorder helper with a global. Binary Tree Maximum Path Sum hits all three. Diameter of a Binary Tree hits all three. Longest Univalue Path hits all three. Different prompts, same triggers, same pattern.
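To make the "same triggers, same pattern" claim concrete, here is Diameter of a Binary Tree written in the identical postorder-helper-with-global shape, as a sketch; the TreeNode class is an assumption matching LeetCode's standard definition:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def diameterOfBinaryTree(root):
    best = 0  # global: longest path seen so far, measured in edges

    def depth(node):
        # Postorder helper: depth of the subtree, the piece a parent can extend.
        nonlocal best
        if not node:
            return 0
        left = depth(node.left)
        right = depth(node.right)
        # The through-this-node path: left depth + right depth, never returned.
        best = max(best, left + right)
        # Only the deeper branch can be extended upward.
        return 1 + max(left, right)

    depth(root)
    return best

# Tree [1, 2, 3, 4, 5]: longest path is 4 -> 2 -> 1 -> 3, which is 3 edges.
root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3))
print(diameterOfBinaryTree(root))  # 3
```

The only things that changed from the path-sum code are what the helper returns and what the global tracks; the structure of the decision is untouched.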

Neither AlgoExpert nor NeetCode walks you through this trigger list before showing you the implementation. The triggers are what you'd have to extract from watching ten videos and noticing the pattern across them. Of the engineers I've sat with on prep calls, some extract this on their own. Most don't, and the ones who don't are the ones who report watching every video and still freezing on a fresh prompt.

The cognitive science name for the gap

The relevant research is on the generation effect and transfer of learning. The short version: producing an answer from first principles, even imperfectly, builds stronger memory and stronger generalisation than recognising a familiar solution. Watching builds recognition of solutions you've been shown. Interviews ask for generation, where you produce the technique from a description you haven't seen.

The gap doesn't come from the explanation quality. It comes from the practice shape both platforms assume you'll build on your own. AlgoExpert and NeetCode both run the same shape: see prompt with category label, watch walkthrough, attempt similar problems with category labels visible. The shape that produces transfer to fresh prompts is different: see prompt without label, name visible triggers, attempt a technique based on the triggers, then check.

This isn't a subtle distinction. The difference between watching 200 videos and freezing on a fresh medium-difficulty problem, and watching 200 videos and recognising the pattern in 90 seconds, is which shape was practised, not how much was practised.

A practice protocol for closing the gap

If you've worked through one or both platforms and unfamiliar medium-difficulty problems still freeze you, the work that compounds is a different practice shape. Changing the shape so the recognition gets explicit reps is what closes the gap.

| Step | What to do | Why this matters |
| --- | --- | --- |
| 1. Pick a technique | One technique a week: postorder-with-global, sliding window, two pointer, monotonic stack, prefix sum, topological sort. | The reps need to land on a single technique long enough to extract its trigger features. |
| 2. Write triggers | Three or four visible features of a problem that, when they all apply, make this technique the candidate. In your own words, not from a cheat sheet. | Writing forces you to articulate what the videos let you absorb without articulation. |
| 3. Read without solving | Five problems that use the technique. For each, name the triggers visible in the prompt before reading any constraints in detail. | You're rehearsing the recognition in isolation, with no implementation pressure. |
| 4. Solve with the title hidden | Three or four problems with the problem name, category tag, and difficulty hidden. | The title is the cheat sheet most platforms hand you. Hide it and the recognition becomes the part you can't skip. |
| 5. One pressure session per week | Cover the title, set a 25-minute timer, talk through your reasoning out loud, and don't open the IDE before stating the technique you're going to try. | Recognition under timed conditions is the actual interview test. Practising it in low-pressure conditions doesn't transfer. |
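Steps 2 and 3 are easier to keep honest when the trigger lists live somewhere checkable. A minimal sketch of that idea; the technique names and trigger phrasings here are illustrative, not taken from either platform:

```python
# Each technique maps to the trigger features that, together, nominate it.
TRIGGERS = {
    "postorder-with-global": {
        "tree input",
        "any-path semantics",
        "local value from children plus a through-node value",
    },
    "sliding window": {
        "contiguous subarray or substring",
        "optimise length or count over a window",
        "window validity changes monotonically as it grows or shrinks",
    },
}

def candidates(features):
    """Return the techniques whose triggers all appear among the named features."""
    return sorted(t for t, required in TRIGGERS.items() if required <= set(features))

# Naming the features of a fresh prompt, then checking which technique they nominate:
print(candidates({
    "tree input",
    "any-path semantics",
    "local value from children plus a through-node value",
}))  # ['postorder-with-global']
```

The point isn't the code; it's that a trigger either appears in your written list or it doesn't, which is what makes the reading-without-solving reps gradeable.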

A technique a week, eight or nine techniques covered, roughly two months of focused work. That replaces the next hundred polished videos where the signal you actually need isn't the one being explained.

Where AlgoExpert and NeetCode fit inside this loop

Both platforms still fit. Once you're inside the loop above, the videos become more useful, not less. You watch one walkthrough on a new technique to absorb the implementation. You write your own trigger list. You do your reading-without-solving reps. You do your hidden-title reps. The video does what it does best, explaining the technique cleanly, and then you do the work the video doesn't include.

What changes is the volume. You don't need fifty videos on sliding window. You need one video plus twenty-five recognition reps. The cost calculation shifts: AlgoExpert's 100-video catalogue is roughly enough for complete pattern coverage if you're using each video this way, and NeetCode's free tier is enough for the same coverage if you'd rather pay zero. What you spend less of is time on walkthroughs. What you spend more of is time on the recognition reps neither platform includes.

What I've watched land for engineers stuck at the medium plateau is the practice shape change, not another platform.

Originally posted, with the platform by platform breakdown and the trigger lists for additional patterns, on my own blog.

What's the technique where the recognition finally locked in for you, and what specific problem broke it open?