
Amazon Bar Raiser: The 10 Minutes That Decide the Coding Round

How Amazon bar raiser coding interviews work, what the follow up really tests, and how to prepare for constraint changes.


You've been preparing for an Amazon coding loop the same way you'd prepare for Google or Meta. You've gone deep on two or three pattern families, timed yourself on roughly a hundred problems, and you can recall the standard solution for the company-tagged ones in under five minutes.

The first thirty minutes of the round go fine.

Then the interviewer changes a constraint. Sorted input becomes unsorted, the K in your sliding window starts varying, the cache needs a TTL. The solution you just wrote stops working.

The next ten minutes decide the round.

That ten-minute window is what the Bar Raiser is grading. It's the part of Amazon's loop that diverges most from how Google and Meta evaluate, and it's the part most prep optimises away.

Key takeaways

  • The Bar Raiser is an interviewer outside the hiring team with veto power, evaluating against Amazon's company wide bar.

  • Amazon tests around eleven pattern families (against Google's seven or eight and Meta's six or seven), so going deep on two or three has a lower hit rate here.

  • The round's grade lives in the follow up after your first working solution, not in the first solution itself.

  • Adapting a solution when a constraint shifts requires understanding which invariant the original maintained.

  • Interleaved practice and explicit constraint flip drills build the adaptation skill the follow up tests.

The job a Bar Raiser is hired to do

Amazon assigns one interviewer per loop from outside the hiring team. They're not part of the team you'd join. They have no incentive to lower the bar to fill a seat. Their evaluation is independent, and they have explicit veto power.

A hiring manager wanting to extend an offer can be overruled by a Bar Raiser objection, and that overrule sticks.

You won't know which of your interviewers is the Bar Raiser. Amazon keeps that ambiguous on purpose. You should treat every round as potentially the Bar Raiser round, which is exactly the prep posture Amazon is trying to enforce.

On the surface, the coding round looks identical to any other round.

Same problem types, same forty-five-minute window, same whiteboard or shared editor.

The visible difference is what happens in the last ten to fifteen minutes.

A standard interviewer might call the round once you've produced a working answer.

A Bar Raiser starts the real evaluation there.

They'll change a constraint, ask you to optimise without scrapping the structure, ask why a specific data structure was the right choice, or walk you through a failure case you didn't test.

Key insight: Your initial solution is the entry to the conversation, not the conversation itself. The grade comes from how you handle the follow up.

The pattern breadth Amazon actually tests

Google's interview pool leans on predicate search, counting, and graph traversal. Meta concentrates on sliding window, hash table problems, and design.

These narrower centres of gravity mean candidates who go deep on two or three families cover a meaningful share of what they'll face.

Amazon doesn't have a tight centre.

Across problems tagged to Amazon, you'll see counting, fixed sliding window, variable sliding window, prefix sum, LRU Cache style design, randomized set design, plain binary search, 2D binary search, staircase search, queue design, and backtracking.

That's roughly eleven distinct pattern families, noticeably more than Google's seven or eight.

The implication for prep is direct: the depth strategy that works at Google has a worse hit rate at Amazon.

If you go deep on three families and the round lands in any of the other eight, you don't have the recovery path of a wider but shallower foundation.

The Bar Raiser amplifies this because they're not bound to the team's domain.

They can pull a problem from any family.

The dimension that changes Amazon prep is breadth.

Going deep still matters, but breadth carries more weight here than at narrower companies.

A worked example: the constraint flip on variable sliding window

Take a problem Amazon has used variants of: longest substring with at most K distinct characters.

The classic variable sliding window solution looks like this:

from collections import defaultdict

def longest_substring_k_distinct(s: str, k: int) -> int:
    counts = defaultdict(int)  # frequency of each character in the window
    left = 0
    best = 0

    for right in range(len(s)):
        counts[s[right]] += 1  # expand: admit s[right] into the window

        # Contract until the invariant (at most k distinct) holds again.
        while len(counts) > k:
            counts[s[left]] -= 1
            if counts[s[left]] == 0:
                del counts[s[left]]
            left += 1

        best = max(best, right - left + 1)

    return best

The invariant the window maintains:

s[left:right+1] contains at most k distinct characters at all times.

Expansion adds a character. If the invariant breaks, contraction restores it.

Standard variable sliding window.

Now the Bar Raiser's follow up:

The constraint isn't k distinct characters anymore.

The substring is valid only if every character appears at least twice.

The candidate who memorised the template starts redesigning.

The candidate who built the solution from the invariant asks one question:

Which property is the window contraction enforcing, and how does that property change?

The original window maintained:

Distinct count <= k

Contraction shrank left until the count returned below k.

The new constraint becomes:

Every character frequency >= 2

The contraction condition changes.

The structure doesn't.

The expansion loop, the best length tracker, and the frequency map all stay.

One wrinkle: "every frequency >= 2" isn't monotone under contraction the way "at most k distinct" was, so the standard fix bounds the window by a distinct-count target and sweeps that target. A handful of lines change; the skeleton survives.
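Here's a sketch of how the adapted window can look (the function name and the per-target sweep are my framing, not a prescribed answer). Because a window can go from invalid to valid as it grows, contraction alone can't enforce the new condition; fixing a distinct-count target d restores a monotone contraction test:

from collections import defaultdict

def longest_substring_all_twice(s: str) -> int:
    best = 0

    # Sweep the distinct-count target: for each d, the window holds at
    # most d distinct characters, which keeps contraction monotone.
    for d in range(1, len(set(s)) + 1):
        counts = defaultdict(int)
        left = 0
        satisfied = 0  # characters in the window appearing >= 2 times

        for right in range(len(s)):
            counts[s[right]] += 1
            if counts[s[right]] == 2:
                satisfied += 1

            # Contraction: the target d, not k, now bounds the window.
            while len(counts) > d:
                if counts[s[left]] == 2:
                    satisfied -= 1
                counts[s[left]] -= 1
                if counts[s[left]] == 0:
                    del counts[s[left]]
                left += 1

            # A validity check replaces the unconditional best update.
            if len(counts) == d and satisfied == d:
                best = max(best, right - left + 1)

    return best

The expansion loop, the frequency map, and the best tracker are the originals.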

The candidate who reaches for the invariant gets there in three minutes.

The candidate who scraps and restarts spends ten minutes recovering ground they already had.

The Bar Raiser writes down the same observation either way:

Did this candidate construct from understanding, or recall from memory?

A second example, briefly

The same pattern shows up on LRU Cache, one of the most heavily tested problems at Amazon and across companies.

The standard solution composes:

  • a hash map for O(1) lookup

  • a doubly linked list for O(1) reordering and eviction

Now the follow up:

Each entry has a TTL. Expired reads return -1. Complexity must remain O(1).

The candidate who understood why the doubly linked list exists (maintaining a recency ordering invariant for O(1) removal) modifies the eviction predicate to check expiry first, then falls back to recency.

The structure survives.

The candidate who memorised the template starts redesigning and runs out of time.
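A minimal sketch of that modification, assuming lazy expiry on reads and using Python's OrderedDict to stand in for the hash map plus doubly linked list pair (its move_to_end and popitem are O(1)):

import time
from collections import OrderedDict

class LRUCacheWithTTL:
    def __init__(self, capacity: int, ttl_seconds: float):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.store = OrderedDict()  # key -> (value, expiry timestamp)

    def get(self, key: int) -> int:
        if key not in self.store:
            return -1
        value, expires_at = self.store[key]
        if time.monotonic() >= expires_at:
            del self.store[key]      # an expired entry reads as a miss
            return -1
        self.store.move_to_end(key)  # refresh recency in O(1)
        return value

    def put(self, key: int, value: int) -> None:
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = (value, time.monotonic() + self.ttl)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

The recency-ordering invariant is untouched; expiry is just a predicate checked before the recency logic runs.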

The unifying pattern across both follow ups is the same:

The constraint change attacks one specific assumption in the original solution.

Identifying which assumption broke is the work the round is grading.

What the Bar Raiser is actually grading

Three threads separate strong hires from no hires in this format.

1. Reasoning under loose constraints

Problem statements are deliberately under specified.

Inputs may or may not be sorted; they may or may not contain duplicates; sizes may include zero.

The candidate who asks clarifying questions and writes down assumptions before coding signals structured thinking.

The candidate who codes immediately reads as someone operating from pattern memory.

2. Solution adaptation across the follow up

The follow up doesn't change the problem.

It changes one constraint.

The candidate who isolates the dependency between the changed constraint and one specific part of their solution adapts in minutes.

The candidate who scraps and starts over reveals that the first solution was lookup, not construction.

3. Articulation of why

Bar Raisers listen for the layer beyond:

“I'm using a hash map.”

They want to hear:

  • what invariant the hash map maintains

  • what breaks under an array

  • what tradeoff you accepted

Engineers who can explain the load-bearing reasoning are the ones who raise the team's average performance, which is literally the evaluation criterion.

How the round usually goes wrong

Most candidates don't fail Bar Raiser rounds because they can't code.

They fail because prep optimised for the wrong half of the round.

1. Solving silently

You finish at minute fifteen and present a complete answer.

The Bar Raiser saw output but no reasoning.

Narrate structural choices as you work:

  • why a hash map and not a sorted array

  • why an O(n) first pass

  • what edge case you'll return to

The round grades observable reasoning.

2. Hunting for the optimal solution before the working one

You spend twenty five minutes thinking before writing code.

The follow up arrives at minute thirty-five and there's no time left.

Get a correct O(n²) or O(n log n) solution working first.

Then optimise when asked.

The follow up is part of the grade.

Running out of time on it is a soft fail you didn't need to take.

3. Restarting on the follow up

When the constraint changes, the instinct is to discard your solution and rebuild.

That's wrong most of the time.

Identify which part of the solution depended on the changed assumption.

Modify only that part.

4. Skipping edge cases

Empty input.

Single element.

Duplicates.

Overflow boundaries.

Walk through two edge cases unprompted.

It signals thoroughness and often pre-empts the follow up that would've targeted the case you skipped.
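For the window function from earlier, a few unprompted checks are enough to make the point (a quick sketch):

assert longest_substring_k_distinct("", 2) == 0        # empty input
assert longest_substring_k_distinct("a", 2) == 1       # single element
assert longest_substring_k_distinct("aaaa", 1) == 4    # duplicates
assert longest_substring_k_distinct("abcabc", 0) == 0  # k = 0 boundary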

Practising for the follow up specifically

Two prep changes address the breadth requirement and adaptation requirement together.

1. Interleaved practice over blocked practice

Blocked practice spends a week on sliding window, then a week on DP, then a week on graphs.

Interleaved practice rotates between families in the same session.

The cognitive science on interleaving consistently shows that mixing problem types improves your ability to identify which pattern applies to a new problem.

The extra difficulty during practice is the same identification work the Bar Raiser's follow up requires.

That's why interleaving transfers and blocking often doesn't.
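A rotation costs nothing to set up; here's a sketch of the idea (the family list is illustrative, not Amazon's):

import random

families = {
    "sliding window": ["longest substring with k distinct", "max sum window of size k"],
    "binary search": ["search rotated array", "staircase search in 2D matrix"],
    "design": ["LRU cache", "insert delete getRandom O(1)"],
}

def interleaved_session(n_problems: int) -> list[str]:
    # Round-robin across families so consecutive problems never share
    # one, forcing the which-pattern-applies step on every rep.
    names = list(families)
    return [random.choice(families[names[i % len(names)]])
            for i in range(n_problems)]

print(interleaved_session(6))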

2. Explicit constraint flip drills

After every problem you solve, ask:

What single constraint change would invalidate this solution most?

Then implement the modified version.

The constraint flip is the rep that builds adaptation.

And adaptation is what the Bar Raiser scores.

Original problem → constraint flip rep:

  • Longest substring with K distinct characters → window validity defined by a function of contents (every char appears >= 2 times)

  • LRU Cache → each entry has a TTL; reads on expired entries return -1

  • Two Sum on sorted array → array no longer sorted, same O(n) target

  • Number of Islands → grid too large for memory; solve in row chunks

  • Course Schedule → some prerequisites are soft and can be skipped at a cost
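Taking the Two Sum row as a worked rep (a sketch; the helper name is mine): the sorted version leans on two pointers, and dropping the ordering swaps the pointer pair for a hash map of values already seen, keeping the O(n) target:

def two_sum_unsorted(nums: list[int], target: int) -> tuple[int, int] | None:
    seen = {}  # value -> index of an earlier occurrence
    for i, x in enumerate(nums):
        if target - x in seen:
            return seen[target - x], i  # found the complementary pair
        seen[x] = i
    return None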

You don't need a platform for either habit.

A notebook, a list of problems, and a deliberate second pass are enough.

Most people skip the second pass because there's no immediate signal it's working.

The signal arrives in the interview when the constraint shifts and you reach for the invariant instead of the template.

If a structured walkthrough of Amazon's most tested patterns and trigger checklists helps, the hash table problem patterns lesson on Codeintuition is the closest single resource I'd point to.

It teaches the trigger checklists before the problem practice begins.

A few logistical things worth knowing

The onsite loop is usually four or five rounds in a single day or across consecutive virtual sessions.

  • Two or three coding rounds

  • One system design round (senior candidates)

  • One behavioural round anchored on Leadership Principles

The Bar Raiser participates in at least one of these.

You won't know which.

Coding rounds run about forty five minutes:

  • 5–10 minutes introductions and problem setup

  • 25–30 minutes initial solution

  • 10–15 minutes follow up window

If your first solution takes thirty-five minutes because you chased optimality too early, you've already conceded the follow up.

Rounds are evaluated independently.

A weak round one doesn't doom you.

Reset between rounds instead of replaying mistakes.

The debrief happens without you.

All interviewers, including the Bar Raiser, attend.

If there's disagreement, the Bar Raiser's veto stands.

What changes after this

The Bar Raiser round is a higher bar applied to the same format, with broader pattern coverage and a follow up phase that carries most of the round's weight.

Two things move the needle:

  1. Pattern breadth across families (built by interleaved practice, not depth alone)

  2. Adaptation when a constraint shifts (built by explicit constraint flip drills)

Neither requires a course.

Both require habits.

Originally posted on my blog, with a pattern-by-pattern breakdown of Amazon’s most tested families and a deeper walkthrough of Bar Raiser follow-up mechanics.

What's a constraint flip from your own prep that exposed a solution you thought you understood?