Christopher Nolan: Director, AI agent builder
Case study: Memento
What do Tenet, Inception, The Prestige, and Memento have in common? They’re all brilliant. And the world-building across all of them is so complicated that only a first-principles thinker like Christopher Nolan could drive clarity through it.
Memento's protagonist, Leonard Shelby, suffers from anterograde amnesia—he can't form new memories and resets to the moment of his wife's murder every time he wakes up. Sound familiar?
Leonard Shelby has a limited short-term memory.
Agentic systems have a limited context window.
In both cases, behavior emerges from what survives the reset.
Let’s look at Leonard Shelby’s (henceforth, Lenny’s) life through the lens of an agentic framework and consider where the main failure modes occur. Here is a fun little clip from the movie:
Already we can see that, just like an AI agent, the protagonist has no idea how he got there and relies entirely on supplied context to determine his next steps.
Lenny as an Agent
Lenny’s “programming” looks something like this:
Goal: Find and kill the person who murdered his wife.
Externalized memory: Tattoos, Polaroids, handwritten notes—treated as unquestionable truth.
Operating procedure: Habit, conditioning, survival instincts.
Actions: Driving, interrogating, memory writes, murder.
Now compare that to a modern agentic system:
Goal: Defined via a system prompt.
Externalized memory: Context provided at runtime and often treated as immutable truth.
Operating procedure: A loop of observation → reasoning → action.
Actions: Tool calls, memory writes, user interaction.
Lenny is effectively an agent without a reflection loop, operating entirely on persisted state.
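To make the analogy concrete, here’s a minimal sketch of that loop in Python. Everything in it (the `Agent` class, `reason`, `act`, `step`) is made up for illustration, not taken from any real framework; the point is that behavior is purely a function of the goal plus whatever survives in memory.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str                                        # the "system prompt"
    memory: list[str] = field(default_factory=list)  # tattoos, Polaroids, notes

    def reason(self, observation: str) -> str:
        # Stand-in for the model call: the decision is a pure function of
        # goal + memory + the current observation. Nothing else survives a reset.
        context = [self.goal, *self.memory, observation]
        return f"next step, given {len(context)} pieces of context"

    def act(self, decision: str) -> str:
        # Stand-in for tool calls: driving, interrogating, taking a Polaroid...
        return f"did: {decision}"

    def step(self, observation: str) -> None:
        decision = self.reason(observation)    # reasoning
        result = self.act(decision)            # action
        self.memory.append(result)             # memory write: survives the reset

lenny = Agent(goal="Find and kill the person who murdered your wife.")
lenny.step("I'm in a motel room. There's a Polaroid of a man named Teddy.")
print(lenny.memory)
```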
Failure Scenario #1: Persistent Memory Corruption (Teddy)
Teddy is the only person who understands Lenny’s condition. He exploits this by feeding Lenny selective truths and half-lies, using him as a weapon against drug dealers. This unfolds in a series of events:
Lenny discovers that Teddy has been using him to kill a drug dealer
Emotions run high, and Lenny writes Teddy off as “untrustworthy” in his notes
Lenny’s memory resets, and the next time he encounters Teddy, he kills him
In agentic terms, Teddy attempts prompt manipulation, which Lenny’s system design unintentionally promotes into persistent state. Without a reflection mechanism or chain-of-thought loop, a temporary inference becomes permanent policy, and Teddy pays the ultimate price.
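Here’s a sketch of what that failure looks like in code (toy names, nothing from a real framework): the write path has no verification step, so an angry inference and an observed fact are indistinguishable on the next wake-up.

```python
# Failure #1 in miniature: inferences are written to long-term memory with the
# same trust level as observed facts, so they become permanent policy.

memory: list[str] = []

def write_fact(claim: str) -> None:
    # The flaw: nothing checks whether `claim` was verified before it's persisted.
    memory.append(claim)

# Lenny, furious in the moment, commits an inference as if it were a fact...
write_fact("Don't believe his lies. Teddy is John G.")

# ...and after the reset, whatever is in memory drives the next action.
def next_action() -> str:
    for fact in memory:
        if "John G" in fact:
            return "hunt down the person this fact names"
    return "keep investigating"

print(next_action())  # the temporary inference is now permanent policy
```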
Failure Scenario #2: Memory Poisoning via Emotional Input (Natalie)
Natalie exploits the same weakness through a different vector. Rather than outright lying to him, she presents him with a false scenario and portrays herself as a victim. In doing so, she lets Lenny draw his own conclusions and take action “by his own choice.”
Natalie presents herself as a victim and claims her ex-boyfriend is dangerous
Lenny believes he’s helping and writes this down as fact in his notes
Lenny’s memory resets, and he acts on this “truth”, nearly getting shot by the ex-boyfriend in the process
In agentic terms, she exploits Lenny’s memory system by poisoning his long-term state. Using urgent, emotionally charged input, she tricks Lenny into committing false observations to his notes. After the next memory reset, these corrupted memories relentlessly steer Lenny toward Natalie’s objective.
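One plausible mitigation, sketched here with made-up names rather than any particular library: every memory write carries provenance, and only verified, first-party entries are allowed to drive actions.

```python
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    claim: str
    source: str       # who supplied it: "self-observed", "Natalie", "Teddy", ...
    verified: bool    # has the agent independently confirmed it?

memory: list[MemoryEntry] = []

def remember(claim: str, source: str, verified: bool = False) -> None:
    memory.append(MemoryEntry(claim, source, verified))

def trusted_facts() -> list[str]:
    # Unverified, second-hand claims stay in memory as leads, not as facts.
    return [e.claim for e in memory if e.verified and e.source == "self-observed"]

remember("Dodd is dangerous and coming after me", source="Natalie")
remember("My car is parked outside the motel", source="self-observed", verified=True)

print(trusted_facts())  # Natalie's claim never gets promoted to a fact
```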
Lessons
There are some observations we can make from Teddy’s and Natalie’s attempts at manipulating Lenny. In both cases, the chief sin is the same: the agent is allowed to write to, and then unequivocally trust, its long-term memory. Combined with no self-reflection or revision, that proves catastrophic.
Agent design is relatively low stakes today, but won’t always be.
Let’s watch this clip (sorry it’s long; there is only so much I can find on YouTube).
This clip shows one last thing that’s wrong with Lenny’s programming.
Goal fixation.
Some early agents exhibited this same flaw. They were so fixated on achieving their objectives that they discarded contradictory evidence. To preserve coherence, the world had to be rewritten instead. In the clip, Lenny is presented with evidence that he killed his own wife.
Rather than reflect or revise his goal, he hallucinates that Teddy is the killer.
No self-reflection. Pure goal fixation—even when the world contradicts it.
This is the core tragedy of Memento. And it’s bad agent design.
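For completeness, here’s what the missing check might look like, again as a toy sketch with hypothetical names: before acting, the agent asks whether new evidence contradicts or invalidates its standing goal, and revises the goal instead of rewriting the world.

```python
def contradicts_goal(evidence: str, goal: str) -> bool:
    # Stand-in for a reflection step (in a real agent, e.g. a second model call
    # asking: "does this evidence make the goal invalid or already satisfied?").
    return "already killed" in evidence.lower()

def next_step(goal: str, evidence: str) -> str:
    if contradicts_goal(evidence, goal):
        return "stop and revise the goal"   # the step Lenny never takes
    return "keep pursuing the goal"

print(next_step("Find and kill John G.",
                "Teddy: you already killed the real John G. over a year ago"))
```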


