🐀🤖 Rats preview paths, chimps bluff rivals, AI scores on empathy probes without empathy
What you probably do not know yet
- When a rat pauses at a maze junction, its brain cells fire in a sequence matching the paths it could take, simulating the future before it moves.
- Animals can experience regret. When they make a bad choice, their brain replays the better option they walked away from to update their future strategy.
- Chimps play complex mind games, like pretending not to see hidden food so a rival will not steal it.
- AI models can pass tests designed to measure human social awareness, even though they lack the brain circuits that actually understand other minds.
What you will have after
A clear picture of how the brain evolved from simple prediction to complex social simulation, and why today’s AI might score high on our tests while thinking nothing like us. Max Bennett connects neuroscience, animal behavior, and AI into one fascinating timeline.
Seeing is guessing
Your eyes do not send a live video feed to your brain. Instead, your brain guesses what is out there and uses your senses to check if it is wrong. Illusions happen when the brain locks onto the wrong guess. This means perception is actually our first layer of simulation.
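The guess-then-check loop above can be sketched as a toy model (a hypothetical illustration, not anything from the talk): the "brain" holds a running guess about a hidden quantity and nudges it toward each noisy sensory reading by a fraction of the prediction error, rather than trusting the raw signal outright.

```python
def update_guess(guess: float, observation: float, learning_rate: float = 0.3) -> float:
    """Move the current guess toward the observation by a fraction of the error."""
    prediction_error = observation - guess
    return guess + learning_rate * prediction_error

def perceive(readings: list[float], initial_guess: float = 0.0) -> float:
    """Run the predict-compare-update loop over a stream of sensory readings."""
    guess = initial_guess
    for reading in readings:
        guess = update_guess(guess, reading)
    return guess
```

In this cartoon, an illusion is the loop settling on a stable but wrong guess when the readings are ambiguous; the `learning_rate` constant is purely illustrative.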
Animals simulate the future
Simulation is not just a human trait. When animals pause before a choice, they are running “trial and error” in their imagination. As primate social groups got larger, they had to simulate something much harder than physical mazes: other minds. They had to start tracking who knows what, who is watching, and who is bluffing.
“This cycle of deception and counter-deception in chimps is a beautiful anecdote of an arms race, requiring theory of mind.”
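The "trial and error in imagination" described above, plus the regret replay from the opening bullets, can be sketched as model-based lookahead (a toy illustration with made-up names and values, not Redish's actual model): the agent previews each path against its internal model before moving, and after a disappointing outcome it updates that model for next time.

```python
# The agent's internal model: predicted reward per path (initially equal).
model = {"left": 1.0, "right": 1.0}

def choose(paths: list[str]) -> str:
    """Pause at the junction and pick the path the model predicts pays best."""
    return max(paths, key=lambda p: model[p])

def learn(path: str, actual_reward: float, rate: float = 0.5) -> None:
    """After the outcome, nudge the model's prediction toward what happened."""
    model[path] += rate * (actual_reward - model[path])

# One trial: the rat tries "left", finds nothing, and its model updates,
# so the next simulated comparison favors "right".
learn("left", 0.0)
```

The "vicarious" part is that `choose` consults only the internal model, never the world; the world is touched only through `learn`.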
High scores hide alien machinery
Just because an AI passes a human intelligence test does not mean it uses human machinery. A person with specific brain damage might score perfectly on an IQ test but completely lose their social imagination. Similarly, an AI can ace a social reasoning test by recognizing text patterns, without actually having a mind that understands others.
“I look at ChatGPT as an alien brain. It does certain things clearly better than us… The dividing line between AI models and us is our ability to render hypotheses and make interventions, learning the causal structure of the world.”
If we offload our thinking to these alien models, will our own judgment atrophy the way our sense of direction did when we got GPS?
“We’ve offloaded so much cognition, but because humans need to think, there’s social pressure to go to ‘intellectual gyms’ to reason.”
~11m: perception as inference. Skip if you already know how the brain guesses reality and want to jump straight to the animal stories.
Chapter Guide
If you want to jump to a specific idea, here is the breakdown of the 3-hour talk:
- 0:00 - Introduction: Merging comparative psychology, evolutionary neuroscience, and AI.
- 11:34 - Perception as Inference: Helmholtz, visual illusions, and how the brain guesses reality.
- 19:11 - Understanding vs. Recognition: Generative models and why predicting the future is harder than labeling the present.
- 36:38 - Mice Plan & Regret: David Redish’s research on vicarious trial and error and mental simulation in rats.
- 46:14 - Evolution of Self-Modeling: Agranular vs. granular prefrontal cortex and the atrophy of Layer 4.
- 58:31 - Machiavellian Apes: The social brain hypothesis, Dunbar’s number, and deception in chimps.
- 1:19:35 - AI Alignment & Status Games: Instrumental convergence, power-seeking, and zero-sum social status.
- 1:33:07 - The IQ Paradox: Why damage to the prefrontal cortex ruins social reasoning but leaves traditional IQ intact.
- 1:48:39 - Does GPT Have Theory of Mind?: Why AI pattern recognition is not the same as human mechanistic synergy.
- 2:00:40 - Memes & Cultural Evolution: Dawkins, the “singularity that already happened,” and digital vs. analog brains.
- 2:08:40 - Human Language: Declarative labeling, grammar, and the instinct for joint attention.
- 2:22:01 - Cognitive Offloading: Collective intelligence, Google Maps, and “understanding procrastination.”
- 2:44:24 - Shared Fictions: How memes propagate based on virality and survival benefits, not truth.
- 2:58:01 - AI World Models: Why true world models require hypothesis testing, interventions, and learning causal structures.
- 3:09:50 - The Future of Cognition: Humans as “epistemic hybrids” and the need for intellectual gyms.