
The Question
For decades, neuroscientists have used brain imaging to study how the brain represents experience: how a memory looks across thousands of points in the brain, how a face is recognized, what the brain does when the mind is at rest. In the last few years, AI researchers have started asking strikingly similar questions about language models: what is happening inside these systems, and how does it compare to the brain? The two fields are converging on the same question from opposite ends. Taylor's students sit at that convergence.
What Students Build
Over five weeks, students build working versions of three classic brain imaging analyses using public datasets and Python. They start by training a model to figure out what a person was looking at based purely on patterns of brain activity, reproducing one of the foundational results in the field. They move to studying how the brain organizes itself even when nothing is being asked of it. They finish with the analysis that turns out to be the most direct conceptual bridge to AI alignment research: comparing how similar things look across brain regions, the same way researchers now compare how similar things look across layers of a language model.
The capstone, optional but worth aiming for, is a direct comparison between how a brain represents categories and how a small language model represents the same concepts. Students leave with a working notebook, a written report, and the ability to read papers in this field and know what they're looking at.
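The capstone comparison rests on representational similarity analysis: build a dissimilarity matrix over the same set of concepts in each system, then correlate the two matrices. A hedged sketch with synthetic stand-ins for brain patterns and model embeddings (the shared latent structure and all dimensions here are invented for illustration):

```python
# Sketch of RSA: compare dissimilarity structure across two representations
# of the same concepts. Synthetic "brain" and "model" matrices stand in for
# ROI voxel patterns and language-model embeddings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_concepts = 20
latent = rng.normal(size=(n_concepts, 5))  # shared conceptual structure (assumed)

# Each system expresses the same structure in its own feature space, plus noise.
brain = latent @ rng.normal(size=(5, 300)) + rng.normal(scale=0.5, size=(n_concepts, 300))
model = latent @ rng.normal(size=(5, 64)) + rng.normal(scale=0.5, size=(n_concepts, 64))

# Representational dissimilarity matrices (condensed form: concept pairs).
rdm_brain = pdist(brain, metric="correlation")
rdm_model = pdist(model, metric="correlation")

rho, _ = spearmanr(rdm_brain, rdm_model)   # second-order similarity between systems
print(rho)
```

The key move is that voxels and embedding dimensions never need to align; only the pairwise geometry over concepts is compared, which is why the same method works across brain regions and across model layers.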
The Mentors
Taylor Beck came to neuroscience through Faulkner, Virginia Woolf, and T. S. Eliot, drawn by the question of what it actually feels like to have a mind. That question carried him from Princeton through research labs in Kyoto and Washington, studying memory, decoding dreams, and mapping how the brain represents experience. He is productively skeptical of what AI claims to be. This summer his students will work with open-source brain imaging datasets, building the same analyses that have shaped modern neuroscience and using them to ask a question Taylor cares about: how similar is what's happening inside a brain to what's happening inside a language model?
Who This Is For
Python proficiency is required: comfort with libraries, plotting, and basic data analysis. Students should have a working understanding of how a model can be trained to make predictions and tested for accuracy. No prior neuroscience background is required; Taylor will teach what students need. The right student here gets genuinely curious about what the data shows, notices an unexpected pattern, and wants to understand it before moving on. Students looking for clean answers will struggle. This is exploratory work, and the most interesting findings often look like noise at first.
Logistics
Five weeks. Mondays, Wednesdays, Fridays, 11:00 AM to 12:15 PM ET. Friday sessions extend to 1:00 PM for Demo Day. Cohorts of 3 to 4 students per mentor. $4,500. Apply by May 11, 2026.
Beyond the live sessions, students work on their own, and they are not alone when they do. The lab is supported by a 24/7 Slack channel and a team of scholars and practitioners at the Academy. Students also work alongside SeqHub's AI co-teacher, which helps them think through problems on off days without doing the work for them. Plan for 10 to 12 hours per week, with 4.5 hours in live sessions and the rest on independent work.