The best way to read the book that's shaking the AI world
NYT bestseller by Eliezer Yudkowsky & Nate Soares

If Anyone Builds It, Everyone Dies is the most talked-about book on AI existential risk. Published in September 2025, it hit the New York Times bestseller list within weeks and sparked intense debate across tech, policy, and mainstream media.
Yudkowsky and Soares — co-leaders of the Machine Intelligence Research Institute — walk through the theory and evidence for why building superhuman AI could end everything. They present a concrete extinction scenario, then lay out what it would take for humanity to survive.
Whether you end up agreeing with their conclusions or not, these are arguments you need to engage with seriously. This is the defining debate of our time, and this book is the sharpest articulation of one side of it.
To take this course, you'll need your own copy of If Anyone Builds It, Everyone Dies.
You'll get far more out of the book by reading and discussing it with others than by working through it alone.
The arguments in this book are dense and layered. It's easy to miss nuance reading solo — or to get stuck on a point that a quick conversation could clarify. Forming your own opinion is easier when you can hear how others react to the same material.
That's why we built Superintelligence 101 — a guided book club that takes you through If Anyone Builds It, Everyone Dies chapter by chapter, with a group.
Work through the entire book at a steady pace, one chapter at a time.
Talk through the arguments with other students who are reading alongside you.
Go deeper on any topic with a tutor who knows the book inside and out.
Think critically about each chapter's claims with guided prompts.
What AI is, how it's built, why even its creators can't fully understand it, and why aligning a superhuman intelligence with human values may be impossible.
A detailed, parable-like scenario of how a misaligned superintelligence could end everything — not a prediction, but an illustration of the dynamics at play.
The hardest questions: is alignment even solvable? Can humanity coordinate a global response? And what would "shutting it down" actually look like?
Explore the arguments. Form your own opinion. Join the conversation.