If Anyone Builds It, Everyone Dies

The best way to read the book that's shaking the AI world

NYT bestseller by Eliezer Yudkowsky & Nate Soares

Book covers: US edition and UK edition

Why this book matters

If Anyone Builds It, Everyone Dies is the most talked-about book on AI existential risk. Published in September 2025, it hit the New York Times bestseller list within weeks and sparked intense debate across tech, policy, and mainstream media.

Yudkowsky and Soares — co-leaders of the Machine Intelligence Research Institute — walk through the theory and evidence for why building superhuman AI could end everything. They present a concrete extinction scenario, then lay out what it would take for humanity to survive.

Whether you end up agreeing with their conclusions or not, these are arguments you need to engage with seriously. This is the defining debate of our time, and this book is the sharpest articulation of one side of it.

Buy the book

To take this course, you'll need your own copy of If Anyone Builds It, Everyone Dies.

Don't just read it alone

You'll get far more out of the book by reading and discussing it with others than by working through it solo.

The arguments in this book are dense and layered. It's easy to miss nuance reading solo — or to get stuck on a point that a quick conversation could clarify. Forming your own opinion is easier when you can hear how others react to the same material.

That's why we built Superintelligence 101 — a guided book club that takes you through If Anyone Builds It, Everyone Dies chapter by chapter, with a group.

Chapter-by-chapter progression

Work through the entire book at a steady pace, one chapter at a time.

Weekly group discussions

Talk through the arguments with other students who are reading alongside you.

AI tutor

Go deeper on any topic with a tutor that knows the book inside and out.

Discussion questions & companion materials

Guided prompts to help you think critically about each chapter's claims.

What you'll cover

Part I: Chapters 1–6

The Case

What AI is, how it's built, why even its creators can't fully understand it, and why aligning a superhuman intelligence with human values may be impossible.

Part II: Chapters 7–9

One Extinction Scenario

A detailed, parable-like scenario of how a misaligned superintelligence could end everything — not a prediction, but an illustration of the dynamics at play.

Part III: Chapters 10–13

Where We Go From Here

The hardest questions: Is alignment even solvable? Can humanity coordinate a global response? And what would "shutting it down" actually look like?

Ready to read it with others?

Explore the arguments. Form your own opinion. Join the conversation.