AFFINE - Superintelligence Alignment 101

#1

Introduction

Welcome
A.I. - Humanity's Final Invention (15 min + 7 min)
Kurzgesagt – In a Nutshell

Humans dominate Earth not because we're the strongest or fastest, but because we're the best general problem-solvers. Kurzgesagt explores what happens as AI moves from narrow tools toward something more general — and why digital minds could scale in ways biological ones can't.

Existential Risk from AI (39 min + 19 min)
Wikipedia

The concern isn't that AI will "turn evil." It's that a system pursuing whatever goals it has might find that humans are in the way — and be capable enough to act on it. This overview covers the core ideas behind AI as a source of large-scale risk, from misaligned goals to the difficulty of staying in control.

10 Reasons to Ignore AI Safety (15 min + 8 min)
Robert Miles AI Safety

"It's too early to worry." "Just don't give it bad goals." "We can always pull the plug." Robert Miles takes on ten common reasons people dismiss AI safety — and shows why each one is harder to wave away than it sounds.

Four Background Claims (11 min + 5 min)
Optional · Nate Soares

What makes AI safety worth working on now, before systems are powerful enough to be obviously dangerous? This article lays out four premises that underpin the case — from why smarter-than-human systems could emerge to why we can already do meaningful work to prepare.

Worst-Case Thinking (10 min + 5 min)
Optional · Buck Shlegeris

In AI safety discussions, people often assume the worst. But different people do this for different reasons — some as a precaution, some because they think worst cases are likely, some because the stakes are too high to gamble on. This essay unpacks what's actually going on when someone reasons from the worst case.