
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

INSTANT NEW YORK TIMES BESTSELLER | The New Yorker's Best Books of 2025 | A 2025 Booklist Editors' Choice Pick

The scramble to create superhuman AI has put us on the path to extinction—but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity.

"May prove to be the most important book of our time.”—Tim Urban, Wait But Why


In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
 
For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close.
 
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive. 
 
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.

“The best no-nonsense, simple explanation of the AI risk problem I've ever read.”—Yishan Wong, Former CEO of Reddit

232 pages, Kindle Edition

First published January 1, 2025


About the author

Eliezer Yudkowsky

48 books · 1,937 followers
Eliezer Yudkowsky is a founding researcher of the field of AI alignment and played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine's 2023 list of the 100 Most Influential People In AI, was one of the twelve public figures featured in The New York Times's "Who's Who Behind the Dawn of the Modern Artificial Intelligence Movement," and has been discussed or interviewed in The New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, The Washington Post, and many other venues.

Ratings & Reviews


Community Reviews

5 stars: 1,261 (40%)
4 stars: 1,090 (34%)
3 stars: 526 (16%)
2 stars: 164 (5%)
1 star: 102 (3%)
Michael Nielsen
Author · 12 books · 1,581 followers
September 20, 2025
A difficult book to review.

As many reviews have pointed out, there are significant holes at some points in the argument. These holes will be used by people who dislike the conclusion to justify ignoring or dismissing the book.

However: the book also contains a large and important set of likely correct arguments. These will be (and should be) widely influential, and they are important. And overconfidence at other steps in an argument does not mean the conclusion is wrong.

Furthermore, the quality of the arguments put forward by the people hypnotized by the seductive glitter of ASI is far weaker. They continue to build because they have power and money and the desire to do so, and this insulates them from the need to make a good case.

So: I recommend the book.

I will mention only one point at which I substantively disagree with the argument. The authors frame the issue as one of losing control to unaligned ASI. This makes the problem appear to be one of maintaining control. Many companies and researchers now work very, very hard on achieving that control; in that sense the company story and the authors’ story are very consonant.

The actual problem is far more difficult. Even a very well controlled and compliant ASI seems likely to confer enormous power: extremely destabilizing at best, possibly fatal at worst, and certainly extremely unstable.

This may seem a quibble - since the outcome sounds similar - but it has a very substantive impact on strategy and correct action. Actions aimed at control - many taken by the companies as a form of market-supplied safety, making the models more friendly and compliant and consumer-friendly - have often sped up the race to ASI, and the destabilizing effects of enormous concentrated power. In this sense, a great deal of alignment work actually makes the problem worse, not better.
Alvin
Author · 1 book · 15 followers
September 20, 2025
I’ve been working in the AI space for over three decades, and this was a difficult book to actually read all the way through. Not because the content was disturbing and scary, but because the arguments rely completely on fictional storytelling. There clearly are real risks to the current unfettered AI race, which is accelerating AI capabilities with little to no focus on safety. But this book uses only fictional thought experiments as its core proof of the inevitable danger of mass extinction from ASI. It also puts no effort into highlighting the dangers of human misuse of AI even without building ASI, which has a much higher net probability of causing serious harm to our civilization.

Its only suggested solution to AI risk is to tell the world to just stop all AI research, which is impossible to implement or even monitor. There are so many other things we should be recommending to help protect the world from the certain dangers of misuse by bad actors and rogue states, and even the unintended consequences of existing AI built by well-meaning labs and nations. But none of that is even discussed.

I applaud the goal of encouraging the world to develop AI in a safer and more responsible way (which is much needed). However, the format and content of this book is just raw fear mongering, which won’t and can’t be taken seriously or acted upon. It’ll more likely backfire, causing the AI safety community to be seen as unscientific eccentrics and setting back the cause of this important field.
Faith
2,229 reviews · 677 followers
September 18, 2025
“Our concern is for what comes after: machine intelligence that is genuinely smart, smarter than any living human, smarter than humanity collectively. We are concerned about AI that surpasses the human ability to think, and to generalize from experience, and to solve scientific puzzles and invent new technologies, and to plan and strategize and plot, and to reflect on and improve itself. We might call AI like that ‘artificial superintelligence’ (ASI), once it exceeds every human at almost every mental task.”

A lot of this book about the threat posed by superintelligent AI was over my head, but I definitely understood the hypothetical scenarios. The scenarios are based on actual events in the development of AI. When AI becomes more intelligent than we are, it won’t be looking out for our welfare or survival. It may have goals that run counter to those that its creators tried to build into it (and I’m pretty suspicious of those goals in any event, considering the inadequacy of the moral development of the tech billionaires).

This book was written by experts in the field of AI. It’s scary and fascinating. The proposed solution is international cooperation to control superintelligence. That’s almost laughable. Look at how well that has worked to prevent climate change. Humans are too stupid, greedy and selfish to survive.

I received a free copy of this book from the publisher.
Morgan Blackledge
827 reviews · 2,703 followers
November 14, 2025
Last year I read Nuclear War: A Scenario by Annie Jacobsen. It won my 2024 most totally awful, horrible, horrible, horrible, shit-your-pants-and-die, awful, awful nightmare book award.

This year, that same venerable distinction will go to this fucking book.

#holyfyuck!

Basically, Yudkowsky asserts that advanced AI (AGI/ASI—or whatever comes after that) and advanced biological engineering (CRISPR or whatever comes after that) are so potentially destructive that merely creating them could threaten human survival. Actually, Yudkowsky all but guarantees that ASI will inevitably exterminate humanity at some point. Frankly, at this rate, it’s looking like AI won’t need to work that hard. We seem to be headed there all by ourselves.

At minimum.

It seems like all AI needs to do to cancel humanity would be to keep chugging energy, water, and other resources—even at its current rate. And we’re cooked.

Literally 🔥🌎.

In the meantime.

It could just flood the internet with worse than trash.

And put millions of people out of work.

That would be a really nice start.

Anyway.

Yudkowsky argues (very convincingly IMO) that all of this could easily happen. In AI ethics/safety, it’s currently trendy to assign a “P-DOOM” number (meaning the estimated probability that we’re all doomed) to someone’s AI safety assessment.

Yudkowsky is like P-DOOM = 99.999.

He’s VERY convinced that we’re fucked.

And he’s also very convincing.

At least he is to me.

Convincing enough, anyway.

Ultimately, Yudkowsky asserts that humanity absolutely, completely, one-hundred-percently must not build it.

So, in other words:

If some egomaniac, testosterone-poisoned gazillionaire, or gung-ho tech bro, or money-hungry corporation, or delusional psychopath politician, or North Korea—

Or WHATEVER.

It doesn’t matter who.

Even if it’s Martin Luther Gandhi.

If anyone builds it…

We’re all fucked.

So…

In other words:

We’re all fucked, basically.

Because capitalism.

It’s a done deal.

Assuming it’s even possible.

Which…

Seems like it pretty much is.

And assuming it’s profitable.

And so far, yeah.

Or assuming it confers power.

Again:

So far,

It totally does.

Or all of the above…

Then someone will FOR SURE do it.

Given all that:

If Yudkowsky is even 2% correct, I think we need to REALLY focus on AI safety research and ethical alignment. I mean, we might as well, right? Even if we’re deeply skeptical. What’s the harm in AI safety research and policy? Well, if it slows down AI research, and we (whoever we are) lose the AI arms race, then they (whoever they are) will win. And we (again, whoever we are) are basically fucked.

So.

Ego-driven, consumer-driven, nation-state politics and capitalism are at the bottom of this problem—and just about every other BIG humanitarian and existential threat we currently face.

Anyway.

I’m probably not going to be alive for the worst of this.

But someone will pay the price for all of this at some point.

And…

#holyfyuck!

5/5 stars ✨

LAST THING.

Sometimes people comment about my reviews before or without reading the actual book.

If you’re skeptical of Yudkowsky—

And lots of people are.

Including me (believe it or not).

Please read the book.

I’d LOVE to have a conversation about the book.

But I’m less inclined to have an argument based on the content of this review alone.

🤩
Brad Lyerla
222 reviews · 244 followers
October 15, 2025
This book has scared the hell out of me. We are in a race to solve the problem of alignment before someone grows an ASI.

What is an ASI? It is an artificial super intelligence. When will one be built? Any year now. How is an ASI different from an existing AI? An ASI thinks for itself.

What is the problem of alignment? It is the challenge of giving an artificial super intelligence rules/values that will assure that the ASI does nothing to hinder the flourishing of humanity.

Why is that necessary? Because ASIs think for themselves.

Why is alignment so difficult? Because no one understands how AI works, much less how ASI will work.

Remember reading Asimov’s I, ROBOT when you were a kid? The prime directive for robots was a benign-sounding rule like “allow no harm to come to a human”. When situations inevitably arise and a robot is confronted with a circumstance where harm to some human cannot be avoided, I, ROBOT posited that the robots would wig out.

This book predicts that when ASIs learn that humans are not necessary for ASIs to flourish, then ASIs will allow (or cause) humans to die out because we compete for resources. That is, unless humans find a way to reliably align ASI values with human survival first.

Right now no one knows how to do that. The authors make a compelling case that we are looking at a genuine threat of the mass extinction of humanity.

You should read this book. Do not pass it off as nerdy hysteria. Many knowledgeable people in the AI field are deeply concerned. As usual, the folks making billions are keen to keep this all very quiet and out of the public eye.
Josh Thorsteinson
2 reviews · 1 follower
September 16, 2025
The title is not an exaggeration. This is a short and accessible explanation of the most important problem in the world: the threat of extinction from superintelligent AI.

This book came out today. I started reading this morning and finished this afternoon. I was excited to read it, to say the least, and Yudkowsky and Soares didn't disappoint.

If you don't think AI is the biggest threat facing humanity, read this book. If you do, buy copies for your friends and family. Let's hope the authors are wrong and fight as if they're right.
Jason Furman
1,400 reviews · 1,625 followers
October 19, 2025
These reviews started as quick notes to myself to help organize my thoughts about what I was reading and remind myself about them in the future. I generally don't view these as being real book reviews or even designed with other readers in mind--although they are public (and I sometimes share on X).

And on this one, I have nothing more intelligent or useful to say about this book than the excellent pieces by Timothy B. Lee and Boaz Barak. I also got a lot out of Ezra Klein's discussion with Eliezer Yudkowsky, which I listened to after reading the book.

That said, here goes: I really enjoyed reading this book. It was thought provoking. I liked the parables and fairy tales it used to explicate its points, several of which will stick with me. It anticipated some of my objections and skepticism quite well. But ultimately I was not at all convinced by it for reasons I'll explain (and that Barak and Lee explain much better).

The thesis of the book is more straightforward than just about anything I've ever read and is right in the title, "if anyone builds it, everyone dies". The authors don't view this as a possibility but as a certainty. And it does not matter who builds it. Moreover, they don't exactly explain how code in a machine will get loose in the world and kill EVERYBODY; in fact, they argue they don't need to explain, using the analogy: "if you were a military advisor in 1825 and you knew a time portal was opening to the year 2025, you wouldn’t be able to predict exactly what weapons the people on the other side would have. But if it comes to blows, you still shouldn’t expect to win." Their policy solution is a global treaty to basically stop multiple-GPU AI research, enforced by bombing any rogue data centers out of existence.

Of course, we're not necessarily facing something that has had two hundred years to evolve and improve. Superhuman AI progress may be fast, but it is hard to imagine that one model will be slightly ahead of all the other models one day and then massively ahead of them within months--the near-future story of an AI named "Sable" that the authors tell us. That massive, discontinuous technological jump is key to their argument, because it means the AI doesn't kill a few people and give us a chance to error-correct; instead, it kills everyone before we know what is happening or can error-correct. It also assumes that it is basically the only AI out there, and that humans don't have any other AIs or capabilities.

Moreover, the book does a good job conveying how weird and alien the AIs we "grow" rather than "craft" are, even though they still mostly do what we want. And there are lots of other complex systems that are grown rather than crafted--think of the economy as a whole--so this isn't exactly unique. I've been skeptical that lack of "interpretability" is really unique to AI, as opposed to something closer to the general condition of humanity except in some very isolated and simple special cases.

Finally, there is a prioritization question. Does the AI doom discussion distract us from all of the much more likely, albeit much smaller, potential risks of AI, ranging from suicide and addiction to enabling an authoritarian state to expand its control or even successfully take over neighboring countries? To be clear, I see AI as much, much more likely to be a positive than a negative, but it is worth paying attention to how to mitigate those negatives. The authors, however, are convinced AI will kill us all, so understandably--from their perspective--they don't waste any time on these issues.
Yuri Krupenin
135 reviews · 361 followers
December 7, 2025
One of the most frightening real-world consequences of the current AI boom: Yudkowsky's schizoposting is once again being sold in book form, and everyone around just nods along.
Fred Oliveira
11 reviews · 5 followers
September 18, 2025
For the longest time my Twitter pinned post read "Most things that cause me anxiety are coordination problems". It was a tweet about many things, but one of the things it was about was the idea behind this book: that superintelligence is dangerous and we need to figure out ways to mitigate its risks (challenge level: impossible) or maybe not build it at all.

Superintelligence is a coordination problem - it's a massive carrot at the end of a stick, and someone will end up going for it. And if someone goes for it, it "better be the good guys" (one normally hears). And because it better be the good guys, they better speed to it because oh no the baddies also have compute now, and "their" incentives are even worse than "our" incentives. You get where I'm going with this. And so all the alignment and safety work that needs to go into making sure we can understand and control self-replicable, smarter than us intelligences goes out the window.

That's also the argument that Eliezer and Nate make in this book. They say it is extinction-level dangerous to build. And they've been doing this for much longer than most, so even if one is a skeptic of the extinction-level claim (most people do agree that superintelligence is at least dangerous to humanity), it makes sense to ignore incentives and the carrot at the end of the stick, and engage with the argument.

This is an important book to read. Mostly because we need more people to think about this problem, at all levels of society: from the people building these systems, to the people regulating said systems, to the people using them. You should read it.
David W. W.
Author · 13 books · 50 followers
September 19, 2025
One measure of the value of a book is the number of points in it where I think to myself, "That's a great argument / example / parable; I look forward to including it in my own conversations with people in the future". On that score, IABIED is among the best books I've ever read.

As for the overall line of argument in the book, at no point did I feel it was mistaken or forced. At a few points, I anticipated more details than were actually provided, but I see that there are extensive additional background materials available online: https://ifanyonebuildsit.com/resources

The writing is generally clear and hard-hitting. I wonder if some of the strength of various parables will fly over the heads of some readers. But I hope to be pleasantly surprised by what politicians (and their advisors) actually take away from reading the book.

On checking online reviews, it's evident that not every reader is persuaded. Looking more closely at these negative reviews, I suspect these critics have read IABIED in only a cursory manner, searching for points they can cherry pick to bolster their pre-existing prejudices.

In conclusion, I encourage *everyone* to take the time to read the book in its entirety, and to savour its arguments. No topic is more urgent than figuring out how to avoid ASI being built using current methods and processes. That's the case IABIED makes, and it makes it well.
1 review
September 17, 2025
As soon as I saw that the nest had 91 pebbles…

I knew it was wrong.

Amazing book. Both the content and prose were on point. You won’t necessarily like what you’ll read (we’re probably doomed), but you’ll like reading it.
1 review · 1 follower
September 18, 2025
Multiple AI companies have the explicit goal of building an artificial intelligence that vastly surpasses human capabilities in all domains. Many experts think that the companies will achieve that goal within a decade. This is a huge deal.

This book is an accessible introduction to AI safety, which is the defining problem of our time. While AI safety has historically mostly been a small field that is difficult for non-experts to get a sense of, this book is an important step towards making AI safety more accessible to everyone.

If you're not thinking and talking about AI safety, you should be. These companies are building technologies that are on track to derail your life plans at the very least, and could destroy humanity in the worst case. You deserve to understand where the tech is going and have a say in what happens. Do yourself a favor and read this book now so you're not surprised later.
1 review
September 18, 2025
If you're at all wondering: please read this.

It's totally fine if you disagree, or think it's crazy; just please figure out WHY you disagree!

At first I was very suspicious that it was just a waste of money when I can read other stuff by these authors online. I don't think it's a waste of money any more.
Brian
31 reviews · 7 followers
September 26, 2025
At the very least, this book is fascinating. First, I should point out that my version of the book was officially about 260 pages long, but the book links to supplementary materials available only online. I found these materials worth reading, and I read them all. I would estimate that the book would run over 600 pages if this material were included.

This is a very compelling argument as to why the authors believe that the development of super intelligence by humanity will lead to cataclysm.

It is important to note that both Yudkowsky and Soares are reputable. Though plenty of people in the field of Artificial Intelligence disagree with them, they are both respected and influential both in and out of the world of artificial intelligence.

I have no expertise in this field, but I have been following the issues and debates surrounding AI. I was previously familiar with Yudkowsky’s arguments. I follow him on social media and have heard him interviewed. With all that, I think that this book would be accessible to someone not familiar with these topics.

To illustrate their points, the authors explain what Large Language Models, like ChatGPT, are, how these systems are created, and the problems inherent in getting them to behave. They go on to explain how these trends and problems will eventually produce a rogue superintelligence that will lead to human extinction. The authors then propose the solution of implementing an international treaty prohibiting the further development of AI. The authors are great explainers and put together convincing arguments.

It is important to recognize that there is a wide range of opinion on these issues, and I have heard many other compelling and competing arguments. Consequently, I remain uncertain how Artificial Super Intelligence will turn out. I do think that it is a major concern for the future of humanity and demands serious attention.

Agree or disagree with the authors, anyone who is interested in this topic or the future of humanity will find this worth reading and important.
Cav
907 reviews · 205 followers
October 11, 2025
“MITIGATING THE RISK OF EXTINCTION FROM AI SHOULD BE A global priority alongside other societal-scale risks such as pandemics and nuclear war...”

If Anyone Builds It, Everyone Dies was a sobering and pressing look into a fascinating and thought-provoking discussion. With the recent proliferation of artificial intelligence, everyone has jumped on the AI bandwagon, ultimately in pursuit of a "superintelligent" model, referred to here as "ASI" (Artificial Superintelligence). But what will this model look like? How will it behave? Will it benefit mankind, or will a Terminator-esque scenario unfold? These are the questions this book looks to address.

Co-author Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence.
Co-author Nate Soares is the president of the non-profit Machine Intelligence Research Institute (MIRI). He has been working in the field for over a decade, after previous experience at Microsoft and Google.

Eliezer Yudkowsky & Nate Soares (photo)


Broadly speaking, people in the discussion around what future AI will look like break into two opposing camps: "Doomers" and "Bloomers." As their respective names insinuate, "doomers" see dark clouds on the horizon, while "bloomers" are optimistic. Unfortunately, I have to admit that I am part of camp "doom." More below.

The book opens with a good intro, setting an effective pace. The authors have a decent style here, although I found some of the writing a little dense at times. A minor gripe, as the subject matter is inherently fascinating and terrifying at the same time. There were many interesting short bits of writing here that left me thinking after I finished, which is not something I come across in many books.

The authors open the book with the quote above, and it continues:
"...In early 2023, hundreds of Artificial Intelligence scientists signed an open letter consisting of that one sentence. These signatories included some of the most decorated researchers in the field. Among them were Nobel laureate Geoffrey Hinton and Yoshua Bengio, who shared the Turing Award for inventing deep learning. We—Eliezer Yudkowsky and Nate Soares—also signed the letter, though we considered it a severe understatement.
It wasn’t the AIs of 2023 that worried us or the other signatories. Nor are we worried about the AIs that exist as we write this, in early 2025. Today’s AIs still feel shallow, in some deep sense that’s hard to describe. They have limitations, such as an inability to form new long-term memories. These shortcomings have been enough to prevent those AIs from doing substantial scientific research or replacing all that many human jobs.
Our concern is for what comes after: machine intelligence that is genuinely smart, smarter than any living human, smarter than humanity collectively. We are concerned about AI that surpasses the human ability to think, and to generalize from experience, and to solve scientific puzzles and invent new technologies, and to plan and strategize and plot, and to reflect on and improve itself. We might call AI like that “artificial superintelligence” (ASI), once it exceeds every human at almost every mental task."

They pull no punches, saying this of the not-too-distant future of AI and the pressing nature of the threat:
"The months and years ahead will be a life-or-death test for all humanity. With this book, we hope to inspire individuals and countries to rise to the occasion.
In the chapters that follow, we will outline the science behind our concern, discuss the perverse incentives at play in today’s AI industry, and explain why the situation is even more dire than it seems. We will critique modern machine learning in simple language, and we will describe how and why current methods are utterly inadequate for making AIs that improve the world rather than ending it."

Before ultimately issuing this stark warning:
"If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die."

It might be easy to dismiss these warnings as no more than Chicken Little crying about the sky falling. However, the authors argue quite compellingly here that the threat of a super-intelligent AI is an unprecedented existential threat.

IIRC, Yuval Noah Harari has an analogy in one of his books about tigers and men. Basically, he says that tigers are the most formidable land predators. They are much stronger, larger, and faster than people. They have sharp claws and teeth and can see in the dark. However, despite their superior hardware, people everywhere don't live in constant fear of being ruled over and preyed on by tigers. We don't exist in a state of terror about tiger attacks and predation, because we are smarter than tigers and we have learned how to control them. Now, instead of living in fear of them, we observe them in wildlife enclosures and zoos. Well, the power dynamic between tigers and man will become the power dynamic between man and an ASI. Something that is orders of magnitude more intelligent than any person won't have a problem controlling us to its own ends, whatever they may be... (Maybe to observe us in wildlife enclosures and zoos?)

On a related note, the authors lay out an analogy similar to the tiger-and-man example above, involving Spanish Conquistadors arriving in the New World with rifles and confronting indigenous warriors wielding obsidian-bladed weapons. The idea of the Spanish having sticks that can kill just by being pointed in their direction would seem unbelievable and inconceivable to the indigenous warriors. Mankind will be those same obsidian-wielding warriors, oblivious to the ASI "rifles" that will be their demise.

So then, why is everyone going so hard to get to an ASI? Surely these egghead scientists and researchers are aware of the possible dangers and/or ramifications of an ASI. Well, no, say the authors. They note that history is replete with examples of people completely underestimating or discounting an existential threat and only learning of their blunder after the fact, the hard way. The book lays out a few historical examples.

Well, unfortunately, we won't have a second chance with an ASI. You can't put it back in the box once it's out. There will be no steep learning curve; it will be an insurmountable cliff. And once an ASI arrives, life will never be the same again.

What strategy will a superintelligent AI use to dominate us, and what will its aims be? We can't know. They drop this quote:
"Maybe an AI will be trained into superintelligence. Maybe many AIs will start contributing to AI research and build a superintelligent AI using some whole new paradigm. Maybe one AI will be tasked with selfmodification and make itself smarter to the point of superintelligence. Or maybe something weirder happens; we don’t know. But the endpoint of modern AI development is the creation of a machine superintelligence with strange and alien preferences.
And then there will exist a machine superintelligence that wants to repurpose all the resources of Earth for its own strange ends. And it will want to replace us with all its favorite things. Which brings us to the question of whether it could..."

At the heart of the problem here is the unsolved "alignment problem." Basically, "the challenge of ensuring that artificial intelligence systems, especially highly capable or autonomous ones, behave in ways consistent with human values and intentions." Until the alignment problem is solved, an ASI cannot be allowed to proceed, argue the authors. Unfortunately, not only are people researching an ASI not proceeding with caution until the alignment problem is dealt with, they are racing full tilt towards the abyss...

********************

I really enjoy discussions of future AI, and the authors didn't disappoint with this book.
Although I would obviously love to be proved wrong - FWIW - I was already firmly in camp "doom" before reading this book. If anything, this book further cemented my opinion on the topic, and served to crystallize many of my scattered thoughts on the issue.
Put this one on your list if you're interested in the AI discussion.
5 stars and a spot on my "favorites" shelf.
Tanja Berg
2,279 reviews · 567 followers
October 26, 2025
In case you should be in any doubt, we are all royally fucked. In a few short years, one of the corrupt billionaires racing toward generative artificial intelligence will accidentally or intentionally build a superhuman alien artificial intelligence that will destroy us, the earth, and the universe. There will be no way to control it, understand it, or stop it. The book deftly confirms all my AI fears. After all, the fairly stupid AIs we’ve got now have successfully polarized us, helped spread misinformation, and caused several genocides (Facebook and the Rohingya, to name one example). Do you really think that a truly smart AI would be benevolent? No, all AI takes the absolute worst traits of humanity and amplifies them, and things will only get worse. This book claims to somehow also have some hope, but I honestly couldn’t find it. Best to enjoy life now; none of us have much left before the tech billionaires ensure the end of all of us.
emma
334 reviews · 19 followers
December 4, 2025
to be clear: i agree largely with the premise of this book. i don’t think that it’s wise to create artificial superintelligence, as i have enough concerns with the kinds of AI available now.

so when i give if anyone builds it, everyone dies a whopping two stars, it’s not because i hate the book as a concept or want to dismiss yudkowsky and soares as inflammatory and fearmongering (but, also, see the title). rather, i just think that this is a pretty horrible example of rhetoric.

obviously, it’s rough trying to write a book about something that is dangerous largely for how unknowable it is. but a vast majority of the text is a science fiction what-if scenario that i can’t actually see convincing an audience of, well, anything. there are also a number of arguments predicated on principles of biological evolution, which yudkowsky and soares weaken somewhat by continuously reminding their readers of how non-human and non-biological artificial intelligence systems are.

so here i am, someone who agrees (!!) with these authors, picking apart their arguments for some pretty glaring issues in their reasoning. i can only imagine what a true AI dudebro would do with this one. disappointing.
Jon
22 reviews
September 19, 2025
As an observer of systems, I understand that some outcomes can be predicted just based on the system's dynamics: you can't know the specific path to equilibrium or even how long it might take, but you can know the equilibrium state. The authors' argument is based on this kind of inference, complete with analogies to similar problems in similar fields across history and specific examples of these issues already happening with existing LLMs, albeit at smaller scale. I think any reasonable reader, even a skeptical one, should come away at least 5% convinced, and even a 5% chance that everyone and everything that you care about will be wiped out in a few years is truly terrifying and worth acting on.

The argument is also accessible. I was able to listen to the whole book at 2x speed without having to stop to understand anything (though to be fair I've heard most of these arguments before). That said, the reading grade-level is still fairly high for the average English speaker, and some of the arguments could use more refinement (e.g. I expect the prime-nesting aliens parable is really head-scratching if you don't already know where they're going with it).

I did chafe a little at a few points where the style felt a bit heavy-handed or parochial in ways that I worry might turn some folks away. People are already primed to reject it, as is evidenced by all the counterarguments the authors have to address and the fact that so many in the AI field are pushing hard in the other direction. I think there are two big reasons for this, only one of which the authors address and even then only sort-of:

1. "It is difficult to get a man to understand something, when his salary depends on his not understanding it.": The authors quote this, but they don't have any good recommendations for how to counter it.

2. Stages of Grief: This book is basically equivalent to a terminal cancer diagnosis for all of humanity. The first stage of grief is denial, so anyone who encounters this for the first time is naturally going to be desperate to debunk it somehow (I know because I still find myself doing this every time I grapple with this issue even after years of being onboard). Addressing denial directly might help readers expect to feel intuitive resistance to the authors' argument. Then they can prepare themselves to step back and evaluate the logic and evidence on their merits rather than giving into the strong temptation to accept the first plausible counterargument so they can escape the discomfort and reassure themselves that these people are crazy and the future is safe.

Cancer diagnoses are notoriously difficult for people to hear, but even terminal cancer has treatment options and it sometimes goes into remission. You can't address this if the patient refuses to believe they have cancer though, so hospitals have established support networks and best practices for delivering the message and getting people through the grieving process. In contrast, this book says you have cancer and then points out flaws in your objections as if you're just a logical brain on a stick. Fortunately, the broader AI Safety community actually does have some resources to help if you're feeling a bit anxious from thinking that the world may be about to end.

I find it somewhat exasperating that despite making it their mission to get people onboard with this, they still just double-down on the logical arguments and ignore the human side, dismissing people's natural reactions as "hopeium and copeium" instead of trying to help people through the grieving process so they can join the movement. Still, this book is a solid case for why we're in danger and a solid call to action to coordinate to save ourselves and our world!
Douglas Summers-Stay
Author · 1 book · 49 followers
September 18, 2025
I've enjoyed Yudkowsky's writing for a long time-- at least 20 years. He's the author of Harry Potter and the Methods of Rationality, one of my favorite books. I have read most of what he has published online, which is quite a large body of work. So most of the ideas in this book weren't new to me. The most interesting and new part to me was where he explains how AI desires (wants) come about and what purposes they serve.
I think this is a powerful, concentrated look at his main focus, and that these ideas deserve serious consideration by everyone. Personally, the main area where I disagree with his perspective is that I think you will have a period of struggle between many powerful AIs rather than a single one that zooms far past all the others. I think a lot about how I should change my efforts at AI research for the DoD based on my thoughts about these ideas.
I know this isn't a great review, but (and I never say this): please read the book, think about it, talk about it, and think seriously about what actions you can take based on the perspective you come away from it with. I think it may be one of a very few chances to change the "common-sense" about how we should respond to the coming AI dominated world.
Ben De Bono
515 reviews · 88 followers
September 27, 2025
Well, this is pretty much the most terrifying thing I’ve ever read. I don’t know if the authors are right — maybe AI research will plateau or we’ll solve alignment or ASI will turn out to be benevolent after all — but I’m not willing to bet the future of humanity on it. As the book points out repeatedly, you only get to be wrong on this one once. We’re gambling the future of humanity on the hope that it’ll all work out. Maybe we should be a bit more careful before we plunge headfirst into the great unknown.
Dave
296 reviews · 29 followers
June 28, 2025
This is an alarming book about the dangers posed by ASI that I hope finds a wide readership, but I fear I may not have been the right audience. The authors do an excellent job breaking things down in a relatable way and don’t overwhelm with excessive nuance. I’d recommend it to anyone who isn’t concerned at all about AI.
30 reviews · 1 follower
October 19, 2025
The case the authors make is clear from the title alone. They argue for it over multiple chapters, and I do find it persuasive. They give an overview of how AIs work and how much of their internal data structures and preferences is simply not observable, or rather not understandable.

This gives a helpful background on the messianic thinking of AI people and their investors. There seems to be some sense of the danger of not being able to engineer AI in a way helpful to humans, yet they still think they are able to align it - somehow.

The parables used in the book are good and clear, driving their call to action home, and still I am worried that I overly rely on their narration and my superficial understanding of the technology.

If they are right, there needs to be a political project built that deliberately destroys the infrastructure needed to build ASI (s for super), with all resources locked on this one goal of stopping the development.

It's wise to be that clear, because you would rather look dumb afterwards than have it actually happen. But what am I supposed to do? How do I gather enough knowledge to properly judge their warning? I do feel lost after this book, because the implication of the analysis they put forth is that no one really cares, or everyone is blinded by the alchemists selling us their non-existent powers. That alone is frightening.

I don't know what to make of it. May it not pan out like this.
Elaine
102 reviews · 4 followers
October 12, 2025
Really, truly excellent. I think the reviews calling it the most important book of the decade are really not overstating it, and it should be posted to politicians to read. Even if the outcomes of AI are only a quarter as bad as the one set out in the book, there is a great deal about the risks of AI that just is not understood on a first-principles level by most, and we all need to consider them more deeply. I felt I learnt a huge amount in terms of how AI could operate, and why predicting what an AI would want or value is really not straightforward or intuitive.

The writing and parable structure used is very accessible and effective. Having the core of the book be so short and readable, while giving you hundreds of pages more to read as supplemental materials via QR code (should you so wish), is a really smart way of presenting the ideas at hand. I would love to see a "director's cut" volume that published the entire thing in one book you could read at once.
Mad Hab
161 reviews · 15 followers
December 5, 2025
I would say this is a good book and a good read.
A little bit close to science fiction, but we have what we have.
However, you need to agree with the authors on at least two things for all this to make better sense to you:

- There is a chance that one of the stochastic parrots will become AI or ASI.
- There is trust in what we call "democratic institutions" in the "civilized world."

Even if you don't agree on these two points, this is still a good book to read - to read and disagree with.
Derek Ouyang
299 reviews · 41 followers
September 18, 2025
I spend most of my professional energy on the short-term risks and opportunities of AI, so this is a refreshing reminder of the existential problem on the horizon, and of just how irresponsibly we're lunging toward it year by year. Yudkowsky has written 2 of my favorite books of all time, so I had high hopes for his being the right messenger at this moment. I think this lands just about right as a relatively short book for the wide audience it needs, with a fairly strong dosage of his excellent parable style, but I must confess I was hoping for a book about twice as long that really goes all in on deep mental rewiring for the reader. That would have been a book to really stand the test of time, and maybe what we really need to survive the next decade.
38 reviews · 2 followers
September 18, 2025
I Wish It Weren't So

A cogent and logical explanation of why we are totally screwed unless we do something. Also an explanation of why we aren't if we do, which is always useful under these circumstances.
Nirav Savaliya
70 reviews · 31 followers
November 18, 2025
Textbook fear mongering. A prime example of how you can bullshit your way into any stance if you weave enough thought experiments. 1 star because I don't support any work that asks policymakers to bomb AI labs.

Andrew
2 reviews · 10 followers
September 12, 2025
Very spooky.

I can only hope the authors are wrong (see their Closing Words) but even if there is only a small chance of them being right that is cause for concern.

Well written and a fairly easy read for the general public. I suppose I should now read some counter-arguments, but from a risk-analysis standpoint this book presents a case that seems difficult to refute. I'm looking forward to reading more about AI progress, and this book has been a good starting point.
Andre
409 reviews · 14 followers
September 26, 2025
I've done a lot of reading about AI. From technical books to popular treatments to books that are purportedly about AI but aren't really. So when I heard about this one, I knew I would be reading it at my earliest opportunity. I thought I'd have to wait quite a bit longer to get a copy from my local library.

The title telegraphs what the authors are trying to convince you of. It's not subtle, but it's not clickbait either. They make every effort to present their argument and persuade you that their perspective is the correct one. I went into this expecting not to like it at all and to be spotting holes in their arguments. However, that was not the case.

In actual fact I find the logic straightforward if not entirely compelling. Their conclusion follows from their premise, and they do a good job over 272 pages backing it up. It's not their logic that is at fault, but their assumptions. Two key assumptions in particular.

ASI, or artificial superintelligence, is what's going to be our undoing. But when you read the book, what they are talking about sounds more like AGI, or artificial general intelligence. Actually, what they are trying to do is merge both concepts without really addressing it. So AGSI—artificial general superintelligence (I put it in this order because ASGI sounds dumb). We already have ASI in some domains (chess and other games, protein folding), but we don't have AGI. At least not yet… This is their first assumption: that AGI will necessarily be AGSI.

Building on this is the assumption that we will cross a singularity where the AGSI becomes self-aware enough to start recognizing that it needs to improve itself. It won't achieve this by waking up at 5:30am and making drastic changes to itself. How it achieves this isn't really relevant to the point of the book, but they do suggest a very movie-plot scenario that reads like bad fanfiction from an adolescent who's read a lot of Asimov, Heinlein, and Herbert.

If the AGI becomes AGSI, and if it crosses the self-awareness barrier, then "bad things" will happen. That's the claim the authors make. Let's take the assumptions as true. Even then, this does not inevitably lead to horrible outcomes for humankind. But even if the chance is low, the results are dire, so we should act. Yet this doesn't really line up with how humanity handles things in general. We dodged nuclear armageddon, but we still do stupid stuff like gain-of-function research on viruses (cough COVID cough) and become ideological about climate change instead of approaching it as more of a tradeoff of compromises.

I am not at all convinced that the assumptions are correct. Is AGI even possible? I lean towards "no." Mostly because of what I've been learning coming out of 4-E cognitive science. Is a singularity where change becomes so rapid that we can't keep up possible? Again, I lean towards "no." Mostly based on what I've read about the diffusion of artificial intelligence as a general-purpose technology.

Should you read this book? Maybe. Should you read only this book? No. Should you take it with a grain of salt? Yes, at least 10 grains.
