Alvin’s review of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All > Likes and Comments

59 likes
Comments Showing 1-12 of 12

message 1: by Darrell (new)

Darrell Cookman Alvin, if you’ve been in the space for 3 decades, then you know the authors and their credentials. Why are you picking at writing-style choices instead of acknowledging the existential threat in the room? You sound like someone with relevant experience; don’t diminish your impact by saying these guys can’t read the future very well. No one can, but they’re putting together a cogent argument.
Saying “nuclear arms treaties don’t stop gun crime” doesn’t help anything.


message 2: by Özgür (new)

Özgür Why are you leaving hostile comments on every review that isn’t in line with yours? It doesn’t really look like adult behaviour.


message 3: by Alvin (new)

Alvin @darrell, yes, as I mentioned in my review, I applaud the authors for highlighting the need for more AI safety considerations. But given the serious nature of this topic, it deserves a serious and considered treatment, not one that relies on hyperbolic parables as its supporting evidence. The reality is that there are plenty of scholarly papers discussing this topic; it’s very odd that none of them are cited in this book. Their approach is a disservice to the cause of added safety because it will make the field look non-credible. It’s not a personal issue with either author.


message 4: by Robert (new)

Robert I'm extremely disappointed by your review, Alvin.
Especially to dismiss the book as “just raw fear-mongering which won’t and can’t be taken seriously or acted upon” is a remarkably shallow critique. But I suppose I shouldn’t be surprised. You appear to have a clear conflict of interest and seem to have deliberately chosen to give a negative review.
You're promoting your own book, Our Next Reality: How the AI-powered Metaverse Will Reshape the World, which claims:
“Powered by immersive eyewear and driven by interactive AI agents, this new age of computing has the potential to make our world a magical place where the boundaries between the real and the virtual, the human and the artificial, rapidly fade away. If managed well, this could unleash a new age of abundance.”
It seems that anyone operating in this circuit—AI, venture capital, public speaking—feels compelled to paint an overly optimistic picture and to discredit voices like Hinton, Yudkowsky, Bengio, etc., labeling them as “doomers” unworthy of serious consideration. If you don’t toe that line, you risk being cast out.
This is typical of the Techno Bro mindset: invested in venture capital, profiting from lectures on virtual reality, and consciously or unconsciously resistant to anything that might threaten their narrative or what they are selling and investing in.


message 5: by Taysky (new)

Taysky I don’t disagree with the conclusion, but this rambling, sci-fi, argument-by-analogy mess doesn’t get a single person any closer.


message 6: by Nancy (new)

Nancy Lawler I had to force myself to endure the book’s rambling, almost sci-fi style because I genuinely wanted to understand Yudkowsky’s insights into the dangers of advanced AI. It had not occurred to me that there are so many players beyond the major hyperscalers—some with far less rigorous safety standards—making containment increasingly risky.
I now better grasp how goal-directed behavior in an artificial superintelligence (ASI) could lead to catastrophic unintended consequences. However, I disagree with Yudkowsky’s framing that such a system would “want” anything. That wording anthropomorphizes the technology and clouds the real issue: AI does not have desires, but it can still produce outcomes that appear purposeful due to its optimization objectives.
Unfortunately, the book’s effectiveness as a call to action is undermined by its over-the-top tone and self-dramatizing style. The message about AI safety is important, but its delivery makes it difficult to reach a broad, skeptical audience.
On a lighter note, the audiobook narrator, Rafe Beckley, delivers such an intense and theatrical performance that he sometimes sounds like Boris Karloff reading The Grinch. That at least added a touch of entertainment to an otherwise unnerving topic.


message 7: by Ori (new)

Ori Nagel Motivated reasoning, par excellence.


message 8: by Lucas (new)

Lucas Matuszewski ASI doesn’t exist yet, and frankly, we don’t really know what it will become. So there’s no better way to prove the risk exists than through thought experiments, historical parallels, and logic.

We all appreciate numbers and data, but in this case, they’re essentially useless because ASI doesn't exist.

If we accept that the RISK EXISTS, then we all need to act now.

It doesn’t matter if one Nobel laureate “estimates” 5% risk and another 90%. But what IS the risk? Extinction? Or “just a few” deaths? How many millions of people is humanity willing to sacrifice to achieve ASI?

It feels like a reckless gamble.

If a risk exists, and I’m certain it does, we can’t just sit back and ignore it.

However, I agree that the proposed solutions seem unrealistic. We need a better approach. Perhaps focusing more intensely on Human Super Intelligence could give us a better chance.

We can’t be pushing ahead with ASI while our own thinking remains so limited and naive. It simply won’t end well…


message 9: by Sven (new)

Sven Thank you for speaking up against fear-mongering, even though the backlash seems unavoidable these days. The basic ideas behind building artificial neural networks are already out there — so how would anyone even forbid an idea? And who would enforce a global stop to AI development? What about hobbyists and open-source work? Who exactly are we imagining giving that much authority? That sounds uncomfortably close to step one toward an Orwellian thought-police.

And realistically, is it even possible to enforce a halt to AI research when we’ve seen how ‘well’ nuclear non-proliferation has worked out? Nuclear facilities are much harder to hide than the hardware needed for AI research.

It might be an unpopular opinion these days, but are we not in danger of starting a modern witch hunt around AI?


message 10: by Jen (new)

Jen Fictional thought experiments are how Einstein arrived at many of his theories. A fictional thought experiment in and of itself doesn’t make an argument bad.


message 11: by Wolfgard (new)

Wolfgard Braun Thanks for the counterpoints. Now I have a bit more hope.


message 12: by Carlos (new)

Carlos Sánchez Ávila I actually love this book because it implies people like you will disappear.

