
Better without AI: How to avert a moderate apocalypse... and create a future we would like


Paperback


About the author

David Chapman

145 books · 19 followers

Ratings & Reviews



Community Reviews

5 stars — 4 (40%)
4 stars — 3 (30%)
3 stars — 2 (20%)
2 stars — 1 (10%)
1 star — 0 (0%)
Displaying 1 - 3 of 3 reviews
Author · 1 book · 7 followers
November 15, 2024
This book changed my thinking about the threat of AI more than any other book I have read in the past 20 years. If you even suspect you might find it useful, read it! If you are unsure, start reading it at the author’s website and I suspect you will want to finish it.
Brigitte Gemme
Author · 1 book · 15 followers
July 17, 2024
A friend who is more deeply versed in AI issues than I am mentioned this book to me - though he hadn't read it! - and, in light of the author's background, I was intrigued by the bold title. I don't exactly regret reading it, though at times I did find the style a bit impetuous and Chapman a bit pompous. I feel that the key message of the book gets somewhat diluted and could have been more sharply presented in a long article. Still, I appreciate that the author helped me form clearer thoughts about the biggest issues with AI advances.

Here are my three takeaways, in my own words, inspired by Chapman's views:
1. Stop worrying about "scary AI" (paperclip-apocalypse style) and AGI. Those are distracting us from seeing the most pressing problem: existing AI applications that have become deeply embedded into our daily lives through the content recommender engines of Mooglebook (I would make it Amooglebook to include Amazon) and are already deeply manipulating our thoughts and disabling our institutions for their (shareholders') benefit. The public good is sacrificed to whatever makes the most money for the advertising industry.

2. The promise of a solved world thanks to AI advances, for example with an increase in scientific productivity leading to disease eradication, is greatly exaggerated. It's a mirage promoted by either naive or deceptive agents in the AI sector, one that lulls us into complacency in the face of the highly problematic issues of existing AI technology.

3. This third one is my thought arising from the book, not explicitly mentioned by Chapman: energy-hungry AI-powered recommender engines are driving up carbon emissions, further disrupting the climate, to manipulate us to buy more and more material goods that are made by intensively despoiling the earth and eliminating a growing number of species. No need to wait for the paperclip apocalypse! It's already here, except instead of paperclips it's fast fashion, pet toys, and cheap wearables. Great.

Among other suggestions, Chapman proposes that we fight back by actively criticizing existing AI uses, extricating ourselves from the surveillance of recommender engines, pushing for advances in cybersecurity, and calling for increased regulation. I can't say that I am filled with hope that a movement will arise to carry that banner forward, but it's good to at least have put words on my concerns.
Timon Ruban
156 reviews · 26 followers
April 21, 2024
I am not afraid of the impending AI-pocalypse, and I wanted to give someone the chance to make me so. Unfortunately, but not surprisingly given the fast-moving nature of recent advances in LLMs, many of the claims in the book already seem out of date just a few months after publication.

What resonated?
- The idea that the real risk of AI (as of any other technology) comes from creating new pools of power. Whether that power is exploited by a mind-like AI or your evil neighbor should worry you all the same.
- Chapman harps on themes similar to those in Tim Urban's fantastic What's Our Problem?: A Self-Help Book for Societies. Rather than being afraid of some future scary AI, be afraid of the AI that controls the information you consume today. These recommender systems, present across all social media, exploit the worst of human psychology, lead to lower-level thinking, and undermine the shared myths our most important institutions are based on. This does not seem in line with our own best interests.
- Research on mechanistic interpretability is important. ✅
