
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference

From two of TIME’s 100 Most Influential People in AI, what you need to know about AI—and how to defend yourself against bogus AI claims and products.

Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works and why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don’t work, and probably never will.

While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about the capabilities of AI and describes the serious harms AI is already causing in how it’s being built, marketed, and used in areas such as education, medicine, hiring, banking, insurance, and criminal justice. The book explains the crucial differences between types of AI, why organizations are falling for AI snake oil, why AI can’t fix social media, why AI isn’t an existential risk, and why we should be far more worried about what people will do with AI than about anything AI will do on its own. The book also warns of the dangers of a world where AI continues to be controlled by largely unaccountable big tech companies.

By revealing AI’s limits and real risks, AI Snake Oil will help you make better decisions about whether and how to use AI at work and home.

384 pages, Paperback

First published September 24, 2024

869 people are currently reading
7,503 people want to read

About the author

Arvind Narayanan

4 books · 59 followers
Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He was named to TIME's inaugural list of the 100 most influential people in AI.

Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was also among the first to show how machine learning reflects cultural stereotypes.

He was awarded the Privacy Enhancing Technology Award for showing how publicly available social media and web information can be cross-referenced to find customers whose data has been "anonymized" by companies.

Narayanan prototyped and helped develop the Do Not Track HTTP header field.

He is a co-author of the book AI Snake Oil and of a newsletter of the same name, which is read by 50,000 researchers, policy makers, journalists, and AI enthusiasts.


Community Reviews

5 stars: 580 (25%)
4 stars: 996 (44%)
3 stars: 558 (24%)
2 stars: 85 (3%)
1 star: 25 (1%)
Jason Furman
1,413 reviews · 1,700 followers
November 2, 2024
Some of AI Snake Oil is very good, including its skepticism about AI hype, an excellent chapter on the limits of AI doomerism, and a focus on how AI is used by humans rather than its autonomous capabilities. But much of the book—including its ultimate recommendations—is deeply misguided, reflecting a misunderstanding of capitalism, a mix of concerns not really AI-related, a one-sided review of the evidence, and a failure to compare AI to the alternatives—namely flawed humans and non-AI technologies. They are also more skeptical about progress in AI than I would be, though I don’t have strong convictions about who is right on this.

The book opens well by pointing out that AI is an overly broad term which confuses debates about it, analogizing it to a world where we only used the word “vehicles” and some people arguing for their efficiency meant bicycles while their debate opponents were focused on SUVs.

They distinguish between predictive AI, generative AI, and social media content moderation AI (in a chapter that feels out of place). They argue that much predictive AI is based on unreproducible papers with several errors, including testing on training data (“leakage”) and a lack of structural models, which means predictions break down when behavior changes. Moreover, companies deploy and sell systems that are often untested and sometimes aren’t even AI (occasionally with humans behind them) as part of widespread AI hype.
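
To make the leakage point concrete, here is a minimal sketch (not an example from the book; it assumes Python with scikit-learn and uses synthetic data) of how scoring a model on its own training data inflates reported accuracy:

# Leakage illustration (synthetic data, hypothetical setup): a model
# evaluated on its own training data looks far more accurate than it
# does on data it has never seen.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("accuracy on training data:", model.score(X_train, y_train))  # near 1.0
print("accuracy on held-out data:", model.score(X_test, y_test))    # lower

A paper or product that reports only the first number tells you little about real-world performance.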

I found this mostly compelling but disagreed in places. They criticize flawed machine bail decisions without engaging with literature showing how it can improve or comparing to how terrible human judges are with their limited time and information. They discuss an AI hiring system that can be gamed based on attire or interview language—again something humans do too, probably worse. They’re overly fatalistic about prediction: while perfect prediction is impossible, we can do better than coin tosses and provide uncertainty estimates for users to weigh errors.

They’re more positive about generative AI except regarding what they view as large-scale intellectual property theft. While I haven’t settled my views here, I’ve long thought IP protections are overbroad and hinder innovation—my instincts lean that way on generative AI too, though I’m uncertain. People get enormous growing benefits from generative AI; if stricter IP protections just shifted rents that might be acceptable, but radically reducing innovation would be problematic.

Their chapter on AI existential threat is masterful and should be widely read. They effectively critique doomer arguments: they expect only incremental progress toward AGI, note that AI risks can be fought with better AI making unilateral disarmament counterproductive, argue “alignment” is premature given unknown future technologies, contend that paper clip-maximizing AI couldn’t exist without human-like understanding, and emphasize focusing on human misuse through measures like restricting bioweapon ingredients.

Their deeper flaws emerge from skepticism of capitalism that leads to indefensible positions. They criticize OpenAI’s Kenyan data annotators earning $1.46-3.74 hourly while engineers make nearly million-dollar salaries at an $80 billion company. This is pure demagoguery—the relevant comparison is to these workers’ alternatives, not to AI engineers. Even their criticism that “data annotation firms recruit prisoners, refugees, and people in collapsing economies” could be read positively: AI creating employment for the least employable is potentially beneficial.

The social media chapter focuses on Facebook’s Type I and Type II content moderation errors, but as they acknowledge, this mostly reflects human judgment rather than AI. They offer no real alternative to this complex task, noting Facebook couldn’t afford to handle 83 Ethiopian languages and moderate rare but crucial events. They praise Mastodon, which is far less usable than X and, by their admission, may not be scalable.

More broadly, they seem nostalgic for public provision, nonprofits, and smaller companies. They note “The early internet was funded by public funds and DARPA... before 1990s privatization,” overlooking that pre-privatization internet was barely accessible and limited in utility. Similarly, criticizing large AI companies ignores that they’ve produced the major breakthroughs.

They argue AI progress will be slow because profit-focused companies won’t invest in understanding how AI works. While true for some firms, well-funded companies with long-term horizons are likely to invest in understanding if it creates competitive advantage.

Some recommendations are sensible—like improving research reproducibility and enforcing deception laws. Others seem tangential, like supporting randomized college admissions above certain thresholds—an AI-irrelevant proposal they wouldn’t extend to bail decisions. Ultimately, what people attempt with AI, especially predictive AI, is challenging—but the alternatives are often worse.

[DISCLOSURE - I asked Claude "Can you do a very, very light edit of this" and posted that edit. I write these reviews very quickly, originally just did them for myself, and often have typos. Hopefully this eliminated the typos and improved the language a little--but it also may have introduced some changes I didn't love because I didn't check Claude's edit carefully. My hope is the improvements outweigh the worsenings--but even better would have been spending more time to take advantage of, but not fully follow, the AI edits.]
Sebastian Gebski
1,259 reviews · 1,444 followers
Read
January 18, 2025
No star rating, as I've realized I'm not the target audience for this book (so it could have been misleading for someone who is).

Word of caution: this is NOT a book for a technical audience. This is not a book for someone who already has some engineering understanding of ML, recommendations, statistical prediction, etc. This is not a book for folks who already understand the basics of Gen AI: transformers, attention, token generation. This is a book for laymen (there's nothing wrong with being a layman) who need a quick upskilling & are interested in the topic mentioned in the subtitle.

What did I wish for (& didn't get here)? I'd really have appreciated a deep dive into the capabilities of modern models - e.g.:
- reasoning
- non-textual knowledge
- multi-sensory knowledge
- brain throughput vs actual "AI" throughput
- glass ceiling in Gen AI
- why AGI can't be achieved with the architectures we have
- what did we learn (if anything) about consciousness - thanks to Gen AI

The most interesting consideration I've found here was about randomness ("luck") and why one can't rely solely on statistics (I don't just mean incorrect interpretation of statistics).

In the end, I can't recommend this one. But I can imagine that someone who's not into software/data engineering may have really enjoyed it.
Vinayak Hegde
775 reviews · 100 followers
December 25, 2024
The book AI Snake Oil offers a comprehensive and critical analysis of the current AI landscape. Its strongest feature is the numerous compelling examples debunking exaggerated claims made by various AI products. The authors meticulously dissect these assertions, exposing fallacies and providing a more grounded perspective. They then delve into the root causes of misinformation and hype, highlighting how different types of AI are often conflated into a monolithic entity.

The authors categorize AI into three primary types: Prediction AI, which uses datasets to make forecasts or predictions; Generative AI, which is trained on data like text, images, or video and can produce new outputs such as text or images based on prompts; and Content Moderation AI, designed to monitor and manage online speech to prevent harmful content. This structured framework helps demystify AI’s capabilities and limitations.

The book’s analysis is both nuanced and contextual. Through numerous examples, the authors demonstrate how many AI models struggle with accuracy, especially when faced with evolving or culturally specific data. They highlight issues like the reproducibility crisis in AI research, where lack of access to code or datasets undermines the credibility of results. Another critical concern is data pollution, where training data unintentionally contaminates inference processes, further casting doubt on AI's credibility and reliability. These issues often render AI less a science and more akin to alchemy.

The authors also explore AI’s societal impact, from displacing workers to increasing productivity among existing employees, which can lead to reduced hiring. Certain tasks lend themselves more readily to automation, altering job roles and reducing worker bargaining power. This shift often benefits employers, reinforcing systemic inequalities.

The book concludes with potential remedies, advocating for thoughtful regulation, industry guidelines, and the introduction of randomness to disrupt feedback loops in processes like hiring and college admissions. However, the authors caution against pitfalls like regulatory capture—where regulators prioritize industry interests over public good—or regulations that favor established players, stifling competition and innovation.

Overall, AI Snake Oil provides a much-needed reality check, countering AI hype with informed, expert insights. It’s an accessible read for anyone seeking clarity on the promises and pitfalls of artificial intelligence.
Jean
230 reviews
August 28, 2024
Fantastic book! EVERYONE should read it. Clearly and thoroughly sorts out the reality from the hype, explaining why we are where we are (some extremely problematic uses already exist, hence: "snake oil") and what the future may hold (no, giant, sentient robots aren't taking over). Excellent insights and discussions of the different forms of AI (predictive, generative, content moderation), the problems and promise of each, and how we might steer in the right direction.

Read this book if you're curious about AI, afraid of AI, have to make decisions about implementing AI, have kids, use social media, make policy, vote, wonder about AI in your work, are a journalist, are interested in tech, or just enjoy high-quality expository writing. Then sign up for the authors' newsletter.

I read an advance copy and reviewed it here: https://www.practicalecommerce.com/ai...
Subashini
Author of 6 books · 174 followers
January 23, 2025
I appreciated the insight into differentiating predictive vs generative "AI" but in general I found that this book suffers from the kind of breathless undergrad essay voice where they have many things to tell you but not enough time to do so, or something, so it feels very surface-level in parts. And overall I was a teeny bit troubled by how sanguine they were about the potentials of generative AI without having anything to say about the environmental costs of the tech. I know I read this book when I was tired and drowsy at night but I don't quite think I missed out on chunks of this book because I was in a stupor, either, so! A pretty hefty and telling omission. A little bit too "breathless about tech, and yeah capitalism is bad but we can do better!!!" for me. Maybe the book I want is a materialist narrative about "AI", and machine learning tech in general, and this book is not it.
Ali
1,825 reviews · 174 followers
April 17, 2025
This is a mostly practical approach to writing about the current state of AI, with a focus on generative and predictive technologies. While pitched at those who are AI averse, the book's authors are self-described enthusiasts about the possibilities of regulated generative AI, while roundly condemning all forms of predictive AI, and noting the many abuses that can come from gen AI in the wrong hands. The book is particularly useful for the clear account of current fields of application (eg facial recognition, content moderation, sentencing or child removal predictions, creative endeavours such as image generation, coding, chatbots), and a handy chart with their assessments of both the accuracy and the harm potential of each. The authors have deep knowledge of these applications, and that shows in the detail and analysis of them.
In many cases, such as predictive uses and content moderation, they rank the tech as both inaccurate and harmful. In others, like facial recognition, it is harmful despite being relatively accurate. They are kinder towards the creative and assistive uses of generative AI - I did think they skimmed over the issues around creator rights and how the technology has been trained on the work of creatives with no recompense to date. But I also feel that their acknowledgement that Gen AI can be used by a person to speed up certain kinds of work is undeniable. This leads to a strong set of recommendations for how to regulate, shape and control AI. They chuck some interesting extras into this, including an argument for randomised ballots in university and other highly competitive selection processes, as a way to stop an inevitable arms race in which the wealthy and privileged put more and more resources into securing their spots at the top of the tree. These are sensible, but of course rely on having a government which is interested in protecting just and happiness-supporting outcomes - or having a population which can call a government to account for such things.
Anthony Moreau
45 reviews
November 27, 2024
A timely and necessary book.

Should be mandatory reading for anyone in a position of power in the private or public sector: it's clear we shouldn't allow any entity to put "AI" systems into decision-making positions without a clear understanding of how (and if!) they work.

Overall a great discussion of the distinct systems labelled as AI and a great antidote to the hype cycle we have been in for the last two years. Healthily skeptical, but with a touch of optimism as well.

The final two chapters alone were worth the price of entry!
Gorab
857 reviews · 159 followers
April 18, 2025
3.75

Highlights: Title content with case studies. Eliminating bias.

Why was it picked?
Reco by @Manish over a casual AI related conversation. Thanks bhai and do keep the recos coming.

What's it about?
A beginner-friendly reality check on AI, considering "AI" is a big umbrella term. No prior technical knowledge needed for this book.
Types of AI versus automation. Where it excels and falters. Augmentation vs Replacement. Why there can't be a yardstick to ascertain authoritativeness and authenticity of predictive AI with repeated scientific tests. Skepticism on its accuracy, ethics, regulation etc.

What I loved:
Thought process on debunking the myths.
Intro and overall structure.
Case studies, esp where AI has failed.

What I didn't like:
Much of the content on Generative AI and its risks was pretty common, with no takeaways. But it was needed here for completeness.

Overall:
Quintessential read in current times, when almost everything is "AI powered". Equips you to know where to look under the hood, and what to look for, when you meet such claims.
Ensures that you don't buy AI Snake Oil (the scam, not the book!)
Trina
1,355 reviews · 3 followers
September 10, 2024
Having read several books on AI in the last few months, this wasn't groundbreaking for me. I do think some of their approaches were different (certainly less doom and gloom than some) and I thought the final part where they imagined the world in two different ways depending on how we deal with AI was interesting. I am still waiting for a real explanation of why LLMs were able to train on data that can now be used in perpetuity without compensation to the original creators.
Timothy Grubbs
1,590 reviews · 7 followers
May 27, 2025
The perils and potential of AI…and words of caution due to the numerous potential for fraud…

AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor is a decent look at the modern use of AI and its history…while also calling out the abusers and shysters who only want to use AI to profit at the expense of others…

This book covers a wide range of AI issues, spanning generative AI (which “creates” material) and predictive AI (which can’t actually predict the future).

Both forms of AI have different concerns for the present day, though some of these overlap.

The writers don’t just limit themselves to recent years; they also dive into early research dating back to the mid 20th century…a natural outgrowth of early computer technology.

Throughout the chapters (broken up by subject), the writers also freely use examples of AI in pop culture, or compare certain fictional ideas to what people claim AI is capable of. It helps the reader understand how something that is AI might pop up in the real world even if our brains do not necessarily think of it as AI (and there are even comments about what “counts” as AI).

Naturally much coverage is given to con artists and other “entrepreneurs” that market AI to solve all of society’s ills…often knowing full well they are full of crap…

Worth trying out if you have an interest in potential abuse of AI as well as how it’s been mishandled in the past…
Anika (Encyclopedia BritAnika)
1,626 reviews · 24 followers
November 13, 2025
This book does a fantastic job of explaining different types of AI, what they can and can't do, and what people pretend they can do in order to grift. Predictive AI can't predict anything, and is harming people because insurance companies are using it anyway. Some AI is bad. But not all AI. There's some we already use that is helpful, but the new stuff is troubling. And I'm really glad I listened to this to help me understand better. Very easy to understand for a non-science/tech person like myself. Strong recommend.
Donald Schopflocher
1,489 reviews · 36 followers
August 9, 2025
This book is less about what AI is, and more about how AI has been and should be evaluated. At present, ‘hype’ about AI, most often exaggerated or false, is rewarded, while serious analysis of its true capabilities is not. This is a very important message, and virtually the entire work is oriented towards delivering it.

The authors distinguish between predictive ai, generative ai, and content moderation ai. The ‘Predictive ai’ umbrella the authors use is very broad and I suspect they would think it covers almost every computer system that outputs a prediction. At the same time, an enormous amount of work has been done by statisticians and psychologists (especially those involved with testing) about prediction and about how to establish the validity of predictions. This is not well known by AI researchers, AI marketers, or indeed the authors themselves. It is therefore no surprise that AI hype, bordering on fraud, is used to sell predictive systems. While the authors do not discuss the specifics of how to evaluate predictive ai, as I wish they had, they do call for more support of scientific research to evaluate AI systems, and especially claims about their capabilities.

Long ago I was instrumental in creating a predictive system that is still in operation. Because I was a scientist knowledgeable about Test Theory, the system was periodically evaluated and re-evaluated against real world data, and continues to work quite well. It was based on statistical equations but since it was computerized, I would not be surprised if someone took this or a similar such system and claimed, inaccurately, that it was ai.

The authors note the intrinsic difficulties in performing content moderation with AI, noting especially that what should and should not be allowed on social media networks, for example, is dependent in the final instance on policies made and modified by human beings.

They are much more optimistic about generative ai, while not embracing it outright, and they present valuable strategies to employ when using generative ai, like chatGPT, to test and verify its accuracy.

Along the way, the authors discuss the dangers posed by ai, such as the Terminator scenario where ai tries to eliminate humankind, and whether or how to prevent it. They also discuss limits that current ai has that make these scenarios unlikely. One of these is that researchers do not understand in detail how ai works.

I also had a minor role in this line of research when, at my suggestion, a research group tried to understand how a simple neural network was actually solving a task by examining the ‘mechanisms under the hood’. We knew at the time that this would not be a popular line of research, but the authors’ call for renewed efforts to understand the precise ‘how’ of ai mechanisms was welcome.
Jens Hieber
571 reviews · 8 followers
July 13, 2025
This was pretty much what I was expecting; not anti-AI but aware of the issues. True to the subtitle, the authors' goal is to talk about the limitations of AI, which requires them to know and present an accurate understanding of what the various forms of AI are and how they work and what they're capable of. I found this informative, though I expect those who are more familiar than I with the field would find this rudimentary. This functions as a good primer for how to spot AI hype, the issues with AI (and how in many cases it's not AI that's the issue, rather it's amplifying an already existing issue in society), and some suggestions for AI research (which seems very flawed as it is) in particular.

I appreciated the lack of panic the authors brought, along with their expertise, clear writing, and copious examples (some of which I'd come across before). I felt that their suggestions in the final chapter were a bit vague and one of the visions they presented at the end felt a bit utopic.
Ari Damoulakis
469 reviews · 30 followers
October 6, 2024
I am really not exaggerating: for me this is a very important book, and I hope all you my GR good friends will read it so you will know to be careful, and will know that many dangerous things could be done by humans who are not careful with AI.
Listen, I love AI.
As a totally blind person it has already done many amazing things for me and wonderful changes in my life, but even I definitely also know that it has problems when, for example, it tells me an object is something which it isn’t.
I will rely even more once I achieve my plan to be able to soon buy Envision Smart Glasses, which I am so super excited about.
But this book will also show you the terrible consequences AI could have for many humans, especially if other people use it wrongly, or maybe even deliberately skew models to take advantage of or defraud other people, or if AI is unintentionally misused because biases are accidentally built in or by mistake many factors aren’t taken into account.
Or if humans start relying on flawed AI and do not apply their own judgments to many situations.
And as for predictive AI? Well, AI makes mistakes now. We as humans are sometimes irrational, and AI could create wrong futures even if it could predict, haha.
Better that we humans live our own lives, and let us hope we just don’t become cogs in decisions made by large companies who have too much faith in the future their AI might try to predict is best for us.
And yes, I am still mad at facebook’s AI for refusing to let me comment on my friends’ posts with what we all know are stereotypical South African geographic jokes. You know, you can’t even use ‘I’ll kill you,’ in a sentence on comments to friends you’ve had over 20 years without the AI refusing to post it because it thinks I am issuing death threats or hate speech.
Michael
371 reviews · 13 followers
November 17, 2024
Sadly this book was really bad. It’s just lots of classic “here’s how tech is bad”. It’s somewhat well summed up in the Ted Chiang quote towards the end that all fears about AI are actually fears about capitalism. These are not interesting or insightful. A book written in 2024 that uses studies from GPT-2 and doesn’t discuss hyperscaling or agents or LLama just isn’t contributing anything new to the conversation. There’s perhaps some utility in separating out generative and predictive AI (and for some reason adding social media moderation as a third thing?) but it totally fails to explain the parts that make this new era new. It’s not just ML 2.0. It’s that we have fundamentally new capabilities because of scaling and advances in the algorithms. It’s that we have better chips. It’s that we can eliminate many coordination and interoperability challenges by having computers interact more like humans. It’s that we can unlock the potential of smart but untrained people because most innovation is just pattern matching applied in new ways.

Sad because I was hoping for much more.
Hungry Rye
456 reviews · 207 followers
September 18, 2025
Rated 3.5 stars

Very informative. Explains all of the aspects of Gen AI, predictive AI, and Augmented AI. I like how the authors debunked a lot of the myths surrounding AI while also discussing the negatives of AI as well as the positives.

Very easily digestible and, I think, a great starting point for many folks wanting to learn more about AI as software. Complements “Feeding the Machine” really nicely.
Ben De Bono
521 reviews · 89 followers
October 16, 2025
Probably the most balanced and reasonable take on AI I’ve encountered (including my own). I especially love their point on how we talk about AI as though it’s a singular thing when it’s actually a great many things.

My one criticism is that the chapter on AGI didn’t quite address the potential issues as I understand them. They take on the “paper clip maximizer” problem and convincingly argue it couldn’t happen. The problem is no one is claiming it could. It’s a thought experiment, not a prediction. I want them to be right about AGI but they didn’t quite convince me to hand over my black pills.
Becca
84 reviews
September 5, 2024
This was a great read! Helped expand some of my ideas and understanding of AI, as well as temper some of the things I hear floating around.
Leib Mitchell
536 reviews · 11 followers
March 10, 2026
Book Review
AI Snake Oil
5/5 stars
"Much-needed cold water poured on AI hype."
(1037 words; 3m46s reading time)
---

**Stats:**

* 489 point citations / 290 pages; ≈1.7 citations per page. Well-sourced.
* 8 chapters (including introduction); ≈36 pages per chapter, a bit long for a single lunch break.

**Verdict:** Recommended. Overall, this is a brilliant book, particularly the final chapter (*"Where Do We Go From Here?"*).

**Short attention-span summary:**

1. AI is definitionally overloaded in the public's mind, oversold as a solution to many problems, and may be attempting predictions on things that are inherently unpredictable.
2. It's unlikely to be what everybody thinks it is, and it will take a lot of time and trial-and-error to determine how it can be best used.
3. Movies like *"Terminator"* are not good guides to the future. (Though academics who write books like this are probably only a little more in touch with reality than the characters in *Terminator*.)
4. We will have to wait and see what this technology becomes; commerce and industry will play a huge role in that outcome.

---

Reading this book was meant to be the opening salvo in my attempt to learn about and find sources of employment that are AI-proof—especially necessary after observing the upheaval in many job markets previously thought to be unassailable. (I have several sons I need to advise properly.)

If you are looking for extended discussions on the effects of AI on job markets and employment, you will not find them here. There are *some* at the very end of the book (p. 276).

Oddly enough, there is a whole chapter on "why AI can't fix social media," as if that helps anyone pay the mortgage.

There is a lot of otherwise interesting philosophical, semi-practical, and semi-technical information here.

Of course, the nitty-gritty details of AI are out of reach for all but a few people, so the authors make a point of speaking in plain words rather than jargon, hoping not to lose too much meaning along the way.

This book was *somewhat* easy to read. It introduced terms I didn’t know before, such as the difference between Generative AI and Predictive AI. Specifically, the authors argue that Predictive AI is inherently limited for epistemic reasons.

On the topic of definitions: the authors emphasize that the definition of AI is severely overloaded. It is not just one technology solving one problem, but a variety of technologies that differ widely in their mechanistic details.

For a book that is supposed to tell us quantitatively how well something works, there is not a single graph, table, or chart.

For example (pp. 40–41), the authors discuss how "the criminal justice system disproportionately burdens the poor and leads to cycles of poverty and racial inequality," and how "the number of people jailed has nearly doubled in the last four decades even though crime has decreased by 50%."

Let’s remember, this is in the context of how well AI can predict recidivism.

And yet, there is not a SINGLE number that addresses whether AI actually predicts recidivism better or worse than a parole board.

Moreover, what does the fact that many Black people end up in jail have to do with how well AI addresses law enforcement issues?

And so on.

There is also the well-known concept of "garbage in, garbage out." While everyone seems familiar with this, in the context of AI, it bears repeating.

There was some discussion about the way AI was built (I was surprised to learn that it took 80 years), but much of it was over my head.

The authors discuss misaligned incentives: for instance, if cancer research is sponsored by the tobacco industry, it may take decades to recognize that smoking causes cancer. (They note that three-quarters of AI PhDs go into industry.)

They also address poor reproducibility: if AI is evaluated on the same data it was trained on, that doesn’t tell us how well it works on novel datasets. The authors call this "leakage."
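
A minimal sketch of a subtler variant (not from the book; it assumes Python with scikit-learn and uses synthetic data): if any preprocessing step, such as feature selection, is fit on the full dataset before cross-validation, even pure noise can appear predictive.

# Subtler leakage (synthetic data, hypothetical setup): selecting
# features using ALL rows, including the ones later used for testing,
# leaks label information into the model.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # 5,000 pure-noise features
y = rng.integers(0, 2, size=100)   # random labels: nothing is learnable

# Leaky: feature selection sees the whole dataset before cross-validation.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y)
print("leaky CV accuracy: ", leaky.mean())   # typically well above 0.5

# Honest: selection happens inside each fold, on training rows only.
pipe = make_pipeline(SelectKBest(f_classif, k=20),
                     LogisticRegression(max_iter=1000))
print("honest CV accuracy:", cross_val_score(pipe, X, y).mean())  # ~0.5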

**Vocabulary:**

* Data annotation
* Predictive AI
* Generative AI
* Content moderation AI

**Good quotes / factoids in the book:**

* (p. 47) "In response, candidates have developed strategies to work around opaque hiring AI. They stuff their resumes with keywords from the job application and add the names of top universities in white text (which a human reader can’t see, but a computer can recognize)."

* (p. 154) "As noted AI researcher Gary Marcus has argued, the problem of edge cases has proven fiendishly difficult in the quest for self-driving. There’s a long history of both researchers and car company CEOs being fooled by early tech demos and predicting that widespread self-driving is just around the corner."

* (p. 164) "Will any of the thousands of innovations currently being produced lead to the next step on the ladder of generality? We don’t know. Nor do we know how many more steps on the ladder there are."

* (p. 209) "There have been cases of police officers abusing content ID to try to evade accountability. When citizens filmed them… these cops started playing copyrighted music on speakers, anticipating that YouTube would block the video from being uploaded."

* (p. 232) "The Gartner hype cycle for new technologies is: 1. Technology trigger; 2. Peak of inflated expectations; 3. Trough of disillusionment; 4. Slope of enlightenment; 5. Plateau of productivity." (Graphical—picture worth a thousand words.)

* (p. 235) "Between 2021 and 2023, USD 50 billion was lost to crypto scams."

* (p. 242) "Social psychology has a reproducibility rate of about 36%, despite peer review. In a 2018 study, zero out of 400 papers satisfied the criteria for reproducibility."

* (p. 277) "Automation has lowered the cost of goods or services, leading to more demand for those goods. This is what happened with the introduction of ATMs in banks. The machines reduced the cost of running banks and ultimately led to an increase in the number of banks—and therefore bank tellers overall."

* (p. 281) "Ultimately, like much of the discussion in this book, labor exploitation and weak protections for workers did not begin with AI, and won’t end with it. AI is merely the latest flashpoint in a long history of automation."
Yaaresse
2,161 reviews · 16 followers
February 13, 2025
12-2024 Need to park this one for a bit. Library wants it back and there is a waiting list. Pretty interesting so far, though.

Update: 2-2025
What is AI, how does the hype compare to the reality (so far), and what could possibly go wrong when we all go hog-wild over something that most of us really don't understand enough to know what it's doing? The authors attempt to cover the history (older than you think) of AI, how it is being used (and misused), and the strengths and weaknesses inherent in it. For the most part, they do a good job. They are decidedly not optimistic. After all, whenever we have a chance to use something for good or weaponize it in ways big and small, we tend to pick the latter.

Even with the bias (to which they admit), it's a good read for anyone either using or trying to avoid AI. Also a good read if you ever need to apply for a job, need a loan, need medical care, or are just trying to figure out why Netflix keeps suggesting movies you have zero interest in watching.
Freya Abbas
Author of 8 books · 16 followers
September 21, 2025
Very good introduction to predictive AI, generative AI, AI for social media and the differences between them. It explained how unreliable predictive AI can be, yet it is still used in the criminal justice system and medicine. Some people have a fear of AI taking over the world, but perhaps the issue of generative AI falling into the wrong hands is a bigger concern right now. I did not realize the amount of human labour that goes into content moderation and AI training (like labelling millions of images). Workers in these areas are exposed to disturbing content regularly and are severely overworked and underpaid. I thought this was a good introduction to learning about AI. Having no background in technology, there were some parts I struggled to understand, but overall it was very enjoyable to read.
K.
4,853 reviews · 1,141 followers
April 20, 2025
Content warnings: child abuse, racism, mentions of sexual assault, mentions of eating disorders

A truly fascinating (and horrifying) insight into AI and what it can and can't do.

A quote that's stuck with me: "Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training. That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions. Even though the bot often gives excellent answers, sometimes it fails badly. And it's always convincing, so it's hard to tell the difference."
Rachel Pollock
Author of 11 books · 84 followers
April 14, 2025
I’m doing a deep dive on reading books about AI, and this one is fantastic for explaining the different kinds of technology referred to as AI, how it works and was created, what it can do and what it can’t. Highly recommend.
Cav
916 reviews · 217 followers
April 15, 2026
"This book is a guide to identifying AI snake oil and AI hype. In it, we’ll give you essential vocabulary to tease apart generative AI, predictive AI, and other types of AI..."

I found AI Snake Oil to be a middle-of-the-road look into the topic. AI and talk of AI are everywhere these days. From academia to Wall St, there is hardly any facet of life that is untouched by this emerging technology.

Co-authors Arvind Narayanan and Sayash Kapoor are Princeton computer scientists known for their critical analysis of artificial intelligence. They have the credentials to call out the industry's more questionable claims, even if the delivery here is a bit dry.


The book opens with a decent intro. The authors talk about the beginning of ChatGPT. Unfortunately, the intro was the high water mark for me. The rest of the writing felt a bit long-winded. There is a fair amount of fluff and assorted minutiae that slows the pace down and takes away from the bigger picture.

The authors drop the quote above near the start of the book, and it continues:
"...We’ll share commonsense ways of assessing whether or not a purported advance is plausible. This will make you read news about AI much more skeptically and with an eye toward details that often get buried. A deeper understanding of AI will both satisfy your scientific curiosity and translate into practical ideas on how to use—and when not to use—AI in your life and career. And we will make the argument that predictive AI not only does not work today but will likely never work, because of the inherent difficulties in predicting human behavior. Finally, we hope that this book will get you thinking about your own responsibilities—and opportunities for change—with respect to the harmful implications of these tools."

In this short blurb, they define "AI Snake Oil":
"AI snake oil is AI that does not and cannot work as advertised. Since AI refers to a vast array of technologies and applications, most people cannot yet fluently distinguish which types of AI are actually capable of functioning as promised and which types are simply snake oil. This is a major societal problem: we need to be able to separate the wheat from the chaff if we are to make full use of what AI has to offer while protecting ourselves from its possible harms, harms which in many cases are already occurring."

The book also talks about:
• Generative AI vs. Predictive AI: Teasing apart what actually functions versus what is sold as a pipe dream. The authors argue that while Large Language Models have clear utility, using AI to predict complex social outcomes (like job performance) is fundamentally flawed.
• The "Reproducibility Crisis" in AI: How many academic claims about AI "breakthroughs" are actually the result of poor methodology rather than genuine intelligence.
• Artificial Intelligence vs. Statistical Modeling: Cutting through the branding to show that much of what is sold as "AI" is essentially glorified regression.
• Ladder of Generality: A framework for assessing the scope of AI applications and understanding the limitations of narrow systems.

********************

Ultimately, I am admittedly picky about how engaging a book is, and I found the delivery here to be a bit sluggish. While the science-backed distinctions are useful, the writing is a little too verbose to make it a standout.
3 stars.
Julian Dunn
389 reviews · 23 followers
August 20, 2025
A book with the title AI Snake Oil has some pretty big shoes to fill, owing largely to the use of the phrase "snake oil", which implies that most or all of what gets marketed as AI has absolutely no value. And there is definitely a lot of hype and false/exaggerated claims being thrown around in 2025, with almost no checks on veracity -- not from the media, largely, and certainly not from regulators (the US generally is in a free-for-all, unregulated environment now with the re-election of Trump). The two authors are academics and at times sound like they are writing a textbook (they do suggest the book can be used as one in academic settings, so that's not entirely off-base), which means they make a few odd choices about pacing and the taxonomy of the world of AI. Specifically, they create three categories of AI: predictive AI (machine learning / regression models), generative AI (language models), and social media content moderation, and devote a chapter to each of these areas, trying to debunk much of the hype and unsupported claims in each area. I agree that the first two are distinct categories, but to me, social media content moderation is just a use case for one or both of the foregoing categories' approaches, so I don't know why it gets a special call-out at the same level of taxonomy as the others.

The authors' point of view is that predictive use cases are far more dangerous today because black-box models that aren't regulated nor peer-reviewed are often employed in decision systems that can truly have life-changing effects on humans' lives. This lack of transparency has allowed for some of the most egregious outcomes that have been covered elsewhere, such as Epic Systems' sepsis prediction tool (performs only slightly better than chance), ShotSpotter (largely ineffective), HireVue (also largely ineffective), and predicting the chance of civil war (completely ineffective). It's clear that Narayanan and Kapoor's work has been focused in this area for a long time, because they make a compelling case that when lives/futures are literally at stake, corporations who employ this kind of technology have a responsibility to take much more care in ensuring these algorithms actually work and aren't impacted by biases such as confirmation bias, teaching to the test (not separating the training data set from the evaluation data set), etc. Predictive systems are mathematically the easiest to understand and could show promise in many use cases as long as biases could be accounted for, rigor is applied to the development of these systems, and their forecasts are evaluated by an outside party before products are brought to market and claims made. (Again, good luck pulling this off in a deregulatory environment, but that's a matter of will, not skill.)
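
One concrete form that rigor could take (a sketch, not from the book; it assumes Python with scikit-learn and uses synthetic data): compare any predictive tool against a trivial baseline on held-out data, since with imbalanced outcomes raw accuracy flatters everyone.

# Baseline sanity check (synthetic data, hypothetical setup): with an
# imbalanced outcome, "always predict the majority class" already scores
# high on accuracy, so a predictive tool should be judged by how much it
# improves on that baseline, not by its raw accuracy.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.9], flip_y=0.3,
                           random_state=0)   # rare, noisy outcome
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("baseline accuracy:", baseline.score(X_te, y_te))
print("model accuracy:   ", model.score(X_te, y_te))
# AUC asks whether the model ranks positives above negatives better than
# chance (0.5); a value near 0.5 means the tool is close to a coin flip.
print("model AUC:        ", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

The point is not the specific numbers but the habit: a predictive claim means little without a held-out comparison against the dumbest available alternative.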

The book gets a little weaker once the authors start addressing generative AI. Because I suspect it's not their area of expertise, even they get hooked a little by the snake oil. In a few places, they uncritically parrot some of the promises of the language model boosters (e.g. "Developers can prevent inappropriate outputs by ensuring that when training chatbots, the bots are given examples of the kinds of things they are and aren’t allowed to say" -- which is categorically not true, they can't "prevent" that). Although they do recognize that LMs are essentially fancy synthetic artifact creation machines, Narayanan and Kapoor deliberately and explicitly ignore the side-effects upon human creativity that the explosion in their use might create. These authors are ultimately computer scientists, and one of the areas of trouble that tech folks get into when evaluating whether technology is good or bad is when they focus only on the innovation itself and not its peripheral effects. They fall into this trap, seeing "philosophizing" about LMs as out-of-scope, whereas I think if we are to have a robust debate about the future of these tools, those considerations must be included. However, they do a nice job of demystifying "deep neural nets" and "deep learning" for the average layperson, showing that these techniques are not magic, despite all of the anthropomorphism in the use of language like "learning" that might suggest somehow LMs are sentient beings.

Turning to social media content moderation: this section of the book is kind of meh. The authors make a good point that trying to apply computer algorithms to unbounded domains, particularly when inputs arise from social systems (surprise, humans are unpredictable!), is a fool's errand, because whether something is "good" or "bad" depends so much on an extremely broad context, one that is changing all of the time. Even humans themselves would have trouble keeping up, particularly if those humans aren't familiar at all with the context (which is why Facebook, without a local presence in Burma, allowed the massacre of Rohingya Muslims to be carried out). While this is a valuable topic to discuss, I think that if they were to bring it up, they really should have more strongly opened the can of worms about whether the incentives of the social media products as currently constructed are the right ones to reduce toxic content. Because I guarantee that platforms whose revenue streams depend on eyeballs and clicks will naturally prioritize the delivery of outrage-inducing, divisive content, and we should definitely be focusing on that as the root of the problem rather than debating whether or not "moderation" works in the broad. (It doesn't.)

Overall, the book is not horrible; it's less of a screed than some of the other books on AI that I've read, but even the "choice of two futures" presented in the conclusion didn't sit quite right with me. Neither of the futures they presented were compelling (one of which was obviously more nauseating than the other) but the authors didn't really go far enough to illustrate a world where we don't simply have to accept the existence of such products just because big tech companies are creating them. Two futures, one of which is "absolutely no guardrails" versus "a few minimal guardrails" is not really a choice -- just shades of the same choice. Again, I think this is a function of the authors being computer scientists with minimal background in philosophy and the humanities, so they start from a position of optimism about technology (in a microcosm) rather than skepticism about its system-level effects.
Bo Wang
56 reviews · 1 follower
April 3, 2026
I just finished AI Snake Oil, and it’s one of the rare books on AI that left me genuinely thinking long after I closed it.

What stayed with me most was the ending. The authors contrast two girls growing up in very different AI-shaped worlds. One reflects today’s reality in the U.S.—where AI is often injected everywhere by default, with little room for questioning its limits. The other, Maya, lives in a future where adults take responsibility for governing and regulating AI, and children are encouraged to develop critical thinking, creativity, and artistic exploration on their own terms. In her world, AI is a tool she turns to when it supports her needs—not something that replaces judgment, effort, or learning.

That contrast felt uncomfortably close to home.

The book is also refreshingly blunt about AI’s promises. It critically examines claims of accuracy, prediction, and contextual understanding, and shows—again and again—how these promises don’t hold up in reality. In many cases, AI predictive models perform no better than a 50/50 coin flip. Reading this stripped away a lot of the mystique and reminded me how dangerous it is to over-trust systems that are fundamentally probabilistic but marketed as authoritative.

For me, AI Snake Oil reinforced a responsibility, not a fear. If AI is going to be part of our lives—and our children’s lives—we have to be intentional about how it’s used, where it’s limited, and who it ultimately serves. I want AI to support human agency, social understanding, and creativity—not quietly erode them.

This book doesn’t argue against AI. It argues for humility, governance, and human judgment.
Chris Boutté
Author of 8 books · 289 followers
April 22, 2026
There are a lot of books coming out about AI, and most of them lean way too far in one of the extreme directions. They either overhype AI like it can save the world, or they say AI is going to destroy the world. Fortunately, this isn’t one of those books, and it’s a really good, honest and balanced take on AI.

The authors are two researchers, and they do a great job discussing what AI is, what AI isn’t, and what we should and shouldn’t be concerned about. They write in a way anyone can grasp, and they talk about some of the positive aspects of AI, but then they also write about how a lot of shady people are making tons of money lying about AI's capabilities.

I think this book should be required reading for anyone who cares about the AI conversation. It’ll help you have a more realistic view of where we’re at with the technology and where we’re going. It can also help you make better decisions when it comes to using this technology, which is nowhere near where the hype says we’re currently at.
Carrie
2,746 reviews · 61 followers
November 19, 2025

This is a great book for people who don’t know a lot about AI, even if they use it in their daily lives. It covers the various types of AI (it’s not all the same!), the shortcomings, failures, environmental implications, and advantages of using it for tasks that would take the human brain much longer to complete.

I was fortunate enough to be part of a bookclub where really smart librarians and technologists discussed the ramifications in schools. Across the board, everyone agreed that we should be teaching kids about the practical and ethical costs of relying on AI. One major thing that I took away was that AI is not smart at thinking, it’s just smart at regurgitating, so if the data is faulty and the people coding the AI are biased, then the AI will just spit those flaws back out at an amplified rate.

“We should be far more concerned about what people will do with AI than with what AI will do on its own.” (171)
Panz
187 reviews
May 15, 2025
Took my time in reading through this one. This book is well researched, given that this field is rapidly evolving. The points feel a little repetitive, though it serves to drive home the authors' positions on different categories of AI. AI optimists will argue that the AI we see and use can only be the worst that it will ever be, but that is beside the point. The authors are highlighting systemic and psychological flaws in our relationship with this particular form of technology, which are orthogonal to the development of said technology. This book is a must read for the layman who wants a critical analysis beyond the media hype cycle.
306 reviews · 1 follower
February 6, 2026
Excellent book dissecting Artificial Intelligence (AI). It helped me understand this amazing new technology but also pointed out the weaknesses, vulnerabilities and fallacies revolving around it.

Specifically, generative AI, defined as AI that creates new and original content (text, images, code, music, and videos) by learning patterns from existing, large-scale data sets, is mature and very powerful. It is a huge efficiency aid for many but can be a threat to the livelihood of some individuals. But it is generally a net positive to society. However, predictive AI, defined as AI that uses historical data, statistical algorithms, and machine learning to forecast future outcomes, behaviors, or trends, is often highly inaccurate, with uncalibrated models. In fact the advertised verification methods are often rigged and misleading. It is used extensively in policing, hiring, and medicine, and often results in significant and life-altering consequences. These tools should not be used without independent validation and verification, which is almost never performed due to corporate secrecy concerns. In general these models are harmful and should not be used to the extent that they are.

The book also presented a lengthy discussion on social media content moderation and the difficulty of policing content in general. The final key feature of this book is to dismiss the fear that AI will displace humans and take over the world - this is not possible. Overall a very informative read. Super smart and thoughtful authors. I am very glad I read it, as I have been curious about but also under-informed regarding AI.