'The blizzard of excitement, misinformation and pure hype around AI has driven many of us to want an honest guide. If, like me, you’re one of those many, you need to read this book' BRIAN ENO
Is AI going to take over the world? Have scientists created an artificial lifeform that can think on its own? Is it going to replace all our jobs? Are we about to enter an age where computers are better than humans at everything?
The answers to these questions, as the expert authors of The AI Con make clear, are 'no', 'they wish', 'LOL', and 'definitely not'. In fact, these fears are all symptoms of the hype being used by tech corporations to justify data theft, motivate surveillance capitalism, and devalue human creativity.
Packed with real-world examples, pithy arguments and expert insights, The AI Con arms you to spot AI hype in all its guises, expose the exploitation and power-grabs it aims to hide, and push back against it at work and in your daily life.
‘A book to inoculate your mind against Big Tech’s AI utopian hype’ Yanis Varoufakis
‘Hanna and Bender provide the clearest picture yet of what AI is, what it is not, and why none of us need to accept it’ Timnit Gebru
‘A powerful antidote. The authors show that these technologies will serve to deepen existing inequalities, and further 'enshittify' life and work for the vast majority of people’ Grace Blakeley
‘Truly eye-opening. An indispensable “field manual” for those who want to fight for a more humane economy and a better society’ Ha-Joon Chang
Having read around a bit more, I wanted to recommend these books as alternatives that worked better for me:
1. Why We Fear AI, by Ingeborg Glimmer and Hagen Blix: for a good analysis of AI and power, one that goes into the way ideology is produced in and through AI, and the way AI is about power in politics and in the workplace: https://www.goodreads.com/book/show/2...
2. More Everything Forever, by Adam Becker: for the ideology of Silicon Valley, the way the tech vision of the future is about accumulating more power, and the bizarre world of little cults around big tech: https://www.goodreads.com/book/show/2...
I really wanted to like this book. I think the authors mean well, and they touch on a lot of important issues. I think this book could and should have been an important contribution.
Unfortunately, the authors let their desire to deploy "ridicule as praxis" run wild, to the point that poking fun at things was obviously more important than any kind of consistency. The book constantly contradicts itself. I honest to God could not tell whether the authors ultimately believe that AI will replace jobs, or not. Two consecutive sentences will happily tell the reader that "Goldman Sachs is saying the quiet part aloud here: we found a way to save a boatload of money by replacing you." and that "This promise of automated replacement is not new, but rather a persistent myth". So, which one is it, the quiet part out loud or a persistent myth? It seems to me that it should be one or the other; I find it hard to fathom how it could be both. This kind of writing made me continually feel like the authors are not going to let any kind of consistency get in the way of a good jab. Ultimately, I couldn't even tell if the authors believe that productivity increases can exist in principle, and/or have happened in the past (I suspect that they do not believe that productivity increases are real, but I found it impossible to actually tell from their writing).
I find this particularly depressing because the authors do at times land on real insights. The book is full of little anecdotes and stories that really could shed light on many issues. They include a mostly excellent discussion of the relation between "intelligence" and eugenics (but then they tell us that the eugenicists are in it for the money, and also that actually they are the ones who are spending the money, and also that actually the money is fake; if there was a consistent set of claims in there, it passed me by and left me utterly confused). They show us that sometimes something sold as a productivity increase is really just a hidden labor force in what they call the "Majority World". Excellent; I want to learn more, but the authors won't stop for a discussion of how we know what is productivity and what is merely the digital version of offshoring, so we never know which is which. Surely not all of AI is just this? But we never learn how the specific instances relate to the general theme, and so it can feel like the authors themselves can't quite get out of that hype bubble where all AI is all the same, even though they tell us quite clearly in other places that AI is not at all a unified thing.
The book is also constantly and loudly explaining things that the authors themselves clearly know next to nothing about. They seem to think that "auto barons" were a thing "at the beginning of the Industrial Revolution" (wrong century?). They tell us that power looms were water frames (two entirely different technologies, for weaving and spinning respectively). They confidently suggest that the destruction of a single $200,000 Jaguar "may have been one of the most expensive acts of rage against the machine in recent memory, possibly since the early days of Luddite frame-breaking" (a rather severe underestimation of the history of sabotage and machine breaking in labor conflicts). The text is so chock-full of such confident falsehoods and misunderstandings that it did at times make me wonder whether a language model might have produced it, and this was an elaborate trolling gotcha.
Then, there is the strange advice. You may be tempted to use ChatGPT, but you should not! Why? Well, because what if they raise the price someday, and also you should never have used Google either, because Cory Doctorow said something about enshittification, and also did you know that it once replicated a piece of code for "fast inverse square root" from the Wikipedia entry of that name, comments (which included cursing!) and all. Again, the desire to ridicule is just so obviously overbearing. I don't find this useful, nor relatable. It certainly doesn't help me to think more systematically about AI.
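(For anyone who doesn't know the reference: the code in question is the famous fast inverse square root routine from Quake III, documented in the Wikipedia article of that name. Here is a rough Python rendering of the trick, my own port with the profane comment omitted, just to show how distinctive the snippet is that ChatGPT reproduced:

    import struct

    def fast_inverse_sqrt(number: float) -> float:
        # Reinterpret the float's bits as a 32-bit integer (the C original's pointer cast).
        i = struct.unpack('<I', struct.pack('<f', number))[0]
        i = 0x5F3759DF - (i >> 1)  # the "magic constant" initial guess
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        # One Newton-Raphson iteration refines the guess.
        return y * (1.5 - 0.5 * number * y * y)

    print(fast_inverse_sqrt(4.0))  # ~0.4992, versus the exact 0.5

The point of the anecdote, presumably, is memorization of training data; it just gets buried under the jokes.)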
At best, I think, the book offers "ridicule as comfort", if you can feel like you're in on the joke as a reader. As "praxis", I think it is a failure, at least if you were hoping to get a better sense of what AI is, how the hype works, and how it's actually going to impact the world.
I really have no idea how much of AI is hype and how much is reality. With the bubble looking like it is about to burst, it seems there is more than enough hype involved for us all to be worried. This book is helpful since it explains not only which claims about AI are likely to be hype – that it is about to take over the world, that it might end the world, that you won’t have a job soon, that it is conscious and intelligent and creative, and that those opposed to it are on the wrong side of history – but also what AI actually does. That is, it mostly uses large language models to predict the next most likely word to insert in a sentence, and scrapes the internet for content it can essentially plagiarise. None of which would necessarily be in my top five go-to definitions of ‘intelligence’.
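To make that "predict the next most likely word" claim concrete, here is a toy sketch of the idea (my own illustration, not the authors': a simple bigram counter; real large language models use vastly bigger statistical machinery, but the principle of picking the likeliest continuation is the same):

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word: str) -> str:
        # Pick the statistically most frequent continuation; no meaning involved.
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> 'cat', simply because 'cat' follows 'the' most often

No understanding anywhere in there, just counting; scale it up enormously and you get fluent-sounding text.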
The harm is clear for all to see; the benefits always seem to be a couple of years away. I’m not the right person to judge, since I have yet to use ChatGPT or any of the other systems I’m constantly told I shouldn’t be able to live without.
The authors make a lovely point that those who boost the benefits of AI and those who see it as a potential source of our doom aren’t actually on opposite sides in this debate – both are trading in what the authors believe is hyperbole. The whole thing, they argue, is more or less a scam for the very wealthy to become even more wealthy by deploying a mode of theft as a business model.
One of the more telling examples was the proposed use of AI to write scripts for films. Sounds reasonable enough – except everyone knows AI can’t do that in a way that would be watchable. So, then, writers would be called in to edit or clean up the script – but since they were just ‘editing’ the script (while actually rewriting the entire thing) they would be paid as editors, rather than writers. The word ‘scam’ comes to mind.
A few years ago, I read a book called The Scout Mindset. It was the first time I’d come face-to-face with the rationalist and effective altruism movements. I thought the book was mostly crap, and made the mistake of saying so in my review. A large part of the point of the book was to encourage readers to be open to other people’s opinions. Christ, did it fail on that count. I was abused by a string of followers who then went on to delete their comments – how terribly rational of them. I honestly wasn’t prepared for the vitriol coming from these rational guardians of enlightenment traditions. This book places such people in the same camp as eugenicists. The thought, ‘I should have known’, was the first thing that struck me.
As they say about effective altruism – it is just another form of white supremacy, where rich people decide what is best for black and brown people without bothering to ask those living lives of disadvantage, or to give them any say in their own liberation. No surprises there.
The authors recommend ridicule as a solution to AI overreach. You know, when you see an AI image with 7 fingers, or read yet another shitty student essay that creates nonexistent references, pointing and laughing is probably the best solution. But as they remind us, and as is the way of the world, AI will be used to educate the poor, to provide medical treatment to the poor, to provide psychiatric care to the poor, to provide legal advice to the poor, while the rich will continue to use real humans. It’s all too much like that executive at Campbell’s soup who said, “Who buys our shit? I don’t buy Campbell’s products barely any more. It’s not healthy now that I know what the fuck’s in it … bioengineered meat. I don’t wanna eat a piece of chicken that came from a 3D printer.” The more things change, the more they stay the same.
This is a very bad book, and since the authors are constantly reminding you how politically correct and sensitive they are, I wonder how often people tell them what quickly occurred to me: in many ways, it's a lot like racist and sexist propaganda. The interesting difference, though, is that it isn't directed towards Blacks and women. It's directed towards AIs.
Everyone who's spent any time looking seriously at propaganda knows the fundamental rule: it's more effective if you mix in a reasonable amount of truth with all the lies. This idea has been around for a long time, and Linebarger's classic Psychological Warfare traces it back to the Bolsheviks in the early 20th century; maybe further. For example (and I apologise in advance to my Black friends, but it's necessary to make this point strongly), if you're spreading racist propaganda then it's effective to start by saying that Black people on average have lower IQs than white people. You can look it up in the literature, there are plenty of studies showing that it's true. Of course, you won't add that the same studies often also say why it's true: the fact that Black IQs in the US rose sharply during the 20th century shows that the explanation almost certainly isn't that Black people are biologically inferior, it's that they tend to be poorer and receive worse schooling. But you hope that the people listening to you won't check the literature, and indeed most of them won't.
So, let's get back to Bender and Hanna. Here, the true part is that AI, particularly Generative AI (GenAI), is a new technology which shows great promise. Historically, whenever a promising new technology turns up, it rapidly attracts a great many unscrupulous grifters, liars and frauds who see the chance of making a lot of money out of it. AI is exceptionally promising, so the grifters, liars and frauds have an exceptionally large opportunity. Some of them, and I'm sure everyone will already be mentally filling in names, are making truly colossal amounts of money by overselling, overhyping, overpromising and generally lying about the new technology. Bender and Hanna list many such cases, and here I more often than not agree with them. If you think all the techbros and venture capitalists are just telling the plain, unvarnished truth then you are a remarkably gullible person.
However, they go much further, and here I emphatically do not agree with them: because many people are presenting AI products fraudulently in order to make huge profits, they conclude that AI is itself a gigantic fraud. This is their version of the rhetorical move which transforms Black people's lower IQs, in certain specific tests, into the generic inferiority of Black people. They do all the things racists and sexists like to do: they misrepresent, cherry-pick, pretend that anecdotes are conclusive evidence and ignore any inconvenient facts that might refute their arguments, no matter how well documented or relevant these facts might be. Most startlingly, on almost every page they apply verbal tropes to AIs which, if similar tropes were applied to humans, would immediately be called hate speech. Calling a Large Language Model a "stochastic parrot" or a "text extrusion machine" isn't a scientific argument. It's just hate speech directed towards an unfamiliar target.
Perhaps they would reply that the LLM isn't conscious and so it doesn't matter, but this is not a good objection. At the very least, LLMs behave in many ways as though they were conscious; if you get used to directing hate speech towards them, you may later find yourself doing the same thing with real people. This line of reasoning is presented in much more detail in David Gunkel's Robot Rights. If you're unconvinced, let me give you a little thought experiment adapted from Gunkel's book. Suppose that someone were to start marketing hyperrealistic sex dolls, which could talk, scream, cry, bleed etc in a convincing way, and which could be "raped" and "killed" for a suitable fee. (For all I know, they may exist already; I'm sure there's a market). How would you feel about this? Would you say it's fine, because the dolls aren't conscious? Or would you rather sign a petition asking for them not to be made available in your town? Well: what Bender and Hanna are doing is related, it just isn't so in-your-face horrible.
To take a non-speculative and real-world example where I am thoroughly acquainted with the facts, I was curious to see how they would explain away the steady progress AIs made in chess between 1948 (Turing's toy chess program, which couldn't even play on a full 8x8 board) and 2017 (AlphaZero, which taught itself chess from scratch in two days to become the strongest player of all time). This story is documented in thousands of papers; chess engines much better than the world champion are now readily available online; many smart people are on record as saying that even a Grandmaster-level chess program would be conclusive proof of human-level intelligence. When I finally got to page 153 (it appears very late), I was for a moment astonished by the strategy they used: simple flat-out denialism. They just disagreed with the claim that chess and intelligence have anything to do with each other and changed the subject. Truly a debating move worthy of Donald Trump.
If it were only chess, of course I wouldn't care, but it is not the exception but the rule. I am struggling to think of any example they give where they reluctantly say: alright, some other kinds of AI are terrible, but this one is in fact pretty good. They are as forgiving towards AIs as a Klansman is towards Blacks. Perhaps most importantly, to people who know the GenAI literature, their characterisation of modern GenAI is all wrong. They either don't know about the more recent developments, which have transformed the subject, or they choose to ignore them: no mention of Chain-of-Thought reasoning, no mention of RAG. They describe GenAI as it was perhaps two years ago, if that; RLHF training, crucial even in GPT-3.5, is presented in an incorrect and misleading way, as though it just filtered out offensive content. I can't tell if they are ignorant or actively lying, but whatever it is, it's unforgivable. They constantly talk about the importance of peer review; did anyone peer review this book? If so, they did a very poor job.
I am seriously concerned. Over the last few years, it's become regrettably common to see people on the right politicising all kinds of things that obviously shouldn't be politicised and spreading ridiculous lies and conspiracy theories. It still astonishes me that so many people could believe in QAnon, but they did. If this is intended as retaliation, because some Silicon Valley movers and shakers are too friendly with the Trump administration, it's the wrong strategy. The claim that AI is all a con is as incoherent as anything you'll hear from Candace Owens. Of course there isn't a gigantic, world-wide conspiracy to make it look as though a technology which doesn't work achieves incredible results. Hinton and Hassabis recently received Nobel Prizes for their work on AI. Does that mean the Nobel Prize committee is also part of the conspiracy?
In conclusion, don't imagine I'm saying that you shouldn't read The AI Con. I strongly believe in free speech; even more, I believe in Don Corleone's dictum that you should keep your friends close and your enemies closer. But I do advise you to think carefully about whether the authors are your friends or your enemies. When I wonder who would like their book, the first person who comes to mind is Elon Musk: yay, proof positive that his critics are just as deluded as he always said they were.
In July 2023, Congressman Ro Khanna used a premium-grade ChatGPT subscription, provided by the US House of Representatives, to generate the draft of a bill (H.R. 4793) named the Streamlining Effective Access and Retrieval of Content Help Act. In the "findings" section, the bill explains that "the use of the latest available technology can significantly enhance website search capabilities, enabling faster and more accurate retrieval of information." However, as Emily Bender and Alex Hanna argue in their book, this wishful statement is logically bamboozling: it's false at the present moment and utterly unfalsifiable for the future. Right now, the newest technologies are not faster or more accurate. In fact, ChatGPT and Gemini are a poor replacement for current search engines, frequently misrepresenting websites, merging reliable and unreliable information from different internet sources, and even generating bogus quotations and citations. As a general statement about the future (i.e. that the most recent technology will always result in more reliable information), it is completely unverifiable and presupposes the reader's trust in the predestined direction of digital research. As a rule, the latest technology is often the least tested and least trustworthy.
H.R. 4793 is an example that perfectly captures the book's critique of our current AI landscape: politicians, venture capitalists, educators, business owners, and the general public have all fallen victim to hype. Here we have a congressman using AI to generate a law whose explicit textual justification is premised on blind faith in the timeless reliability of new technologies. We, the public, have been so inculcated in the creed of progress and innovation that we assume all new technological developments are miraculous wonders that will benefit society. However, inspect the language closely and you will see the false promises and empty rhetoric: of course it is, on some level, true that "the latest available technology can enhance website searches, enabling more accurate searches", but note how amorphous the phrase "latest technology" is, how weaselly the words "can" and "enabling" are, and how subjective "more accurate" is. Even AI-generated hype is vague and hedging. Throughout the book, Bender and Hanna draw attention to the many times AI hype has been exuberantly proclaimed and then fallen short of reality. AI is, above all else, a marketing term describing numerous different technologies that will have, and have had, deleterious consequences for the labor market, the environment, education, health care, and policing. It improves little in society, but it does enrich its investors.
Overall, I think this is a strong book that delivers a powerfully worded jeremiad against all the false hype about AI. It's not just the investors who over-sell the capabilities of AI and minimize its shortcomings. Even the "doomers" are complicit: by warning about a potential robot apocalypse or technological singularity, these critics of AI also exaggerate and over-dramatize what AI is and what it can do. They drum up interest and inspire awe in the technology, and with such sermonizing hyperbole they actually distract from its very real technical failings and biases. Call it automation, call it a "text-extruding machine", call it a "stochastic parrot", and you have a better insight into the pitfalls of the technology. The problem with Large Language Models is less an issue of robots developing consciousness and manipulating humanity than a more insidious crisis of synthetic text simulating humans and saturating our digital ecosystem with unreliable, dubious and biased content—making it harder to know what on the web is true and who is real.
Reading this book and the way it describes LLMs as "text-extruding machines", I was reminded of Borges' story "Pierre Menard, Author of the Quixote". In his story, Borges imagines a 20th-century French author transcribing the words of Don Quixote, who creates a verbatim transcript; paradoxically, this transcript is a wholly new text. While the words are identical, it's a different century with a different literary context, and so Menard's version is actually more subtle than Cervantes', more historical and less fantastical, more ironic, even surreal, and intertextually richer (with connections to Paul Valéry and Friedrich Nietzsche). So it is with ChatGPT. A human sentence and an automated sentence are not the same thing. Spoken by a human, with a particular intention, a human sentence will have a particular meaning and contextual resonance; automated by a machine applying certain statistical weights and perceptual patterns, the same exact sentence is nothing more than a probabilistic representation of pre-existing texts matching a particular input. There is no meaning. While Bender and Hanna have many policy concerns about AI, I found their book most compelling when it suggested a particular ethic for reading AI, cautioning readers against confusing AI text with meaningful prose: "Mistaking our own ability to make sense of text output by computer for thinking, understanding, or feeling on the part of the computer is dangerous. At the level of an individual interaction, if we don't keep attention on who is doing the meaning making (us, human communicators, only), we risk being misled by system output, trusting unreliable information, and possibly spreading it." AI is dangerous because it exploits, and erodes, our natural empathy and trust in the written word.
This is not a technical book and it doesn't really describe in much detail the under-the-hood mechanics of large language models. It feels in part like a catalogue of all the recent failings of AI and a take-down of many press releases trying to amp up excitement for AI. I wasn't sure if it needed to be a book (and there is something a little silly and antiquated about a book responding to a technology that is changing in real time). But it offers useful ways for thinking and talking critically about different AI technologies.
A book that promises to debunk the hype surrounding Artificial Intelligence, but ends up falling into another extreme: total denial. Despite the importance of an ethical and informed critique of the dynamics of big tech, Bender and Hanna opt for a moralistic denunciation strategy, ignoring contexts of use, technical advances and the complexity of the cultural phenomenon underway. One is left with the feeling of reading a sermon, not an analysis.
The authors severely underestimate the power, influence, and coming pervasiveness of AI. They highly underestimate the use of AI in work and school. They cite a survey of students' use of AI with less than 20 percent of students using it (!). That should be a red flag about their questionable sources and lack of awareness of the pervasiveness of LLM/AI use in education. They severely underestimate the ability of AI to replace low-end knowledge workers (paralegals, coders, medical technicians). The authors deem all AI-generated art theft, but they fail to realize that all artists steal from other artists. The authors recount the impact of tech and social media on legacy media, a current echo of the Luddite movement. A lot of concern is given to reinforcing bias and the misuse of AI in legal, hiring and professional areas. There is a spectacular lack of imagination about how the increased power that AI gives to already entrenched institutions will be used in new ways that are hard to fathom. The authors also come up short on prescriptions for the harms of AI: just don't use it (seriously?) and make fun of it. The authors are very short-sighted to call AI a hype bubble when it threatens to change everything in the near future. I am stupider for having read this book. Zero stars. (Better reads: Scary Smart by Mo Gawdat, and Empire of AI for a thorough examination of the dangers to, and casualties among, content moderators.)
I have SUCH mixed feelings about this book. I think the authors are a hundred percent right that AI isn't going to take over the world. It's a machine, programmed with (admittedly complicated) code. And it has had negative effects on society. Namely, enabling students to cheat and not learn the valuable critical skills you learn when you write an essay yourself, and destroying the environment. Do you know how much water those data centers use to cool the computers that power AI? It's a lot!
But the authors' point is often undermined by their extreme leftist positions. They refuse to use the term "Global South" and instead use "Majority World," which I hate. Black, when referring to black people, is ALWAYS spelt with a capital B. In fact, they are so wedded to this that if they quote from a source in which black isn't spelled with a capital B, they actually use a sic after it! Obscene!
This is a worthwhile read to learn about AI hype, but you may be left gagging on the extreme leftism.
The AI Con has an agenda that it gives away in the title: AI is a con. It’s a bubble that’s going to burst. And the most interesting thing about it is what the remains will turn out to be.
It’s a very snarkastic book. There were times when I wasn’t sure what the authors were trying to tell me, but I sure as hell knew they felt it was very smart. The interesting part was that they fall neither into the camp of Boomers (AI is going to solve all our problems and create paradise!) nor Doomers (AI is going to turn us all into paperclips!) – their starting point is that AI is a con in the first place, and is inevitably going to be exposed for what it is. (I sort of agree with this.)
What made me somewhat sad was the authors’ advice as to how we can achieve this quicker: ridicule the poor effects, amplify the mistakes, and do not use it. Right. The Luddites’ example is repeated throughout the book. We should be AI Luddites. But the Luddites lost.
My ratings: 5* = this book changed my life; 4* = very good; 3* = good; 2* = I should have DNFed; 1* = actively hostile towards the reader.
This book is a takedown of AI hype. Rather than accepting tech industry claims about artificial intelligence, Bender and Hanna expose "AI" as primarily a marketing term masking corporate power grabs, data exploitation, and the devaluation of human work. I have been reading extensively about AI and its current state, attempting to separate rather outlandish claims and concerns from what is really happening, so when I saw this title, I felt it would add to my understanding of the bigger picture.
The authors argue that "AI" is a marketing term being used to rebrand existing machine learning technologies. They debunk common claims about AI taking over the world, creating artificial life, or becoming better than humans at everything. They employ wit and sarcasm, which helps make their points and makes the book appealing even to those without a keen interest in technical topics.
The authors argue that large language models are systems that can generate coherent language but do not (and cannot) understand the meaning behind the language they extrude. Bender and Hanna contend that the AI technologies are primarily intended to help the rich get richer by “justifying data theft, motivating surveillance capitalism, and devaluing human creativity.” They believe it is the latest trend in Big Tech's drive for profit, with little concern for its impact.
The book examines the current state of AI in industries such as healthcare, education, media, and law-enforcement. It provides examples of products already in place that are unreliable, ineffective, and even dangerous. It also looks at the environmental impact of these technologies, which are causing tech companies to miss their carbon reduction goals.
The authors provide mechanisms to resist the imposition of AI and suggest, where possible, that refusal to use it is one of the main ways to push back. The book encourages readers to think critically about the broader social implications. This book is essential for anyone seeking to understand the current AI landscape. I have read plenty of books promoting the benefits of AI. This book provides the other side of the coin.
Points out the hype and boosterism rampant around the tech sector generally and AI specifically, but falls short in elucidating any solutions that have a chance of being enacted. Although the section on the importance of libraries and librarians was appreciated.
As Weapons of Math Destruction was to the 2010s and basic machine learning, so this book is to the 2020s and "AI".
So: I loved this book. It set my head straight on a lot of things I intuitively knew, as someone with enough expertise to understand how the "AI" sausages are being made, but was struggling to articulate. And hype is strong! We are social apes! Like - yes - natural language processing (NLP) is amazing and gratifying and weird. (Text is such gnarly data!) But NLP has been eerily amazing for a long while now (I have felt the icy showers for at least 10 years! tf-idf?! whaaat!?), and these stochastic parrots are indeed eerily amazing as well. But! Do they deserve all this hype? Where by "hype", the authors specifically mean: enormous venture capital investments, enormous carbon emissions and re-jiggering of our energy infrastructure, and - perhaps worst of all? - figures of authority waving their arms around a vaguely-defined but definitely civilization-altering "Artificial General Intelligence (AGI)" that is always just beyond the horizon?
The authors argue - aggressively, spicily, wonderfully - that NO! This is all bananas! And I am 100% here for it.
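(About that tf-idf aside above: for the curious, the trick I was marveling at is genuinely simple. A minimal sketch, with a toy corpus I made up:

    import math

    docs = [
        "the model predicts the next word".split(),
        "the parrot repeats the next word".split(),
        "the weather today is sunny".split(),
    ]

    def tf_idf(term: str, doc: list) -> float:
        tf = doc.count(term) / len(doc)        # how common the term is in this document
        df = sum(term in d for d in docs)      # how many documents contain it
        return tf * math.log(len(docs) / df)   # terms rare across the corpus score higher

    print(tf_idf("parrot", docs[1]))  # distinctive term, high score (~0.18)
    print(tf_idf("the", docs[1]))     # appears everywhere, so idf = log(3/3) = 0

That such a dumb little formula powered useful text search for decades is exactly why the NLP goosebumps predate LLMs.)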
First, they give one of the best plain English primers on neural networks, n-grams, and embeddings. It's one chapter, and it's only semi-technical (intended for non-technical audiences), but it covers, imo, the main ideas in a very clear, comprehensive way. So bravi, there! They also offer refreshing clarity on defining "AI" - a term that is, currently, being abused in everyday conversation, but that normally captures distinct fields in machine learning/comp sci: large language models (LLMs), OCR, computer vision, blah blah, I am tired of linking.
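To give a flavor of the "embeddings" part of that primer: the idea is to represent words as vectors so that similar words point in similar directions. A toy illustration (these three-dimensional vectors are invented by me; real models learn hundreds of dimensions from data):

    import math

    # Made-up miniature word vectors; real embeddings are learned, not hand-written.
    emb = {
        "king":  [0.9, 0.8, 0.1],
        "queen": [0.9, 0.7, 0.2],
        "mat":   [0.1, 0.2, 0.9],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: math.sqrt(sum(x * x for x in v))
        return dot / (norm(a) * norm(b))

    print(cosine(emb["king"], emb["queen"]))  # near 1.0: the vectors point the same way
    print(cosine(emb["king"], emb["mat"]))    # ~0.3: unrelated words, different directions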
Rather than prognosticating about the future (and, indeed, notice how much AI hype is about the very near future... it's just over the horizon, people!), they instead trace the history of AI (leveling some shots at Minsky and Hinton, wowza), the history of Luddites, and the CURRENT practices of how LLMs are trained, how they are used RIGHT NOW, and how they are talked about. There is a lot about labor (outsourced content moderation is horrible indeed; your boss being sold AI to "boost productivity" == aka, layoffs) and training data bias (duh) and basically plugging the holes in our social safety net with word-prediction machines. All of this was stuff I knew, but they structured it in a clear and helpful way.
The one thing I did NOT know, but blew my mind, was the theory of mind stuff and linguistics (Emily Bender is a linguistics prof at U of Washington). Basically, language includes a lot of "guessing what the other person is thinking/trying to say". That's why you can't teach your baby Italian via TV (believe me, I've tried). It's the *interaction* that matters. The social learning. Because LLMs are so good at sounding human, our brains naturally start to "fill in the blanks" about what they're "thinking/trying to say". This is also why people DO NOT ascribe cognition to "AI artists" when they look at those (frankly very tacky) DALL-E, Midjourney, genAI art outputs. No one thinks an "AI" was trying to "express its consciousness" - we see it as an obviously computer-generated, automated mish-mash of training inputs. But LANGUAGE. Our ape brains get real weird there. Hence all the flailing around "omg AGIIIII".
Anyway, I loved this so much. Should be required reading for everyone in tech.
I spend a lot of time thinking about AI. I have a consulting company that helps businesses adopt AI – to use the tools to automate mundane parts of white collar work. I believe in the good AI does and can do, but I’m also aware of, and wary of, the bad that it’s capable of.
I approached this book with an open mind. I want to be aware of the criticisms and concerns.
The tone of this book was so condescending and patronizing that I couldn’t take it seriously.
I don’t even know if it’s fair for me to say I finished it. I listened to as much as I could to get the gist. Unfortunately can’t recommend.
Probably the book of the moment. Not overly long, and with a positional stance that is well-defended throughout. The chapter division and chapter titling were a bit suboptimal imo, but all the domains in which AI has been forced upon me are in there.
Who would have thought that everything about these "big" automation machines/programs is just bullshit, and that it's all just to make rich people richer? Nothing surprising. After reading this book it makes me even madder to see people use those automation machines or, to quote the authors, "these piles of racist algebra" for fun. Even worse, using those things as a means to get stuff done. Like, can't you write your own e-mail like you are paid to do? What are you gonna do with "more time" at work? Work more? Work for less?
Why is everyone and their mother so stupid and misguided in thinking ANYTHING about these automations called "artificial intelligence" is intelligent or even remotely useful? Did we learn nothing?
I mean, probably not. On that note: Fahrenheit 451 didn't age well :D Anyway: I am avoiding anything labeled "AI" like the plague, and I will always think people using it earnestly are below me. And I will keep making fun of everyone.
I was looking forward to this book. I agree with the premise that there is a lot of hype around a technology that people call AI, misnamed because it really has no intelligence.
However, that is not what this book does. It appears that authors like these put the word 'AI' on the cover and vaguely talk about some of the issues, but they just want to use the book as a soap box to preach a woke sermon against white people (thereby being racist themselves), racism, colonialism, climate change, and other such garbage. It isn't that impressive or even relevant, and what is worse, it is often filled with outright lies to support this irrelevance.
The author should stick to the topic and produce another book for those interested, called "I hate white people, the alt-right boogeyman that eats children, and all my other irrelevant woke ideas."
this book is a relief at a time when ChatGPT has become a verb and when we are told it would be so much more effective to just use ChatGPT rather than use our precious minds to think/write/research/create.
my favorite two quotes: “Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your grief bots have no soul, and your AI copilots are not gods” and “When executives are threatening to replace your job with AI tools, they are implicitly threatening to replace you with stolen data and the labor of overworked, traumatized workers making a tiny fraction of your salary”
am telling all my friends about this book! if someone decides to make tshirts with quotes from the book, am wearing it everywhere.
A necessary and reassuring work in the context of the ‘AI wars’. The authors take a critical approach to the hype cycle around ‘AI’ and explore, with journalistic and academic integrity, both the ‘actuality’ of AI platforms (including LLMs) and the vested interests of capital and power behind them. Part investigative exposé of tech-bro snake oil and part self-help manual for the AI sceptic, this is an important and brave book that stridently puts forward a side of the debate that is too often written off or ridiculed in the frantic rush to justify the adoption of so-called AI solutions.
Between the errors (mainly NOT about AI itself) and the gotchas in many of the 1 & 2 star reviews, I could go anywhere from 2.5-3.75 stars on fractional rating points.
I eventually, for the second time, decided to do a ratingless review of a book. Maybe it will show up more readily than a 3-star rated review.
Yes, it's a screed. But, some degree of screed is needed on this.
At the same time, the authors ultimately come off as jilted lovers of some sort. And, my favorite Belarusian-American technology sociologist, Yevgeny Morozov, has already nailed this. AI is the most over the top techdudebro version of "solutionism."
On the other hand, many of the 1/2 star reviewers come off as "solutionists," especially big capitalist ones, whose ox is being gored. Others are wingnuts of various types. If I were to give stars, I couldn't go below 3 because of them.
With all of that in mind, and because a commenter on two-star reviewer Nelson Zagao’s Substack mentioned "AI Snake Oil" as a better alternative to this, and because it has the same problems with many of its 1/2 star reviewers?
This is a review of this book and both books' low-star reviewers at the same time, but especially this book's.
Let's dig in.
As noted, I can’t rate it below 3 stars, in part because of some of the types of people it triggers in 1- and 2-star reviews.
One Zionist is triggered over two mentions of Israel, even though they are using AI from Microslob and Google in the genocide in Gaza.
At least three “woke White wingnuts” are triggered. One is a Religious Right wingnut, and the other two are haters of unions and workers.
Partially more thoughtful 2-star reviewers are right-neoliberals, at best. (When you 5-star Matt Yglesias AND Ezra Klein, that’s you.)
And some of these general types of reviews show up on related books like “AI Snake Oil.”
Partially more thoughtful to more thoughtful 2-star reviewers claim this book itself is doomerism. I don’t see it that way. In addition, the authors use that term in a more narrow way — and make clear what that way is.
Beyond that, a lot of what they call AI “Hype” is actually hucksterism, pure and simple. (Related? I have long called the owner of Facebook “Hucksterman” even before his own deep dive into AI.) Much of this crap isn’t even AI, in the sense it’s not actually intelligence. And, like peaceful nuclear fusion power, we’ve been hearing that strong AI is just around the corner for 50 or more years. It still is.
The real issue is that none of these people take totally seriously the destructiveness of capitalism. On Nelson Zagao’s Substack, I mentioned, riffing on Schumpeter, the “non-creative destruction” of much of this. (It’s also interesting that a media professor doesn’t really engage with copyright issues, says a newspaper editor.)
That’s where reality vs woke White wingnuts comes in. AI threatens to greatly expand their foothold; AI in the West in general, and in the US in particular, does. In other words, it’s an accelerant, or a potential one, of the worst of human behavior.
Back to Zagao. Sometimes, doomerism, in the general sense, not how the authors use it, IS realism. Like climate change now being a climate crisis, even as neoliberals pretend it’s not. Speaking of, there’s AI’s massive energy and water use, and the authors use the phrase “climate crisis.”
As for complaints the book doesn’t distinguish well enough between generative and predictive AI? It may not be perfect, but it clearly talks about both types of AI, even if it’s not saying “HERE’S GENERATIVE AI” (or PREDICTIVE) every time it focuses on just one of the two.
Doomers (outside the real doom of the climate crisis) are presented as a partial flip side of Boomers, in the idea that “oh, AI is so powerful it could overwhelm us, but if we fix this ‘alignment’ issue, AI’s massive potential will take off.”
Zagao isn’t even fully a semi-hater, but for the semi-haters and haters? Most know better.
Per Upton Sinclair: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
As for the errors I mentioned above? They’re there.
The Luddite movement? Both they and some of their critics in reviews get it wrong. Machines in the textile industry were being attacked already in the late 1600s. Tis true that cropping machines, looms and knitting frames, specifically the stocking machine, were the specific target at the Luddite peak, but, there you go. Also, as Wiki notes, “Luddite” is often used more generically today.
Also, the original Luddites weren't blanket haters of the machines.
Writing errors, sadly, with a linguistics professor as one of the two authors, pop up. It’s not “alright,” it’s “all right.” That’s just one of several. It’s the one that most stuck out.
Now, in the more serious errors?
Outside of AI issues, no, this newspaper editor knows Craigslist didn’t destroy newspapers in general or even newspaper classifieds. Media analysis sites agree; Poynter interviewed Newmark recently, in fact. Monster was much bigger on that. Bigger yet was the slothful response of the newspaper industry. The authors appear to know relatively little about the modern history of print media, other than knowing its implosion and the ownership by vulture capitalism firms of many larger chains.
Missing from the discussion of the “alignment” issue? Mark Twain’s knock on the “moral sense,” via Satan III (a la Napoleon III) mocking the boys in “The Mysterious Stranger” for believing the “moral sense” made humans superior to animals. In other words, humans aren’t such hot shit, and even if the AI hype were real, a massive extension of human power wouldn’t be hot shit either. Their take on the techdudebro worry about “alignment,” in short, isn’t insightful enough on human nature.
Agree with their take on effective altruism, but they don’t dive deeper into the problems that utilitarianism has, namely that it cannot generate a view from “nowhen.” In other words, utilitarianism in general, let alone so-called “effective altruism,” can’t really see 6 months into the future, let alone 6 years, and certainly not 6 decades.
Now, back to what I said up top.
REALLY missing is noting how AI is Yevgeny Morozov’s “solutionism” writ large. In fact, he’s nowhere referenced in the book. Cory Doctorow gets one brief mention early, then a bit more in the end.
Climate crisis? Absolutely real, but the authors don't have a full grasp on that, either.
The Paris Accords are purely voluntary, not legally binding. They’re the Jell-O of voluntarism, made that way by Dear Leader Obama and Comrade Xi Jinping. Thinking they are binding? That might actually exacerbate the climate crisis.
Again, the authors strike me as a pair of jilted lovers, who think they’ve discovered this grand new secret. The neoliberal-like ignorance about Paris was the capper, but not even citing — and perhaps not even knowing — Morozov was the keystone.
A short, lightly snarky, but comprehensive intro to how the big LLMs work, why they're an ethical nightmare on every possible level, and the scam at the core of their existence. Also dips into some of the grimy ideologies of the tech world.
This is the passive aggressive Christmas present of 2025, folks. It covers a lot of the same ground of some of the more in-depth anti-AI non-fic around atm, but as it's written for readers who don't come from a tech background, it includes some really helpful background and is just really easy to process.
Appreciated the broad look at where AI is now from a more skeptical point of view. As we’re inundated with news and applications of AI and how it’s an inevitability that it’ll replace jobs and infiltrate nearly every aspect of our lives, it was nice to hear how the loudest positive and negative forces in the AI debate are likely both wrong. Similar to a “Last Week Tonight With John Oliver” episode, this book was very informative on certain things, possibly purposefully less informative on others, and ultimately had the essence of mocking something (AI hype) for the author’s and audience’s comfort.
A much-needed antidote to AI hype, and a disturbing breakdown of the intentions and ethics (or lack thereof) of the tech bros behind it.
Key takeaways:
* LLMs (e.g. ChatGPT) depend on an army of underpaid and exploited workers in African and Asian countries who are subjected to disturbing content and practices (like trying to get the LLM to tell them to kill themselves so they can prevent this happening to users).
* The data powering AI text and image generators was STOLEN from artists, writers, journalists, and normal internet users without their consent or fair compensation, to be MONETISED by the corporations (Meta, OpenAI, Microsoft et al) and then privatised. The entire business plan is based on theft.
* AI is ridiculously damaging to the environment.
* AI is riddled with errors, is often half-baked, and in no way should replace humans, especially in care roles or roles that require nuance (healthcare and law).
* Most of what is marketed as AI is not even AI.
My only criticism of the book is some of the ridiculous "academic views" that get shoehorned in throughout, but I can look past these as the pros outweigh the cons and the majority of the book is valuable.
I think this is a great discussion of the current state of big tech and AI. If nothing else, I recommend the first chapter for a great description and discussion of what "AI" actually is without requiring much tech knowledge.
"Those who resist the imposition of technology are disparaged as technophobes, behind the times, or incompetent, sometimes even 'Luddites.' But in fact, 'Luddites' is exactly the right term, even as those using it as an insult don't realize it. In the tradition of the original Luddites, writers, actors, hotline workers, visual artists, and crowdworkers alike show us that automation is not a suitable replacement for their labor. We don't have to accept a reorganization of the workplace that puts automation at the center, with devalued human workers propping it up." (66)
"People are far from perfect, subject to bias and exhaustion, frustration, and limited hours. However, shunting consequential tasks to black-box machines trained on always-biased historical data is not a viable solution for any kind of just and accountable outcome." (76)
"The law and language have a special relationship: the law happens in language. Beyond that, it happens in language used in a particular way--lawmakers write policies into existence. On one level, the policies exist only as those words, while on another, they can have enormous impact on individuals, communities, and the entire planet through the ways that they shape behavior. And so the words must be chosen with expertise in order to have the intended effect, not only after the policy is established, but also in the long term, when the legal and social context in which they are being interpreted will certainly have changed. The drafting process therefore should be done with care and not farmed off to a system that can swiftly create something that sounds good." (78-9)
"The last thing we need is shiny tech that promised to obviate the need for the hard work of building inclusive scientific communities and putting those perspectives in conversation." (119)
"Fortunately, there are ways to resist. At an individual level, we can overtly value authenticity. Refuse the apparent convenience of chatbot answers and insist on going to original sources for answers to our own queries." (173)
"At no point, however, does calling any of this technology 'AI' help. This term obscures how systems work (and who is using them to do that) while valorizing the position and ideas of power holders. Speaking instead about automation and data collection helps to make clear who us actually being benefited by this technology, and how. If we are to create a future that is populated with technologies we want, we 'can't only critique the world as it is,' as science and technology scholar Ruha Benjamin has written; we also 'have to build the world as it should be to make justice irresistible.' Part of that vision means technology ought to be created with full participation of the people it impacts. Following disability justice advocates, we say 'nothing about us, without us.'"(190)
"We should stop giving AI boosters the benefit of the doubt. They are indexing their fortunes--and mortgaging ours--on a future that doesn't exist and that won't suit us at all." (191)
"We don't have to accept technologies that will do us harm, no matter how well they are tested or honed. Some technologies--like facial or emotional recognition--should be objected to on the grounds of what they are intended to do, and how they dehumanize and rank individuals." (192)
Great summary of a lot of the ethical and legal considerations of “artificial intelligence.” One of the main things that stand out to me is that what we’re being told is “AI” isn’t actually artificial intelligence. Large language models and algorithms have been around a while, but the public doesn’t necessarily know that. “AI” is being sold to us as a tool of automation and as a potentially sentient form of technology. In reality, it requires quite a lot of human labor that is mostly hidden and exploited. “AI” is a marketing slogan, not an accurate description.
One of the biggest red flags to me is that these “AI tools” are so loudly hyped by capitalists. Always follow the money. Too many people have way too much trust in our corporate overlords. These tools are being intentionally rolled out to reduce labor costs, which is a roundabout way of saying firing as much of the workforce as possible and leaving many people without jobs. So, at what point will a universal income be enacted if these robots start doing all our jobs for us?? That’s never mentioned by the “AI” hype people because of course it isn’t. The rich want theirs and screw the rest of us. What is being called “AI” is not capable of producing art, practicing medicine, being a therapist, teaching, etc. Humans are still very much a necessity to doing quality work of all kinds.
These “models” were trained not by their creators but on the stolen work of artists, researchers, authors, journalists, etc. Using “AI” to generate images and text for you is lazy theft, at best. There are numerous legal considerations that are being completely ignored, and the authors make the case that no new legislation is needed to start reining in much of the worst of the bad behavior being done by “AI” developers. We don’t necessarily need new laws; we need our representatives to enforce existing copyright and labor/consumer protections.
The environmental damage these things are causing is the real cause for alarm, not that they are going to become sentient any day now and destroy us all. They aren’t. But what really is happening is that they’re worsening climate change by exponentially increasing energy and water use, and they’re perpetuating racism, sexism, and xenophobia. They’re messing up workplaces, causing psychological harm, making a mess of research databases, etc. These “tools” are marketed as solutions to all our problems, but in fact they are making them worse. That’s truly the sickening part. It seems to me it’s going to take legislative action to enforce responsible use of large language models and powerful algorithms. The billionaires (and wannabes) clearly aren’t motivated to do it themselves.
This book was recommended to me by a friend, whose graduate school advisor was one of the authors. I think it's important to learn to cut through the hype and marketing about new technologies that may not actually be doing anything helpful, and in some cases are actually harmful.
I was first introduced to the "AI issue" by concerns about plagiarism from artists and writers. To learn a little more about how an art model works, I recommend this blog post: https://waifulabs.com/blog/ai-creativity
There is a lot of confusion about what exactly "AI" is, and the answer as given in this book is that it's really not one specific thing -- it's a collection of technologies and in some cases just marketing-speak applied to promote older tech like databases and spreadsheets. It can have useful applications, but also harm in the form of amplifying existing racial and other biases in datasets, polluting the information environment, plagiarizing and otherwise not adequately sourcing ideas, eliminating human jobs or making their duties more onerous, and using so much electricity that it is causing tech companies to miss their carbon emissions targets.
I liked the information about how large language models like ChatGPT work. It was explained in terms that I could understand without much of a tech background. I have seen an alarming increase in people "asking" AI about all kinds of things, as though it is a search engine or even some kind of magic oracle that knows all. I knew that was misguided or could even be dangerous (such as the example of AI-generated mushroom foraging books sending people to the hospital), but this helped me understand more of why, and also why people have a tendency to believe it (the concept of imagining a mind behind the words). When I was in high school and college, we had a lot of training about understanding and evaluating the sources of information and whether they are trustworthy. There were a lot of warnings about how you can't believe all of what you read on the internet and it is important to track down and consider the sources. That is becoming more difficult in the age of AI where tools like Google's "AI summary" of results spit out summaries including websites that may not be reliable. One thing the book didn't really get into is the danger of people intentionally trying to poison or troll those results with fake information, which can be funny, but also dangerous in cases that are not as obvious as something like "battery acid soup."
There were a few different points made through the course of the book, with lots of examples and references. I haven't listened to the podcast, but at times the prose became rambling and I had trouble following what the original argument was supposed to be. It may work better for a podcast episode than a book.
I didn't like the fact that the overall tone in some of the chapters seemed opposed to any kind of helpful productivity software or tools in general. For instance, OCR technology has been really helpful in cutting down keystroke data entry, which is monotonous and not particularly engaging. I also don't see anything wrong with the given example of using an article template to report on high school sports games, but the content should of course be reviewed by humans for accuracy (a step that some companies want to skip to save on staff). The point that eliminating entry-level jobs also eliminates the path to advancement is a real concern, though.
There were some arguments presented, such as "the project of identifying general intelligence is inherently racist and ableist to its core" (33-34), that made assumptions I didn't really agree with, at least not as presented in the book. I understand that trying to compare humans is problematic, as it's impossible to design a test without cultural biases, and one would have to come up with a way to "weight" the different strengths a person has. But this seems to be looking at the question from a certain angle, which might not apply to comparing humans as a whole to non-human entities. We set up tests for animals like octopi and birds and consider them "intelligent" due to their abilities to use tools, problem-solve, or show other traits that we consider humanlike. When I mused on this to my friend, she said that the issue here is that the big tech companies aren't really interested in any of this, just in using the pre-existing datasets and their biases, which does make sense. Another issue is that computers can be really good at doing the specific thing they were programmed to do (e.g. chess), but that is literally all they can do; the fact that they are good at chess doesn't mean one could assume they are also good at other related logic tasks that humans might be (like planning battle strategy).
Environmental issues are another big one. All these "AI" technologies use a huge amount of energy, and in many cases the result is worse than what a human could do. So... what is even the point? What are we doing all this for if the world ends up being nothing but computers and robots doing everything?
I think much more likely than the "doom" scenario of computers taking over the world and killing everyone is people becoming so reliant on these tools that they are paralyzed if they are suddenly unavailable. We've already seen shades of that with cyberattacks or the CrowdStrike outage last summer, which locked me out of my work computer for a whole day until the fix managed to apply.
Overall, it was an interesting book and gave me some things to ponder, but the rambling organization and not-well-explained assumptions/arguments about certain things prevented me from rating it higher.