Manny’s review of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want > Likes and Comments

61 likes
Comments Showing 1-25 of 25

message 1: by Jim (new)

Jim Garth from Wayne's World said it best: "We fear change."

fwiw, I'm inundated with countless ads for AI products on YouTube. I delete them all, but the onslaught of AI grifters is enormous.


message 2: by Manny (new)

Manny They are absolutely right that AI opens up countless opportunities for scam artists. But it does not follow that it is itself a scam. They really disappointed me. What people need to do is understand it better, and they went in the exact opposite direction.


message 3: by Icarium (new)

Icarium AI would've been great for the world if it had been limited to being a tool to help with medical and technological research.

But what we have instead done is dump the majority of resources into a bunch of genAI models (which are just clones of each other) that are made by training on stolen art.


message 4: by Manny (new)

Manny Icarium wrote: "AI would've been great for the world if it had been limited to being a tool to help with medical and technological research.

But what we have instead done is dump the majority of resources on a bunch of ..."


To be fair to the book, they are good at describing how AI companies steal data, and also how AI platforms are abused to create worthless derivative works that flood the market.

Yes, I am much happier about medical and tech applications. We have an AI-based tech project, and it is going really well. AIs are also surprisingly good at philosophy. If you haven't already done so, take a quick look at the paper "Do People Understand Anything? A Counterpoint to the Usual AI Critique", which we posted last week.


message 5: by Keith (new)

Keith B What are your thoughts on how often LLMs hallucinate? Do you think the inherent unreliability of LLMs for many / most use cases would be a better argument from the authors’ perspective?


message 6: by Manny (new)

Manny Keith wrote: "What are your thoughts on how often LLMs hallucinate? Do you think the inherent unreliability of LLMs for many / most use cases would be a better argument from the authors’ perspective?"

I don't think "hallucinate" is a helpful word, and "bullshit" even less so. Like all data-based software, LLMs sometimes get things wrong; it's a question of engineering to reduce error rates, and there has been huge progress over the last couple of years. RAG has been the game-changer here. A pure LLM will always have a high error rate; an LLM connected to the world through databases, the Web and other external tools can have a much lower one. It is simply incredible that they fail to mention this. Do they never read anything except their own rants?
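For readers unfamiliar with the idea, the grounding step described above can be sketched in a few lines. Everything below is illustrative: the toy corpus, the example query, and the word-overlap scoring are my own stand-ins (a real system would use dense embeddings and a vector index), but the shape is the same as in production RAG: retrieve relevant passages first, then condition the prompt on what was retrieved instead of relying on the model's parametric memory alone.

```python
import re
from collections import Counter
from math import sqrt

def tokens(text: str) -> Counter:
    """Bag-of-words token counts -- a toy stand-in for an embedding."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = tokens(query)
    return sorted(corpus, key=lambda d: cosine(q, tokens(d)), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model: ask it to answer from retrieved context only."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

# A hypothetical three-document knowledge store.
corpus = [
    "Go was long considered too hard for computers; AlphaGo beat Lee Sedol in 2016.",
    "Retrieval-augmented generation grounds model output in external documents.",
    "Pure language models answer from parametric memory and can state falsehoods confidently.",
]
print(build_prompt("Why does retrieval reduce error rates in language models?", corpus))
```

The prompt that comes out the other end is what actually gets sent to the LLM; the error-rate reduction comes from the model answering over supplied evidence rather than recall.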


message 7: by Keith (new)

Keith B Yeah I’m fairly familiar with RAG and fine-tuning. I honestly wish we had better data / evaluations from real world projects (especially from software engineers / MLEs in large companies) rather than problematic benchmarks and word of mouth. There’s some academic research that attempts to answer the question of how often current LLMs hallucinate with RAG but it’s usually very specialized and limited. Mostly dealing with medicine, law, or software. But I’d love something more holistic and in-depth.

Personally I’m more of an LLM realist. I think they have a few strong, limited applications. But nowhere near the degree of usefulness that they’ve been marketed as having. But hey, we’ll see whether the training plateau persists or not. Maybe someone will invent better architectures than transformer models.


message 8: by Manny (new)

Manny Oh, I think we're just beginning to understand how to use them. In our project, we're creating tailored pedagogical picture books for language learning; the student gives a sentence or two saying why they're interested in learning the language in question and who they are; the AI writes the texts, does linguistic analysis, adds translations and creates images, all on its own. As each new model is released, the error rates go down.


message 9: by Cecily (new)

Cecily From my relatively limited knowledge and experience, it strikes me that a key issue when discussing the problems/opportunities of AI is lumping it all together. Generative AI is perhaps the most problematic in terms of training/copyright and hallucinations (and water/electricity?), but in many other areas, specialist AI can spot patterns and errors far faster than humans.


message 10: by Samael (new)

Samael Yetzerhara This is literally the stupidest Goodreads review I think I have ever read, and I frequent r/BadReads.


message 11: by glossyboo (new)

glossyboo Why are we comparing the criticisms against AI to sexism and racism? This seems bad faith to me.


message 12: by Manny (new)

Manny Cecily wrote: "From my relatively limited knowledge and experience, it strikes me that a key issue when discussing the problems/opportunities of AI is lumping it all together. Generative AI is perhaps the most pr..."

Hi Cecily! Absolutely, I agree with you. I am in no way suggesting that everyone working in GenAI is careful and ethical about copyright. It's clear that isn't true. And I'm not suggesting either that all uses of GenAI are careful and ethical, also not true. But these are different questions from "Does GenAI actually work?"


message 13: by Manny (new)

Manny Samael wrote: "This is literally the stupidest Goodreads review I think I have ever read, and I frequent r/BadReads."

Well, I can see I've got your attention.

What aspect of the review do you think is so impressively stupid? You're not being very specific here.


message 14: by Manny (new)

Manny glossyboo wrote: "Why are we comparing the criticisms against AI to sexism and racism? This seems bad faith to me."

If it's not clear, I'm arguing that people in all these cases are replacing careful, evidence-based thought with prejudice. And the worst thing about prejudice is that people more often than not aren't even aware that they're prejudiced. A sexist may tell you it's obvious that women are not as intellectually capable as men, "everyone knows that", and just discard as irrelevant any data that undermines their claim. It seems to me that some (not all!) of the people criticising AI are using similar rhetorical moves.

Of course, I can see why the AI critics are angry. Many of the AI boosters are even more dishonest and are making huge amounts of money from their dishonesty. But sinking to your opponent's level is usually not the right way to go. You should show you're better than them.


message 15: by Ben (new)

Ben Landrum Emily Bender has been annoying since before transformers were a thing. Funny that she finds a way to harp on peer review in this; she famously hates arXiv because it isn't peer reviewed.


message 16: by Manny (last edited Sep 28, 2025 08:08PM) (new)

Manny I actually quite like some of Emily Bender's work. I think she's got a lot of good things to say about computational grammars, a subject I've also been heavily engaged with. But to me, her comments on AI seem more emotional than fact-based. There is knock-down evidence that AIs are extremely good at some challenging tasks (chess and Go are obvious cases), and as far as I can see she just ignores those tasks or defines them away.

When you've cracked Go, which for a long time was considered impossibly hard, why shouldn't you be able to crack many other problems? It doesn't make sense to me.


message 17: by Warwick (new)

Warwick I'm not sure that discriminating against other humans on the basis of their sex or race is really the same as discriminating against machine learning models. Why exactly Emily Bender wants to discriminate against them I'm not sure, not having read the book, but I do think there could be legitimate reasons for doing so that are perfectly ethical. Discriminating against them might even be *more* ethical (thinking about for example the use of a village's worth of water and electricity to provide the data allowing LLMs to write thousands of bland LinkedIn posts).


message 18: by Manny (new)

Manny There are plenty of good ways to argue why we shouldn't be pouring huge resources into running LLMs. I agree with that. But most of the time she's saying that the whole thing is a con, it doesn't work, and she makes it all emotional and personal. To me it doesn't read like objective criticism, it reads like hate speech, even though it's a very odd kind of hate speech.

As I say at the end, I don't think she's doing herself any favours here. I doubt she'll convince anyone who is well acquainted with the facts.


message 19: by Manny (new)

Manny Samael wrote: "This is literally the stupidest Goodreads review I think I have ever read, and I frequent r/BadReads."

Oh wait, I'm very slow... I just realised you're the person who one-starred all of my books yesterday. Also, a quick search, as I pointed out elsewhere, shows that your name is a pseudonym referring to some obscure Judaic theology.

Uh, I hate to ask such a rude question, but is it conceivable that you're one of the authors, or a friend of one of the authors? It's rare for people to react this violently without there being some kind of personal connection. There are a lot of stupid reviews on Goodreads; I don't think there's anything particularly stupid about mine. Obviously some people will disagree with it, since this is a controversial topic, but I'm pretty sure that nearly all of the people who disagree will just move on.


message 20: by Nick (new)

Nick Hey Manny, really enjoy reading your reviews, especially those concerned with AI. You mention _Robot Rules_ here and I think I remember reading something about _Superintelligence_. Since the field is changing so rapidly (as you point out) are there other books you would recommend as good introductions to the field for the layperson that aren’t potentially (or already) out-of-date?


message 21: by Manny (last edited Oct 02, 2025 01:32AM) (new)

Manny Thank you Nick! It was actually Robot Rights, a book which is sufficiently high-level in its approach that it's arguably become MORE relevant since it appeared! Maybe Superintelligence too. Kind of scary that we're just building the superintelligences and assuming we'll be able to control them, when ten years ago Bostrom was already giving pretty sensible reasons to wonder whether that makes any sense.

Looking through books I've read over the last year which might be worth recommending, I'm distressed to see my eye stopping at Rhodes's classic The Making of the Atomic Bomb. Once again, we have a revolutionary, transformative and extremely dangerous technology that we're developing far too quickly because hey, what choice do we have? The other guys could get there first!

What was it Winston Churchill said was the only thing we learn from history?


message 22: by Nick (new)

Nick Thanks, Manny! I will check all three of them out. Yes, the parallels to the atomic bomb seem alarmingly close. Appreciate these recs and taking the time!


message 23: by Manny (new)

Manny I wonder when we'll see the first seriously dangerous application of GenAI. My guess is pretty soon. Maybe it's already happened but so far only a few people know.


message 24: by Nate (new)

Nate I'm going to ask a series of questions. Please keep an open mind.

"the explanation almost certainly isn't that Black people are biologically inferior, it's that they tend to be poorer and receive worse schooling. "

Why is it the case that they tend to be poorer and receive worse schooling? Do you believe that IQ tests are a reliable way of assessing intelligence? Who came up with IQ tests?

"...they apply verbal tropes to AIs which, if similar tropes were applied to humans, would immediately be called hate speech."

If I say "pieces of shit smell terrible", would you consider that racist? If I said the same thing about a particular ethnic group instead of a piece of shit, it would be racist.

Do you believe that the only thing that makes humans intelligent is our capacity to produce thoughts? Is perception not part of intelligence?

"Suppose that someone were to start marketing hyperrealistic sex dolls, which could talk, scream, cry, bleed etc in a convincing way, and which could be "raped" and "killed" for a suitable fee."

When you wrote this, did you imagine these hyperrealistic sex dolls as women or as men? Are large language models gendered in the same way?

Would you have an ethical problem with a violent video game in which you inhabit the role of a soldier fighting in WW2? Would you feel differently if the video game in question involved committing violence against Afghan children?


message 25: by Manny (new)

Manny Nate wrote: "I'm going to ask a series of questions. Please keep an open mind..."

Interesting questions! I will try to answer as fairly as I can :)

"the explanation almost certainly isn't that Black people are biologically inferior, it's that they tend to be poorer and receive worse schooling. "

Why is it the case that they tend to be poorer and receive worse schooling? Do you believe that IQ tests are a reliable way of assessing intelligence? Who came up with IQ tests?


Here, I am just citing a paper I read on the subject quite a long time ago, which struck me as sensible and unbiased. It was in the collection The Rising Curve: Long-Term Gains in IQ and Related Measures.

"...they apply verbal tropes to AIs which, if similar tropes were applied to humans, would immediately be called hate speech."

If I say "pieces of shit smell terrible", would you consider that racist? If I said the same thing about a particular ethnic group instead of a piece of shit, it would be racist.


If pieces of shit could use language, I would consider it racist. But of course they don't.

Do you believe that the only thing that makes humans intelligent is our capacity to produce thoughts? Is perception not part of intelligence?

I would say thought and language are intuitively the most important things to most of us.

"Suppose that someone were to start marketing hyperrealistic sex dolls, which could talk, scream, cry, bleed etc in a convincing way, and which could be "raped" and "killed" for a suitable fee."

When you wrote this, did you imagine these hyperrealistic sex dolls as women or as men? Are large language models gendered in the same way?


I was imagining them as women (as far as I know, almost all sex dolls are female), but making them male puts an interesting spin on it! You have an SF short story with a great twist ending there :)

Would you have an ethical problem with a violent video game in which you inhabit the role of a soldier fighting in WW2? Would you feel differently if the video game in question involved committing violence against Afghan children?


I don't much like violent video games in general. I'd be extremely unhappy with any video game that involved hurting or killing children, and I wonder whether such games are even legal.

