Two Great Articles On Generative AI
Writer Beware talks about how generative AI is enabling a new round of super-targeted scams. The scammers feed your book into a chatbot, which then generates a highly personalized email praising the book and offering “marketing services.” I got a ton of these scam emails after STEALTH & SPELLS ONLINE, GHOST IN THE SIEGE, and BLADE OF FLAMES came out, and then a bunch more after MALISON: THE COMPLETE SERIES did well on BookBub at the end of August.
I started out with a mildly negative opinion of LLM-based generative AI tools in 2022 and 2023, but I wanted my opinion to be an informed one, so I’ve experimented with them on and off and read a good bit about them. The more I experimented, the more my opinion shifted from mildly negative to highly negative, finally arriving at completely anti-AI this year. I have never used AI for any of my books, short stories, or cover images. I experimented a bit with AI images for Facebook ads, but people generally hated them, so I stopped that entirely. (In fact, Facebook ads have become less effective this year because of all the AI stuff Meta has forced into them, but more on that later.)
So why did I arrive at that position? These tools do not actually do what their advocates promise, and they are hideously expensive to run. The enormous costs and downsides far outweigh any benefits.
In addition to the problems mentioned in the second article (cost, false promises, economic bubbles, and blatant lying about capabilities), I think the fundamental difficulty with generative AI is that it’s essentially a cognitive mirror for its users: a Narcissus Machine, as I’ve called it before.
What do I mean by this? In Greek myth, Narcissus was enraptured by his own reflection. LLM-based AI is essentially Very Fancy Autocomplete: it predicts the statistically most likely response to a prompt. In other words, it ends up mirroring your own thoughts back to you.
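To make the “Very Fancy Autocomplete” point concrete, here’s a toy sketch of my own (the table of word probabilities is invented for this example, and a real model uses a neural network trained on enormous amounts of text rather than a lookup table, but the core guess-the-most-likely-next-word loop is the same idea):

```python
# Toy "autocomplete" illustration. The probabilities below are made up;
# a real LLM learns them from vast quantities of training text.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def complete(prompt: str, steps: int = 4) -> str:
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-2:])          # look only at the last two words
        choices = toy_model.get(context)
        if not choices:
            break
        # The whole trick: append whichever word is statistically most likely.
        words.append(max(choices, key=choices.get))
    return " ".join(words)

print(complete("the cat"))  # -> "the cat sat on the mat"
```

Nothing in that loop “knows” anything. It just echoes the most probable continuation of whatever you fed it, which is exactly the mirror effect I’m talking about.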
So I think LLMs are highly prone to inducing an unconscious confirmation bias in the user. “Confirmation bias” is the cognitive bias of interpreting new information as confirming one’s existing beliefs. I think even highly intelligent people using LLMs are prone to this, because the AI model settles on the most statistically likely response to the prompt. That means that, consciously or not, you are guiding the LLM to give you responses that please you! This is why you see, on the tragically hilarious side, people who are convinced they’ve invented a new level of physics with the LLM or taught it to become self-aware, and on the outright tragic side, people who have mental breakdowns because of their interactions with the LLM.
Grimly enough, I suppose the problem is going to sort itself out when the AI bubble crashes, whether in a few months or a few years. As the linked article mentioned, AI companies have no clear path to profitability save for chaining together infinite NVIDIA graphics cards and hoping they magically stumble into artificial general intelligence or a superintelligence.
The downside is that this is going to cause a lot of economic disruption when it all falls apart.
I know I’m very negative about AI, but 1) I see hardly any good results or actual benefits from the technology, 2) lots of technology products are getting worse from having AI stuffed into them, and 3) the few good results that have come about will not last, because the data centers are burning cash like there’s no tomorrow.
-JM