Eh 👁️

Sometimes I think the secret to a happy life is the ability to manage cognitive dissonance. I’ll give you an example. Over brunch, a friend tells you they believe the apocalypse is imminent. You agree, not because you want to be polite, or because you’re playing along with a bit, but because you read the news too, and so it’s obvious that humanity is fucked. Believing this, you might skip the egg-white omelet and opt for a full stack of chocolate chip pancakes — if we’re fucked, calories don’t count. Or, you might dine and dash like you’re Ricky Schroder in a very special episode of Silver Spoons — if we’re fucked, there’s no sense paying your bills. Or, if you’re really embracing the apocalypse now scenario, you might order the chocolate chip pancakes, skip out on the bill, steal a motorcycle and a pair of ass-less leather chaps, and go marauding in the wasteland. Point is, you believe there’s no future, and yet you act as if the future matters. This is cognitive dissonance. The ability to manage it is what makes it possible to enjoy brunch, on the one hand, and stomach the news, on the other.

Lately, I’ve been working overtime to manage the cognitive dissonance associated with artificial intelligence. As a writer, I spend an unhealthy amount of time in online writer communities that tend to view everything about AI as unethical and immoral. Half the posts are about how the AI crowd consists of Bond villains bent on stealing everyone’s material, putting us all out of work, and sucking up the Earth’s resources. I (mostly) agree with that sentiment. The rest of the posts are about how writers who use AI, in any way, are class traitors who should fuck off and die, or at the very least refrain from calling themselves writers. I know these posts are written by humans, but I can’t help but notice that, in the aggregate, they commit one of the sins people level at AI writing, namely that it’s cookie-cutter slop. Turns out, originality and voice are difficult for humans and machines. Anyway, I (completely) disagree with the class traitor genre of posts.

On the other side of the cognitive divide there’s the world as it is, not as we wish it to be. Here, AI is increasingly ubiquitous, not because it has lived up — or ever will — to Sam Altman’s wildest hype, but because there are countless buggy, not-quite-ready-for-prime-time AI tools providing real utility to real humans who aren’t spending their time raging against the machines. Put another way, you can go full-ostrich on the AI Revolution, and you can scream into the sand that it must stop now, but the world will continue spinning.

When it comes to AI, I have one foot in each camp. My heart is with the idealists, but my head is with the realists. In practical terms, that makes life tricky in the same way that I imagine being an undercover cop is tricky. At work, I am pro-AI. Among my fellow writers, I am anti-AI. Neither one of these identities is core to who I am, but like the undercover cop’s, my life and livelihood depend on saying the right thing, at the right time, to the right people. More importantly, each situation requires me to believe what I’m saying, even though I contradict myself. And in fact, I do believe that AI is:

Awful / Wonderful

Depressing / Exciting

Over-hyped / Under-rated

Wasteful / Efficient

Destructive / Constructive

I could go on, but you get it. Two conflicting ideas, one human brain, and a buttload of cognitive dissonance. Which brings me to the week that was.

At work, I edited a piece about the AI gender gap. Turns out, men are using AI at higher rates than women, which means women are in danger of falling behind. Unlike the women in the writing communities I belong to, the woman who wrote the piece doesn’t have the luxury of going full-ostrich on AI. Actually, she doesn’t believe any woman has that luxury, regardless of occupation; that’s why she wrote the piece.

While editing that piece, I came across an essay about AI denialists, aka the ostrich crowd. I recognized my peers among the denialists, but perhaps more importantly, I also recognized myself.

Also at work, my boss said they’d reimburse me for subscriptions to Claude, ChatGPT, and other AI tools. Later that day, I used Claude to perform a task that we previously would’ve considered important, but not worth the time. With Claude’s help it took a few minutes instead of a few hours.

In my spare time, I joined my friend Alex, who hosted a group writing session on Zoom. Alex had to skip out in the middle of the session, so he put Seth in charge. Seth put on some music. I wrote my ass off, and at the end of the session, I complimented the music. That’s when Seth dropped a bomb: The music I’d been jamming out to was AI-generated. Seth joked that every writer on Substack would come at me with pitchforks. It was funny because it was true. Sort of.

Also in my spare time, I signed up for a subscription to ElevenLabs, an AI company that specializes in audio. I wanted to try their voice cloning tool. The idea of cloning my voice sounded creepy and strange, but it also sounded cool and (potentially) useful. I’ve always wanted to create audio versions of my stories. In fact, my dream isn’t to publish books, but to produce audiobooks, because audio is my primary way of experiencing fiction and nonfiction. I’ve worked on performing my own books and experimented with hiring voiceover artists. The results haven’t been great. Meanwhile, Substack provides readers who use their app with an AI voice that reads my stories. And of course there are dozens of non-Substack tools that do the same thing. In other words, my stories are already being performed by AI, whether I like it or not.

I am he as you are he, as you are me and we are all together

To clone my voice, I asked my friend Todd to put together a file of me telling him Situation Normal stories from a podcast we did together. ElevenLabs said I needed at least 30 minutes of material; Todd delivered three hours of me.
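For the technically curious, the cloning step itself isn’t much code. Here’s a minimal sketch, assuming the older elevenlabs 0.x Python package (its top-level clone and generate helpers; the current SDK organizes things differently, and the file names below are made up):

```python
# A rough sketch, not my exact steps. Assumes the older elevenlabs 0.x
# Python package; newer SDK versions name and organize these calls
# differently. The API key and file names are placeholders.
from elevenlabs import set_api_key, clone, generate, save

set_api_key("YOUR_ELEVENLABS_API_KEY")  # from your ElevenLabs account

# Build the voice from sample audio. ElevenLabs wanted at least
# 30 minutes of material; Todd's file gave me three hours.
voice = clone(
    name="Clone Michael",
    description="Me, telling Situation Normal stories to Todd",
    files=["todd_situation_normal_stories.mp3"],
)

# Have the clone read something, then write the audio to disk.
audio = generate(text="We're doomed, says the barista.", voice=voice)
save(audio, "clone_michael_demo.mp3")
```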

It took a few hours for the AI to clone my voice and a few more hours of tinkering to dial it in. Actually, the tinkering continues, but that’s another post. The point is that in a single afternoon, I used an AI tool to produce Clone Michael, who, it turns out, does a far better job of reading my stories than I do. I was upset / excited. See: cognitive dissonance. Anyway, this is what Clone Michael sounds like reading a Situation Normal story called “We’re doomed, says the barista.”

I don’t know what will become of Clone Michael. There’s more tinkering ahead and more experiments. My hope is that Clone Michael will walk, run, and eventually fly, where real Michael only managed to crawl. If Clone Michael ends up reading my stories, it’ll be because I believe AI will empower, not replace, me.

But maybe there’s an AI that’s better than Clone Michael. While futzing around on the ElevenLabs website, I noticed that they also offered licensed celebrity voices. Again, I felt like I was looking at something that was creepy / strange / cool / useful. One of those voices was Clone Burt Reynolds. Naturally, I needed to know how Clone Burt Reynolds compared to Clone Michael, so I had it read the same story.

As it turned out, I liked Clone Burt Reynolds a lot better. Which makes sense. It’s Burt freaking Reynolds! And I guess that’s the point. For all the whiz-bang technology that goes into AI, it’s the quality of the inputs that determines the quality of the outputs. As computer programmers say: garbage in, garbage out.

Or maybe not.

Burt Reynolds was better than Clone Burt Reynolds, but Clone Michael was better than me. In other words, the same AI tool made one thing (me) better and another thing (Burt Reynolds) worse. That’s the deal with tools. Fire can keep you warm, and it can burn you; a printing press is equally capable of spreading lies and truth; wheels can bring food to starving people and transport an army bent on starving the people; the internet connects society and breaks it apart. Maybe that’s why I’m torn between the two AI camps. I’m primarily worried / hopeful about people, and far less freaked out / geeked out about tools like AI.

A new project from yours truly: Slacker Noir

I launched a new newsletter called Slacker Noir. It’s a place for me to talk about crime & mystery fiction and share book news, like how I’m making good progress on a sequel to Not Safe for Work. Slacker Noir is free, and true to the slacker ethos, I’ll send out new posts when I get around to it.

Cut Me Some Slack

A book for people who 💙 this newsletter

Not Safe for Work is a slacker noir murder mystery set against the backdrop of the porn industry at the dawn of Web 2.0. Like everything you read here, the novel is based on my personal experience, and it’s funny as hell. If you love Situation Normal, there’s a 420 in 69 chance you’ll love Not Safe for Work.

Not Safe for Work is available at Amazon and all the other book places.

The ebook is $0.99, so you can’t go too far wrong. Just sayin’.

IAUA: I ask, you answer

Which AI camp are you in? Hint: Both and neither are acceptable answers.

Egg-white omelet, or chocolate chip pancakes?

🧍🤖?

Burt Reynolds?!

Am I wrong, or am I wrong?

Leave a comment

New here?

Drop your email address in the box to receive future editions of Situation Normal. And if you’re a long-time situation normie who wants to support my work, please consider upgrading to a paid subscription.

Subscribe now
