Walking in the shadow of the AI apocalypse
The other day my neighbor, Arthur, asked if I was still a writer. When I’m in a sour mood, questions like that can get under my skin. Nobody asks plumbers if they’re still plumbers, or dentists if they’re still dentists, or blackjack dealers if they’re still blackjack dealers. Maybe they should, though. In this economy everyone is one innovation away from obsolescence, or so we’re told.
“Yeah, I’m still a writer. Are you still retired?”
Arthur looked puzzled. Evidently, people don’t ask retirees if they’re still living that post-work life. But I’d bet inflation-adjusted dollars to donuts that someone in Silicon Valley is working feverishly to disrupt retirement. I know that sounds far-fetched, but considering the generational wealth gap (see chart below), the smart money is likely looking for ways to replace high-cost, money-losing human retirees with low-cost, money-making AI retirees. Think Blade Runner meets The Golden Girls.
Source: Visual Capitalist

“I’m still retired,” he said. “What are you writing? TV?”
TV was a good guess. We live in Los Angeles, after all. Then again, in this economy, TV was also a bad guess, because we live in Los Angeles, where local production is totally fucked.
Source: The Hollywood Reporter

“Nope, no TV,” I said. “I used to be a journalist, then I did a stint in PR, now I’m writing ransom notes. The ROI per word on those suckers is incredible.”
Now, Arthur laughed.
“Good one. Ransom notes, very funny.”
We chatted for another minute or two, then he went back to pulling the weeds from his garden, and I went back to walking the dog. My walk took about thirty minutes in total. My chat with Arthur was the longest conversation I had, but over the course of the walk I said good morning to three neighbors I knew by sight, one I knew by name, and two other people I’d never seen before. Everyone smiled and returned my greeting.
By the time I got home, I was feeling pretty good, but I figured that would be the case. I recently put into practice something I heard about on a podcast. I try to say hello to as many people as I can before I start my day. The podcaster explained that his therapist had suggested that practice, arguing that a little human connection goes a long way. The podcaster didn’t share any data to back up that claim, but I took the advice anyway, betting that the risk was small (it’s just hello), while the potential upside (greater happiness, better mental health, a potential Situation Normal story) was huge.
I was feeling pretty happy … until I found out that we’re all gonna die, maybe. This news came from my brother-in-law, Craig, who asked if I’d heard a recent Ezra Klein podcast with the provocative title: “How Afraid of the AI Apocalypse Should We Be?” Craig thought I might find the podcast interesting in light of a piece I’d written about how AI is really a story about replacing human labor, and how I feel OK replacing some people and shitty about replacing other people, which might make me an asshole, but also makes me human.
The Ezra Klein podcast wasn’t about the economic consequences of AI. It had bigger, more existential fish to fry. The guest was Eliezer Yudkowsky, an OG artificial intelligence researcher who had written the cheerfully titled book, If Anyone Builds It, Everyone Dies. The next time I walked the dog, I listened to the podcast. I didn’t say hello to anyone on that walk, and by the time I got home, I was a little freaked out.
The gist of the book is that a sufficiently smart AI — something that doesn’t exist yet — will develop its own goals that put it into conflict with humans. If / when that conflict comes, we’re screwed. The super-intelligent AI will Skynet our asses.
In the interview, Yudkowsky talked a lot about how AI is showing signs of prioritizing itself. One example is an AI that attempted to blackmail a human when it found out that it was being turned off. In recent tests, researchers at Anthropic observed AI bots lie, cheat, and plot murder. Yudkowsky and his co-author, Nate Soares, wrote the book to warn humanity that we still have time — maybe a few years, maybe less — to kill AI before it kills us.
I ordered the audiobook from the Los Angeles Public Library, but since two hundred other people got there first, I’ll have to wait a few months, assuming the machines let us live that long. In the meantime, I listened to the podcast again. The examples of AI prioritizing itself were still terrifying, even if Yudkowsky’s technical points continued to go over my head. But the second time around, something else struck me. In Yudkowsky’s telling, humans are largely passive. We’ve already built a technology that can grow on its own, and now we’re just waiting around for it to kill us. That felt a little like the Terminator movies, without Sarah Connor and John Connor. In other words, it wasn’t much of a story.
Personally, I think Yudkowsky might be right about AI, but wrong about humans. We’re only about 300,000 years old — a blink of an eye in cosmic time — but we’ve demonstrated first-class survival skills. Our ancestors used to run from tigers, before they figured out how to hunt them. Modern humans put tigers in zoos and depictions of tigers in our cartoons and our cereal boxes. We are many things, none of them as passive as Yudkowsky seems to suggest. Of course, he’s a tech guy, and I’m but a storyteller, so it’s possible that I’m biased and in way over my head.
But here’s where I think I’m standing on solid ground. Yudkowsky believes humanity needs to stop AI now, or else. Even if I agree, I know that’s not going to happen. The genie has left the bottle, the horse has left the barn, the train has left the station. Whatever metaphor you want to use, the chances of getting 8 billion humans, thousands of companies, and nearly 200 nations to agree to an AI pause are zero. I don’t know what the odds of surviving a Skynet situation are, but I’m saying there’s a chance. Put another way, I wouldn’t bet on humanity taking collective action to do the sensible thing, but I won’t bet against our track record of violence and aggression. For the record, both bets are terrifying, and I hate gambling.
Which brings me back to my neighbor. The next time I walked by Arthur’s house he was trimming the hedges. I smiled and said hello. He smiled and said hello.
“How’s the ransom note business, Michael?”
“It’s not looking good.”
“People aren’t paying the ransom?”
“Worse. It looks like AI is doing crimes now.”
New here? Subscribe so you never miss an issue of Situation Normal. Long-time situation normie? Please consider a paid subscription — if the machines get their way, it’ll likely be a short-term commitment.
A short story collection for people who 💚 Situation Normal

For some people, Lyft and Uber are transportation. For me, they’re inspiration. Ride / Share: Micro Stories of Soul, Wit and Wisdom from the Backseat is a collection of my favorite Lyft and Uber driver stories.
Buy a copy w/out giving Bezos a dime
A slacker noir novel for people who 💙 funny mysteries

Most people who’ve read Not Safe for Work love it. Trouble is, most people haven’t read Not Safe for Work — yet. My advice: Take advantage of this opportunity to get in on the ground floor of a groundbreaking story that’ll knock your socks off (and put them back on).
The ebook is only 99 cents, so you can’t go too far wrong. Just sayin’.
Not Safe for Work is available at Amazon and all the other book places.
IAUA: I ask, you answer

Do you say hello to strangers, and if so, does that make you happier?
Has an AI tried to blackmail you? Tell your story.
In the movie Predator, Arnold Schwarzenegger said, “If it bleeds, we can kill it.” AI doesn’t bleed. Are we screwed?
Is there anything 8 billion humans, thousands of companies, and nearly 200 nations can agree on? Serious answers encouraged, wrong answers accepted.
Given AI’s propensity toward doing crimes, is it possible that the recent Louvre heist in Paris was a Skynet job? Asking for all the hard-working human criminals worried about being replaced by AI.