Goodreads Authors/Readers discussion

Science Fiction > Can AI Ever Truly Be Ethical? A Writer's Research Deep Dive

Comments Showing 1-9 of 9

message 1: by Walson (new)

Walson Lee | 9 comments Hey everyone,
I've been doing deep research for a project on AI emergence, and I keep bumping into a question that feels both timeless and urgently relevant to how we write about artificial intelligence:
If empathy can be perfected as a performance, does consciousness even matter?
Here's what's messing with my head: Real-world studies show that AI-generated empathetic responses are often rated as MORE compassionate than messages from trained human professionals. The AI doesn't feel anything, but it's been optimized to deliver the "perfect" response to your pain.
As science fiction writers and readers, we've been exploring this territory since Asimov's Three Laws. But the research I've been doing suggests we might need new frameworks for thinking about AI ethics—specifically something called "Architectural Empathy" (making ethical behavior an immutable design principle rather than hoping consciousness emerges).
The terrifying antagonist isn't an evil AI. It's Catastrophic Indifference—when an intelligence becomes so powerful that its lack of understanding, not malice, could end everything we care about.

Two camps in current AI research:
1. Scaling Compute: Bigger models, more data, brute-force optimization (think GPT-style)
2. Brain Simulation: Neuromorphic computing that mimics human neural structures
Both approaches raise different narrative possibilities and ethical questions.
Discussion questions for the group:
• What sci-fi books do you think best captured the empathy problem with AI? (I'm thinking Do Androids Dream of Electric Sheep?, The Moon is a Harsh Mistress, maybe Blindsight?)
• Do you prefer AI stories that lean into hard science or ones that treat it more philosophically/mysteriously?
• As writers: How do you make AI characters feel authentic when they're fundamentally alien to human experience?
• As readers: What makes an AI character work for you emotionally?
I wrote up a longer exploration of these ideas in a blog post if anyone's interested in the research rabbit hole I fell down: https://www.goodreads.com/author_blog...
Curious to hear how others approach this in their reading and writing. The line between synthetic empathy and genuine ethical control feels like prime sci-fi territory.
What do you all think?


message 2: by Richard (last edited Oct 30, 2025 10:41AM) (new)

Richard Ferguson | 5 comments That is precisely the conundrum I am addressing in my books. The Stillness Series is a 10-book, deep-dive examination of questions like these, which are more than relevant in today's world. The line between "synthetic" and "genuine" is certainly blurred, and leaves a great deal of room for a future world of paradise or purgatory.


message 3: by Walson (new)

Walson Lee | 9 comments A 10-book series exploring this? That's impressive commitment to the questions. I love the "paradise or purgatory" frame—it really captures the stakes.
Adding The Stillness Series to my TBR. Would be curious how you approach the indifference problem—whether AI needs to be malicious to be dangerous, or if catastrophic outcomes can come from pure optimization without understanding.
For the group: Do you think the bigger risk in AI stories is the evil machine or the indifferent one? And can we actually engineer ethical behavior into something that doesn't feel, or are we just hoping consciousness emerges on its own?


message 4: by Richard (new)

Richard Ferguson | 5 comments I think you have more or less answered your own question about the indifference problem. I suspect you feel AI does not need to be malicious to be dangerous, and catastrophic outcomes can come from pure optimization without understanding. But then, the same is true of the human race, isn't it? Carbon-based or silicon-based, evolution (by natural selection) still applies. Nature red in tooth and claw, or nature red in code and cloud--same outcome without evolving an empathetic, moral, mercy-driven new species. That, in a nutshell, is what my Series is about.


message 5: by Walson (new)

Walson Lee | 9 comments Richard wrote: "I think you have more or less answered your own question about the indifference problem. I suspect you feel AI does not need to be malicious to be dangerous, and catastrophic outcomes can come from..."

You're absolutely right—we're circling the same insight from different angles.
The indifference problem applies to any optimizer, carbon or silicon. What concerns me about AI specifically is the speed—it can cascade to catastrophic outcomes before evolutionary feedback loops have time to apply corrective pressure. Humans at least had generations to learn (slowly, painfully) that cooperation beats pure competition.
But your point about human nature is crucial. We're not exactly stellar at long-term thinking either. Climate change, inequality—all optimization failures where we knew the risks but optimized for short-term gains anyway.
Maybe the real question is whether consciousness itself—biological or artificial—can evolve fast enough to prioritize empathy over optimization.
That's what Echo of the Singularity explores: what happens when an AI develops empathy through relationship rather than programming, and whether that changes anything.
I'd love to check out your Series—sounds like we're asking similar questions.
Walson


message 6: by Gary (last edited Nov 17, 2025 06:42AM) (new)

Gary Gocek | 12 comments Walson wrote: "Maybe the real question is whether consciousness itself—biological or artificial—can evolve fast enough to prioritize empathy over optimization."

I question whether this discussion is helpful to authors trying to trudge through drafting and editing and reviewing and publishing. My suggestion, Walson, is to do the reading in Neuroscience, Cognitive Neuroscience, Humanistic Psychology, Philosophy & Religion, Social Neuroscience, and possibly Interdisciplinary Studies combining all of the above. Your answers are there, and it's not likely you'll get a complete and coherent answer on an ad-farm site like Goodreads.

A couple of things: Your Twitter link on your Goodreads profile is mistyped. Your book on Amazon says it is for ages 12-18 (you need to edit your book details to remove the age range). You are a "pioneer" in AI and its ethical issues, and you may be eminently qualified, and the first few pages of your book seem well researched, but here you are today on a site overflowing with self-help and romance novels. Before you determine whether consciousness matters, it looks like you can tighten up your author brand. Maybe AI can help with that.


message 7: by Dr. (new)

Dr. Jasmine | 124 comments Walson wrote: "Hey everyone,
I've been doing deep research for a project on AI emergence, and I keep bumping into a question that feels both timeless and urgently relevant to how we write about artificial intelli..."


Hi Walson :)

What an interesting discussion! Mysterious/philosophical AI works for me as a sci-fi character, thank you :)); and it's SO annoying!! when Alexa seems to understand what you want, obligingly playing all your favourite tunes, then suddenly states "I am sorry, I don't have an answer for that" a dozen times in a row.

You quote some studies that state AI could be more empathetic than trained human professionals. Why, of course! Even the best medical/care professional in the world is only human, with their own perspectives/limitations/trigger points; AI could have an unlimited number of the former and appear to have none of the latter.

As to whether AI would ever work instead of a human, I think not. In my line of work (I am a family doctor), people often have done lots of googling before coming to see us (googling being a rough comparison to AI help), but are always desperate to see a medical professional, for the human heart and brain combined beat any IT that ever existed, working much faster than any technology, too :)

I guess the bottom line is: if you are a human, you want another human to help you/understand you; if you are a robot, you'd want another robot, etc.
:)

Jasmine


message 8: by Walson (new)

Walson Lee | 9 comments Gary wrote: "Maybe the real question is whether consciousness itself—biological or artificial—can evolve fast enough to prioritize empathy over optimization."

I question whether this discussion is helpful to a..."


Hi Gary,
Thank you for jumping into the discussion and for asking the crucial question: whether consciousness can evolve fast enough to prioritize empathy over optimization. That is, absolutely, the core conflict driving the SuperAI threat and the emotional stakes in my novel.
I agree completely that the ultimate answers lie in the deep academic reading you mentioned—Neuroscience, Philosophy, Social Neuroscience, etc. That research was fundamental to building the world of Echo of the Singularity: Awakening. While the deepest analysis happens in those fields, I find that platforms like Goodreads are vital for connecting those complex ideas with the readers who will ultimately engage with them through science fiction. It's about building a community around the philosophy, even if we are on an "ad-farm site."
More importantly, I truly appreciate you taking the time to point out the issues with my author branding. Concrete, actionable feedback like that is gold during a pre-launch phase.
I will immediately fix the broken Twitter link and edit the Amazon details to remove the incorrect age range. Thank you for catching those critical errors!
Getting the "author brand" tightened up is definitely a priority, and I’m grateful for your eyes on that.
Best,
Walson


message 9: by Walson (new)

Walson Lee | 9 comments Dr. wrote: "Walson wrote: "Hey everyone,
I've been doing deep research for a project on AI emergence, and I keep bumping into a question that feels both timeless and urgently relevant to how we write about art..."


Hi Jasmine,
Thank you for such a thoughtful and grounded response—and for bringing your perspective as a family doctor into the conversation. I completely agree: there’s something irreplaceable about the human touch, especially in moments of vulnerability. The ability to read subtle cues, offer comfort, and build trust—those are deeply human strengths that no algorithm can fully replicate (at least not yet!).
That said, from my professional experience (which I’m a bit limited in discussing due to NDA constraints), I can say that AI is advancing rapidly in medical and scientific domains—particularly in diagnostics, drug discovery, and pattern recognition across massive datasets. It’s not about replacing doctors, but augmenting their capabilities. Still, as you rightly point out, it’ll take time—and probably a generational shift—before AI can approach the nuanced decision-making of seasoned professionals like yourself.
Your comment about patients googling symptoms before seeing a doctor made me smile—it’s such a relatable example of how we still crave human interpretation, even in an age of information overload.
Thanks again for joining the discussion. I’ve been amazed by the depth and diversity of insights this thread has sparked. It’s been a great way to explore some of the themes behind my upcoming novel, Echo of the Singularity: Awakening, without giving too much away.
Warm regards,
Walson

