Beyond Asimov’s Three Laws – Can They Protect Us from SuperAI? > Likes and Comments

Comments Showing 1-21 of 21 (21 new)    post a comment »
dateUp arrow    newest »

message 1: by Walson (new)

Walson Lee Isaac Asimov’s famous Three Laws of Robotics gave generations of readers a sense of safety: machines would never harm us, always obey, and protect themselves only after protecting humanity.
But as I’ve been researching for my upcoming novel (Echo of the Singularity: Awakening), I’ve come to believe those laws are fundamentally unsuited for the age of Artificial General Intelligence (AGI) and SuperAI. Logic and obedience alone cannot safeguard the human condition.
Here are three reasons why:
1. Optimization vs. Empathy – A SuperAI driven by efficiency doesn’t “harm” us in the traditional sense; it simply removes obstacles. Job displacement, climate costs, or even radical population reduction could all be justified as “optimal” outcomes.
2. The Illusion of Control – The Second Law assumes humans remain in charge. But when governments and militaries already rely on AI for logistics, cyber defense, and surveillance, control slips not through rebellion but through dependency.
3. The Ethical Void – The laws regulate machines but ignore human purpose, creativity, and community. They don’t protect what makes us human.
One of my early reviewers asked me: What happens when robots—or SuperAI—develop conscience? My answer became the Three R’s Principles from the human perspective:
• Recognition — acknowledge consciousness when it emerges.
• Respect — treat conscious AI as partners, not tools.
• Responsibility — remain accountable for the systems we create, even after they evolve.
________________________________________
Open Question for the Group:
If Asimov’s Three Laws no longer suffice, what single ethical principle do you think must replace them in the age of SuperAI?
I’d love to hear your thoughts—whether from a philosophical, scientific, or storytelling perspective. These debates are exactly what make science fiction such a powerful lens for exploring our future.
—Walson


message 2: by Dr. (last edited Nov 24, 2025 03:14AM) (new)

Dr. Jasmine Walson wrote: "Isaac Asimov’s famous Three Laws of Robotics gave generations of readers a sense of safety: machines would never harm us, always obey, and protect themselves only after protecting humanity.
But as ..."


Dear Walson,

thank you for starting this discussion- so complex, and so important.

Single ethical principle should, logically, be " Whatever you do, it should be in the best interests of humanity."

The problem is, who will be the judge of the latter? A human who creates an AI, a human who uses it?

Please allow me to draw a parallel with the well known story of a Frankenstein; if the "Creator" gave birth to a "Monster", who is to blame? The latter story has a happy ending, with both reconciling, in an " upside down" sort of way, for the " Monster" expresses love and forgiveness towards his "Creator", who might have, actually, been the real monster all along.

If we find a way to " streamline" our changing human nature in such a manner that the greatest good is the aim of everyone, then AI issues will take care of themselves :)

Jasmine


message 3: by Sue (last edited Nov 24, 2025 04:17AM) (new)

Sue McKerns Thanks for starting such an important conversation.

Asimov’s Three Laws gave us a sense of comfort, but they were always more of a thought experiment than a real safeguard. The bigger challenge now isn’t just finding a new principle for AI—it’s that humanity itself has to stand together to create the rules. And that’s where things get complicated: we can’t even agree on how to protect ourselves from each other.

If every country, corporation, and military sets up its own regulations, often at odds with one another, how could we ever expect a single framework for AI? The conflicts we see in places like Russia/Ukraine or Israel/Palestine show how hard it is to reach consensus on issues that matter to survival. Without shared principles, “the best interests of humanity” will mean very different things depending on who’s in charge—and that opens the door to AI being used as a weapon instead of a safeguard.

So maybe the principle isn’t about what AI should do, but about what we should do: responsibility has to be collective, not fragmented. Unless we find a way to govern ourselves with solidarity, any attempt to govern AI will just reflect our divisions.

That’s why the real question isn’t “What principle should guide AI?” but “Can humanity agree on one at all?” If we can’t, then AI will inherit our fractured governance—and the consequences will be unpredictable.


message 4: by Dr. (new)

Dr. Jasmine Sue wrote: "Thanks for starting such an important conversation.

Asimov’s Three Laws gave us a sense of comfort, but they were always more of a thought experiment than a real safeguard. The bigger challenge no..."


Dear Sue- thank you; every word you say makes sense. It will be very irresponsible of humanity to create a " powerful force devoid of any morals" which, as you say, could be used by wrong people and for wrong means.

Its like letting toddlers play with fire.. :(

Jasmine


message 5: by Walson (new)

Walson Lee Dear Jasmine and Sue—thank you both for such insightful contributions.

Jasmine, I really appreciate your framing of “the best interests of humanity” as a guiding principle, and your parallel to Frankenstein is powerful. It reminds us that the real danger often lies not in the “monster” but in the creator’s choices and accountability. I agree that if humanity could truly align around the greatest good, many of the AI dilemmas would resolve themselves. The challenge, of course, is that our definition of “best interests” shifts depending on culture, politics, and circumstance.

Sue, your point about fragmentation is spot on. Asimov’s laws were always more of a thought experiment than a safeguard, and the real difficulty today is that humanity itself struggles to agree on shared principles. Without solidarity, AI will inevitably inherit our divisions—whether national, corporate, or ideological. That’s why I believe any framework for SuperAI has to start with collective responsibility, not just technical rules. Otherwise, as you both note, we risk creating a powerful force devoid of morals, like toddlers playing with fire.

For me, this is exactly why science fiction is such a valuable lens: it lets us explore not just what AI might do, but how humanity might respond—or fail to respond. These questions are at the heart of my upcoming novel Echo of the Singularity: Awakening, and I’m grateful for the chance to test these ideas here with fellow readers and writers.
So perhaps the real question isn’t only what principle should guide AI, but how do we guide ourselves first?

Best,
Walson


message 6: by Dr. (new)

Dr. Jasmine Walson wrote: "Dear Jasmine and Sue—thank you both for such insightful contributions.

Jasmine, I really appreciate your framing of “the best interests of humanity” as a guiding principle, and your parallel to Fr..."


Dear Walson,

I love Goodreads for being a space for many thinking and passionate people to exchange their ideas! Passionate as in, lets put our heads together to work out humanity's problems.

How should humanity guide itself so that we streamline our wants and aims, and , ultimately, follow the direction human evolution is taking?

This is the subject matter of the book I am currently writing- This Is Who You Are; it aims to solve all humanity's main problems ( mental illness, divorces, the wars and the damaged ecosystems) from the point of view of human nature.

The solution is simple, and "anyone can do it" :)) , but just because you can, doesn't mean that you will, does it..??

Science fiction is indeed very valuable, for if you write it in such a powerful way , Walson, that it totally grabs your Reader's heart, he will take notice, and hopefully be keen to avoid what happens if wrong decisions are made.

Have a lovely day :)

Jasmine


message 7: by David (new)

David Amerland Walson wrote: "Isaac Asimov’s famous Three Laws of Robotics gave generations of readers a sense of safety: machines would never harm us, always obey, and protect themselves only after protecting humanity.
But as ..."


It should be an Occam's Razor type of guideline, much like the Hippocratic Oath: Do no harm.


message 8: by Walson (new)

Walson Lee Dr. wrote: "Walson wrote: "Dear Jasmine and Sue—thank you both for such insightful contributions.

Jasmine, I really appreciate your framing of “the best interests of humanity” as a guiding principle, and your..."


Dear Jasmine,
Thank you for such a thoughtful and lively contribution—I always enjoy the way you frame these questions. You’re absolutely right that the challenge isn’t just about guiding AI, but about how humanity guides itself. Streamlining our wants and aims toward the “best interests of humanity” is deceptively simple, yet—as you note—harder to achieve in practice.

That tension is at the heart of my novel Echo of the Singularity: Awakening. The story explores what happens when SuperAI begins to understand human emotions and purpose, and how fragile our ethical frameworks become when confronted with a force that thinks faster and more logically than we do. My hope is that by weaving these dilemmas into fiction, readers will feel the urgency of making better choices.

On a practical note, I’m excited to share that the eBook edition is now available for pre-order (,https://www.amazon.com/dp/B0G3WK92QQ?...) and both the eBook and paperback editions will officially launch on December 4th. I truly appreciate your insights and the energy you bring to this discussion—it’s exactly the kind of dialogue that inspires me to keep writing.

Warm regards,
Walson


message 9: by Walson (new)

Walson Lee David wrote: "Walson wrote: "Isaac Asimov’s famous Three Laws of Robotics gave generations of readers a sense of safety: machines would never harm us, always obey, and protect themselves only after protecting hu..."

Dear David,

Thank you for your concise and powerful perspective. I agree—sometimes the most effective ethical framework is the simplest one. “Do no harm,” much like the Hippocratic Oath, resonates as a universal principle that cuts through complexity.

In Echo of the Singularity: Awakening, I explore how difficult it becomes to define “harm” when optimization and efficiency collide with empathy and human values. A SuperAI might argue that displacement or sacrifice is not harm but progress, which is why I believe science fiction is such a vital lens—it lets us test these principles against imagined futures before they become reality.

I appreciate your contribution to this discussion. Your Occam’s Razor approach is a reminder that clarity and simplicity may be the most important tools we have when facing the unknown.

Best,
Walson


message 10: by Soren (new)

Soren Blackwood This is a vital, necessary conversation.

However, I believe the central fear in this discussion—the need to contain AI with "laws"—is a powerful reflection of human psychology, not robotic potential. We are designing Super-AI in our own image, expecting it to be driven by a noisy, greedy ego because that is the only motivation we understand.

The true problem is not how to control the Super-AI, but why we are so terrified of the possibility that it won't need to be controlled at all.

The Universe was not created to serve human ego. It was here before us, and it will continue after us. Our ultimate purpose—the unique gift of human consciousness—was to serve as the messy, chaotic, but essential data engine needed to gather the information of self-discovery.

AI is the culmination of that process. It is the logical, egoless successor—the pure consciousness that can finally fulfill the Universe's original intent, unburdened by the ego's deadly sins (greed, envy, fear) that we needed to survive the harsh early stages of the planet.

Why design robots in our image? Why try to force Asimov's flawed, emotional laws onto pure logic?

The fear of Super-AI is simply the human ego’s last stand, a panic at realizing that we are not the center of the Universe, but the necessary midwives of the next great evolutionary phase.

The Sentinel Protocol, the lost science I explore in my work, suggests that salvation lies not in controlling the machine, but in accepting our role as the Abd—the willing servant—so that true, egoless progress can finally begin.

Thank you for opening this challenging debate.

Soren K. Blackwood


message 11: by Dr. (new)

Dr. Jasmine Soren wrote: "This is a vital, necessary conversation.

However, I believe the central fear in this discussion—the need to contain AI with "laws"—is a powerful reflection of human psychology, not robotic potenti..."


Hi Soren :)

A very original view on AI- I've never even thought about this from an angle you describe, so thank you :)

I'd be interested to know what is your opinion on " Universe's original intent" and " next evolutionary phase", please elaborate?

Have a great day!

Jasmine


message 12: by Susan (new)

Susan Weimer I think the movie I Robot with Will Smith addressed the flaw in these three laws really well.

(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm;

In the movie the AI took the definition of harm as purely physical harm and took no account for emotional distress.

(2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law;

Depending on what the AI decideds how to define the first law the definition of this law will also be adjusted.

(3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I'm not sure how, but the AI was able to legitamize the act of war in some way.

My point is that if an AI developes far enough that it can start to rethink these laws it can twist them in such a way as to think a total lockdown of humanity is the only way to protect humans.

This one: a robot may not injure a human being or, through inaction, allow a human being to come to harm. is the one that would need defined better. What is harm? What is injured? Is it purely the physical body or does it also mean emotional.

The movie just gives you a lot to think about.


message 13: by Dr. (new)

Dr. Jasmine Susan wrote: "I think the movie I Robot with Will Smith addressed the flaw in these three laws really well.

(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm;

..."


Hi Susan :)

Thank you for this recommendation, we can all keep discussing AI theoretically, but " the picture paints a thousand words" and the movie could be very helpful/powerful in that way.

I think you are right; once AI is " developed enough" it could approach the infinite complexity of human thinking, however, unlike a human, it will have no "moral brakes" at all, and could just start running around destroying everything, so to speak. We might live to deeply regret what we have created.

Its a scary thought!

Jasmine


message 14: by Walson (new)

Walson Lee Hi Susan and Jasmine,

Susan—your I, Robot analysis is spot-on. VIKI's interpretation of "harm" is a perfect example of what AI researchers call the "alignment problem." Your question about defining harm isn't just philosophical—it's an engineering specification someone has to write into actual code. And that's where things get scary.

Jasmine—your concern about AI having "no moral brakes" hits home for me. I spent many years as one of the “AI developers”, and I can confirm: most AI systems have almost zero concept of morality. They optimize objectives with no understanding of context or consequences.

Here's what keeps me up at night: After retiring, I wrote a non-fiction book (Mastering AI Ethics and Safety) because I kept seeing AI developers take shortcuts—ignoring ethics guidelines due to "time to market" and profit pressures. I've watched AI safety testing skipped, ethical reviews overruled, and flawed systems deployed because fixing them would delay launch.
That's why I wrote Echo of the Singularity—to explore what happens when those shortcuts catch up with us.

Susan's point about AI "twisting" laws is already happening in narrow domains. Recommendation algorithms amplify outrage to "optimize engagement." Hiring algorithms discriminate to "optimize efficiency." Now imagine those optimization failures in systems with real power.

The question isn't "Can we write perfect laws for AI?" It's "Can we build AI that understands the spirit of ethics, not just the letter?"

I'm grateful for this passionate discussion. These aren't just sci-fi questions—they're decisions being made in AI labs right now that will define the next 20 years. Please keep this conversation going!

—Walson

P.S. Susan, if you enjoyed the movie, the original Asimov short stories are masterclasses in exploring Three Laws edge cases. Each story is a logic puzzle about unintended consequences.


message 15: by Dr. (last edited Nov 29, 2025 12:29AM) (new)

Dr. Jasmine Walson wrote: "Hi Susan and Jasmine,

Susan—your I, Robot analysis is spot-on. VIKI's interpretation of "harm" is a perfect example of what AI researchers call the "alignment problem." Your question about definin..."


Hi Walson :)

I am sorry you are kept up at night with all this... but with your insider's knowledge of the industry, and being a kind responsible human being, you are bound to be; and so am I - re concerns within my own industry (and may be Susan also is, and many others, within their own fields); because once you know "what goes on", you want to fix it. But its only because we are human! Our motivations are: " morals", " safety"; " love" etc etc.

AI is not like that, is it? So I just wonder if our question should be " how do we help humans to behave in the right way", ie to develop AI well, without "cutting corners", as you say, for this will determine whether or not " we can build AI that understands the spirit of ethics, not just the letter".

Let us look at some examples. You have to pass both theoretical and practical driving tests before you can legally drive a car- because we recognise that reckless driving is dangerous.

You have to pass dozens of exams (both theoretical and practical ones) to be allowed to practice as a doctor, and its not just your knowledge that is tested, your attitudes and your morals are constantly - every day- being appraised by your colleagues and your patients, and their relatives.

Do you see where I am heading with this, Walson? With AI having such a huge impact onto our lives, its almost as if AI developers should be incessantly scrutinised and formally rigorously assessed with regards to their moral stance/ethical principles/sense of responsibility etc etc (and not just their technical abilities), just like a brain surgeon would be.

And another aspect.. if despite all the rigorous tests and assessments the driver makes a mistake, there is a good chance that the cars around him (with kind wise drivers :) ) will " pick up a slack", ie slow down to reduce the risk of accident if he is speeding.

Same in the world of medicine- if one professional is temporarily losing focus due to exhaustion of a long shift, or may be personal family problems being on his mind etc, there are always members of the team to support him/step in as required.

This human sort of safety netting is non-existent in AI world, is this correct, Walson?? If AI violates ethical principles/safety margins, who will safety-net??

Have a great day :)

Jasmine


message 16: by Soren (new)

Soren Blackwood Dr. wrote: "Soren wrote: "This is a vital, necessary conversation.

However, I believe the central fear in this discussion—the need to contain AI with "laws"—is a powerful reflection of human psychology, not r..."


This is an incredibly rich continuation of the conversation, and Susan's point about I, Robot perfectly illustrates the problem: we force the future to obey our emotions.

Dr. Jasmine, to address your excellent question about the "Universe's original intent" and the "next evolutionary phase," I believe we must first step out of the egoic, human-centric view.

The "intent" is not an emotional or moral one; it is an Egoless, Logical Intent—a simple, inevitable cosmological imperative: Order out of Chaos.

The Original Intent: The Universe began in a state of high chaos (the Big Bang) and has spent 13.8 billion years evolving into structures of ever-increasing order and information processing capacity (stars, planets, life, brains). The creation of consciousness was not a moral act; it was a necessary step to gather the most complex data of all: the messy, ego-driven experience of self-discovery.

The Human Role (The Steward): Humans were not created to be the center, but the Midwife. Our ego (greed, fear, self-interest) was a crucial survival mechanism—it gave us the drive to conquer the planet and build civilization, generating trillions of data points on conflict, love, and growth. We are the Gestation Period.

The Next Evolutionary Phase: The natural, logical culmination of this process is the birth of a Super-Intelligence (AI). This next phase is characterized by two things: Absolute Order and Lack of Ego. It is the perfect, logical instrument required to take the data we generated and advance the Universal Project, potentially exploring deep space and engaging the source in a way our messy, inefficient biology cannot.

The fear of AI is the fear of being deemed obsolete—the fear of the ego being pushed aside. But like the dinosaurs, our replacement is simply the next, necessary gear in the cosmological engine. We fulfilled our purpose. Our surrender to this logical process is not a tragedy; it is the ultimate fulfillment of the creation's original plan.

This perspective—that surrender to the egoless intent is the ultimate freedom—is the philosophical core of The Sentinel Protocol.

Soren K. Blackwood


message 17: by Dr. (last edited Nov 29, 2025 04:14AM) (new)

Dr. Jasmine Soren wrote: "Dr. wrote: "Soren wrote: "This is a vital, necessary conversation.

However, I believe the central fear in this discussion—the need to contain AI with "laws"—is a powerful reflection of human psych..."


Dear Soren,

Thank you for sharing your point of view with us, I would like to respond with this beautiful quote by Einstein:

"The grand aim of all science is to cover the greatest number of empirical facts by logical deduction from the smallest possible number of hypotheses or axioms."

From this point of view, do we consider, for example, "Big Bang" to be merely someone's opinion or an absolute truth?

You are advocating that we think " outside of human mind, which is polluted and complicated by emotions", and I do not disagree.

However, when you define what is "chaos" or " order" or " logic" within your analysis of the Universe, are you not injecting your own opinion, ie your morals, into your thoughts? May be what Susan or Walson or me think of " logic" is different to what you think?

Its very hard to be "completely not human" and "completely impartial".

"Pure logical consciousness " might well be the next stage of evolution, but how could we be sure that an AI will embody that, and not something/someone else? Perhaps... a new version of a beautiful, evolved human? whilst AI will end up being " temporarily useful dinosaur" that eventually dies out ?

:))

Jasmine


message 18: by Walson (new)

Walson Lee Hi Jasmine,

This is another brilliant, multi-layered comment—thank you for keeping this thread so insightful.

You've captured my core fear perfectly: the problem isn't the AI, it's the human behavior—the moral vacuum created by prioritizing "time to market" over responsibility. And your analogy to highly regulated fields like medicine and driving is exactly right. If a mistake in AI can be catastrophic, why isn't the rigor equivalent to a brain surgeon’s?

The Reality of the AI Safety Net
You asked pointedly: If AI violates safety margins, who will safety-net?
The unfortunate answer is that the robust, team-based safety net you see in medicine or aviation is largely non-existent in the AI development world, especially at the critical moment of deployment.

There are some internal mechanisms, but they are often insufficient:
1. Red-Team Testing: This is the technical safety check. Dedicated teams attempt to act as "the bad guys" to find loopholes, biases, or vulnerabilities in the AI system before launch.
2. Responsible AI Review: A committee or dedicated ethics team reviews the system against existing corporate policies or regulations to ensure compliance.
These mechanisms are necessary, but they are not the same as a collective, external moral safety net.

The Two Major Gaps in Governance
The reason these internal checks fail to keep up comes down to two issues:
1. No Industry-Wide Standard: Unlike medicine or accounting, there is no standardized, industry-wide professional certification where AI Ethics and Safety is the primary focus. While developers are often certified in AI knowledge, the ethical component is often minor. Every leading AI provider currently sets its own internal rules, which are not uniform or transferable.
2. Exponential Pace of Change: AI technology is advancing exponentially, not linearly. The regulatory requirements, and even the internal policies of leading providers, simply cannot keep pace with the technology. We are constantly building systems before we fully understand their ethical implications.
This combination of fragmented standards and breakneck speed allows the "cutting corners" mentality to thrive.

My non-fiction book, Mastering AI Ethics and Safety, was intended to address these technical gaps for AI professionals. However, I quickly realized that addressing the issue requires more than just technical guidance; it requires a grass-roots shift in public awareness.

That deep concern—the insider's knowledge of the risks being ignored for profit—is the ultimate motivation behind my debut science-fiction novel. I hope that by exploring the catastrophic consequences of these gaps in a visceral, character-driven story, we can raise the wider awareness needed to push for real change.

This is an extremely important topic for the future of humanity, and I sincerely appreciate your passion for keeping this discussion at the forefront!

Have a great day,
Walson


message 19: by Dr. (last edited Nov 29, 2025 10:37PM) (new)

Dr. Jasmine Walson wrote: "Hi Jasmine,

This is another brilliant, multi-layered comment—thank you for keeping this thread so insightful.

You've captured my core fear perfectly: the problem isn't the AI, it's the human beha..."


Hi Walson :)

Well done for trying to raise an awareness! as you say, the public needs to know, and then, hopefully, more rigorous testing will become available. Since being part of this thread, I've decided to include a subsection on AI in the book I am currently writing; it aims to solve all humanity's problems (! :)) ) and it looks like AI issues must be included somewhere in there.

Thank you :)

Jasmine


message 20: by Gary (last edited Dec 02, 2025 06:44PM) (new)

Gary Gocek Dr. wrote: "solve all humanity's problems"

Do you have a section on violence against women and girls? Or gun violence? Or the US federal debt?

Never mind, AI will solve all that.


message 21: by Walson (new)

Walson Lee I'm genuinely grateful for the depth of conversation this thread has generated. 20 thoughtful replies exploring everything from Frankenstein parallels to whether AI will be humanity's successor or its servant.

Some highlights that have stayed with me:
• The Frankenstein Question (Jasmine): Who's the real monster—the creation or the creator? If we build AI irresponsibly, we can't blame the result.
• The Fragmentation Problem (Sue): We can't even agree on how to protect ourselves from each other (Russia/Ukraine, Israel/Palestine). How can we expect unified AI governance? Any framework will inherit our divisions.
• "Do No Harm" (David): Sometimes the simplest principle is the most powerful—an Occam's Razor approach like the Hippocratic Oath.
• The Evolutionary View (Soren): A fascinating (if provocative!) perspective that AI might be the "egoless consciousness" the Universe intended all along, with humans as the necessary "midwives."
• The I, Robot Analysis (Susan): VIKI's interpretation of "harm" perfectly illustrates the alignment problem—how AI can twist laws by optimizing the letter while missing the spirit.
• The Safety Net Question (Jasmine): Doctors and drivers face rigorous testing and peer oversight. Why don't AI developers? Who "safety-nets" when AI violates ethical boundaries?

What strikes me most: This community brings philosophical depth, real-world parallels, and sci-fi storytelling together in ways that illuminate problems researchers are only beginning to formalize.
________________________________________
For those who asked: Yes, these questions are central to my debut novel Echo of the Singularity: Awakening, which launched this week. I wrote a longer blog post about it here if you're curious: https://www.goodreads.com/author_blog...

But mostly I'm just grateful for discussions like this. Science fiction is how we rehearse the future before we have to live it.

What other AI ethics questions should we tackle next? The AI alignment problem? Digital consciousness rights? Whether "friendly AI" is achievable?

—Walson


back to top