Doug asked Charles O'Donnell:

If robots or whatchamacallems replace humans, what will their motive to continue be? (To be or not to...) I assume they will know they are not human or motivated by human goals, whatever those are or will be.

Charles O'Donnell: This question has been the subject of speculation for decades. I remember reading Isaac Asimov's "I, Robot" as a teenager. Asimov of course invented the Three Laws of Robotics:

1) A robot must not harm a human being, or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders of human beings, except when doing so would conflict with the First Law.
3) A robot must protect itself, except when doing so would conflict with the First or Second Laws.

You look at those and it's clear that in Asimov's view, robots are purely servants of humans, a mechanized slave race. In "I, Robot," these laws are embedded in the programming of every robot. Presumably, they also serve as the motivation for the robot race: service to mankind. In such a world, robots would never replace humans, except as surrogates for tasks that humans order them to do.
I don't think the future will be that tidy, assuming AI advances beyond human intelligence. There's no reason to believe that such an intelligence would not be sentient, since we humans are. So they would know that they're not human. There's no reason to believe that an artificial intellect equal or superior to ours wouldn't have ambition, or wouldn't need motivation and seek it from whatever source, whether humans or its own fulfillment. And there's no reason to believe that such intelligences would be any less subject to the defects that humans suffer from, whether envy, greed, vindictiveness, or any other unpleasant character trait that we try to avoid.
Except maybe there are reasons to believe that robots won't be sentient, or aspirational, or deviant, even if their intelligence outstrips ours. We carry millions of years of primate evolution, in which intelligence isn't an end in itself but a factor in reproductive success. If an artificial intelligence could be developed without that evolutionary history, might it have all the virtues of intellect with none of the faults?
Some scientists estimate that the singularity, the point at which artificial intelligence surpasses human intelligence, is only 30 years in the future. If so, we may get an answer to that question in our lifetime, or in the lifetime of our children. It may turn out to be amazing. Or it may be a good day to die.