Nick Bostrom

“The traditional illustration of the direct rule-based approach is the “three laws of robotics” concept, formulated by science fiction author Isaac Asimov in a short story published in 1942. The three laws were: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Embarrassingly for our species, Asimov’s laws remained state-of-the-art for over half a century: this despite obvious problems with the approach, some of which are explored in Asimov’s own writings (Asimov probably having formulated the laws in the first place precisely so that they would fail in interesting ways, providing fertile plot complications for his stories). Bertrand Russell, who spent many years working on the foundations of mathematics, once remarked that “everything is vague to a degree you do not realize till you have tried to make it precise.” Russell’s dictum applies in spades to the direct specification approach. Consider, for example, how one might explicate Asimov’s first law. Does it mean that the robot should minimize the probability of any human being coming to harm? In that case the other laws become otiose since it is always possible for the AI to take some action that would have at least some microscopic effect on the probability of a human being coming to harm. How is the robot to balance a large risk of a few humans coming to harm versus a small risk of many humans being harmed? How do we define “harm” anyway? How should the harm of physical pain be weighed against the harm of architectural ugliness or social injustice? Is a sadist harmed if he is prevented from tormenting his victim? How do we define “human being”? Why is no consideration given to other morally considerable beings, such as sentient nonhuman animals and digital minds? The more one ponders, the more the questions proliferate. Perhaps”

Nick Bostrom, Superintelligence: Paths, Dangers, Strategies