Jeffrey Dallatezza



Jeffrey Dallatezza is currently reading
A Promised Land by Barack Obama (Goodreads Author)
bookshelves: currently-reading

 
Capital in the Tw...

 
On Anarchism
“With the exception of a few shared mythologies about our founding slaveholders and our most murderous wars, we like to imagine that everything we do is being done for the very first time. Such amnesia can be useful, because it lends a …”
“The traditional illustration of the direct rule-based approach is the “three laws of robotics” concept, formulated by science fiction author Isaac Asimov in a short story published in 1942. The three laws were: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Embarrassingly for our species, Asimov’s laws remained state-of-the-art for over half a century: this despite obvious problems with the approach, some of which are explored in Asimov’s own writings (Asimov probably having formulated the laws in the first place precisely so that they would fail in interesting ways, providing fertile plot complications for his stories). Bertrand Russell, who spent many years working on the foundations of mathematics, once remarked that “everything is vague to a degree you do not realize till you have tried to make it precise.” Russell’s dictum applies in spades to the direct specification approach. Consider, for example, how one might explicate Asimov’s first law. Does it mean that the robot should minimize the probability of any human being coming to harm? In that case the other laws become otiose since it is always possible for the AI to take some action that would have at least some microscopic effect on the probability of a human being coming to harm. How is the robot to balance a large risk of a few humans coming to harm versus a small risk of many humans being harmed? How do we define “harm” anyway? How should the harm of physical pain be weighed against the harm of architectural ugliness or social injustice? Is a sadist harmed if he is prevented from tormenting his victim? How do we define “human being”? Why is no consideration given to other morally considerable beings, such as sentient nonhuman animals and digital minds? The more one ponders, the more the questions proliferate. Perhaps”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

Natalie...
1,281 books | 532 friends

Diana Chen
362 books | 22 friends

Kate T
2,291 books | 183 friends

Jade Huang
572 books | 99 friends

Vamsi C...
352 books | 148 friends

Jacob W...
460 books | 13 friends

Sarenka...
27 books | 23 friends

Hannah ...
27 books | 13 friends


Favorite Genres



Polls voted on by Jeffrey

Lists liked by Jeffrey