
Artificial Intelligence: Can We Avoid a Digital Apocalypse?

When you say something about artificial intelligence (AI) and your concerns about it, you find it is a very interesting topic. The question of how to build artificial intelligence that isn't going to destroy us is something I have begun to pay attention to; it is a rather deep and consequential problem. I went to a conference in Puerto Rico focused on this issue, organized by the Future of Life Institute, and I was brought there by a friend, Elon Musk, whom undoubtedly many of you have heard of. Elon recently said publicly that he thought AI was the greatest threat to human survival, perhaps greater than nuclear weapons, and many people took that as an incredibly hyperbolic statement.

Now, knowing Elon and how close to the details he's apt to be, I took it as a very interesting diagnosis of the problem. But I wasn't quite sure what I thought about it; I hadn't really spent much time focusing on the progress you could make with AI and its implications. So I went to this conference in San Juan, which gathered the people who were closest to doing this work; it was not open to the public. There were one, maybe two or three interlopers there who hadn't been invited but got themselves invited. What was fascinating was the range of people in the room: from those who were very worried, like Elon, and felt this was something to pull the brakes on, even though that seemed somewhat hopeless, to those doing the work most energetically, who most wanted to convince everyone else that there was no need to pull the brakes. And what was interesting is how this differed from what I had heard outside the conference. In general discussions, say on edge.org, about the prospects of making real breakthroughs in artificial intelligence, you hear a time frame of 50-100 years before anything terribly scary or terribly interesting is going to happen.

At this conference, that was almost never the case. Even those still trying to ensure this work is done as safely as possible conceded that a time frame of 5-10 years admitted of rather alarming progress. And when I came back from that conference, the Edge question for 2015 happened to be on the topic of artificial intelligence, so I wrote a short piece distilling what my view now was.

Perhaps I'll just read that; it won't take too long, and hopefully it won't bore you.

"Can we avoid a digital apocalypse? It seems increasingly likely that we will one day build machines that possess superhuman intelligence. We need only continue to produce better computers, which we will, unless we destroy ourselves or meet our end some other way. We already know that it is possible for mere matter to acquire 'general intelligence':

the ability to learn new concepts and employ them in unfamiliar contexts. Because the 1,200 cubic centimeters of salty porridge inside our heads can manage this, there is no reason to believe that a suitably advanced digital computer couldn't do the same. It is often said that the near-term goal is to build a machine that possesses 'human-level' intelligence, but unless we specifically emulate a human brain, with all of its limitations, this is a false goal. The computer on which I'm writing these words already possesses superhuman powers of memory and calculation; it also has potential access to most of the world's information. Unless we take extraordinary steps to hobble it, any future artificial general intelligence, known as 'AGI,' will exceed human performance on every task that is considered a source of intelligence in the first place.

Whether such a machine would necessarily be conscious is an open question, but conscious or not, an AGI might very well develop goals incompatible with our own. Just how sudden and lethal this parting of the ways might be is now the subject of much colorful speculation."

21 pages, Kindle Edition

Published November 25, 2018


About the author

Sam Harris



Sam Harris (born 1967) is an American non-fiction writer, philosopher, and neuroscientist. He is the author of The End of Faith: Religion, Terror, and the Future of Reason (2004), which won the 2005 PEN/Martha Albrand Award, and Letter to a Christian Nation (2006), a rejoinder to the criticism his first book attracted. His book The Moral Landscape explores how science might determine human values.

After coming under intense criticism in response to his attacks on dogmatic religious belief, Harris is cautious about revealing details of his personal life and history. He has said that he was raised by a Jewish mother and a Quaker father, and he told Newsweek that as a child, he "declined to be bar mitzvahed." He attended Stanford University as an English major, but dropped out of school following a life-altering experience with MDMA. During this period he studied Buddhism and meditation, and claims to have read hundreds of books on religion. In an August 21, 2009 appearance on Real Time with Bill Maher, Harris stated that he grew up in a secular home and his parents never discussed God. He has stated, however, that he has always had an interest in religion.

After eleven years, he returned to Stanford and completed a bachelor of arts degree in philosophy. In 2009, he obtained his Ph.D. in neuroscience at the University of California, Los Angeles, using functional magnetic resonance imaging to conduct research into the neural basis of belief, disbelief, and uncertainty.



Community Reviews

5 stars: 0 (0%)
4 stars: 1 (33%)
3 stars: 1 (33%)
2 stars: 0 (0%)
1 star: 1 (33%)

No one has written a text review of this book yet.
