
Facing the Intelligence Explosion

Sometime this century, machines will surpass human levels of intelligence and ability. This event—the “intelligence explosion”—will be the most important event in our history, and navigating it wisely will be the most important thing we can ever do.

Luminaries from Alan Turing and I. J. Good to Bill Joy and Stephen Hawking have warned us about this. Why do I think Hawking and company are right, and what can we do about it?

Facing the Intelligence Explosion is my attempt to answer these questions.

91 pages, Kindle Edition

First published January 1, 2013

12 people are currently reading
294 people want to read

About the author

Luke Muehlhauser

6 books, 12 followers

Ratings & Reviews


Community Reviews

5 stars: 50 (27%)
4 stars: 73 (39%)
3 stars: 43 (23%)
2 stars: 16 (8%)
1 star: 1 (<1%)
Miles · 511 reviews, 182 followers · January 30, 2016
This is a useful but brief primer on critical thinking as it relates to the future of technology and the risks/potentials of creating superintelligent AIs. It's the kind of text I wish I'd read when I was first becoming interested in these elements of modern life, instead of after I'd already begun to dig into the popular literature surrounding the transhuman and futurist movements. Muehlhauser seems like a very smart guy, and I appreciate how his tiny book works very hard to make some pretty complex ideas available to laypeople. However, the text also suffers from some instances of oversimplification.

One example of this is the way Muehlhauser frames the problem of using instrumental rationality to achieve desirable human goals. He repeatedly invokes the concept of "winning" to describe an instance of successfully reaching a goal. He probably does this in an attempt to make his arguments accessible, but it has the unfortunate side effect of importing some unhelpful connotations that accompany the idea of victory, especially as it exists in our modern American, ultra-capitalist culture. Given that Muehlhauser has obviously read up on the power of psychological framing, I'm surprised he didn't realize that the notion of "winning," in almost all instances, automatically generates the underlying assumption that someone (or something) has simultaneously "lost." This perpetuates the vicious myth that all aspects of life and progress are zero-sum games, rather than exposing the truth that zero-sum games are largely a human construct. My problem with this isn't that competition and succession aren't a natural part of biological (and technological) development, because they are. Rather, it's that Muehlhauser also fails to acknowledge that cooperation and symbiosis occupy an equally important seat at the evolutionary table. He focuses too much energy on proposing ways that we can "control" AI and "make it safe" rather than proposing that we try to create AIs that won't want a world in which human values and needs are rendered entirely obsolete. So while I believe that Muehlhauser is very much committed to precisely this sort of outcome, I think this book can leave readers with the feeling that either we "win" or AIs do, without much room for collaboration or middle ground. Frustratingly, this whole issue could have been avoided by including a brief yet nuanced discussion of how the concept of "winning" isn't very useful, and by reframing the AI challenge in terms of prosperous, symbiotic commerce between man and machine (positive-sum instead of zero-sum). The book's last chapter leads me to believe that this is exactly what Muehlhauser is trying to argue for, but in my opinion the message didn't come across clearly enough.
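
To make the zero-sum versus positive-sum distinction concrete, here is a minimal sketch; it is not from the book, and the payoff numbers are invented purely for illustration:

```python
# Two hypothetical "human vs. AI" payoff matrices, mapping a pair of
# strategies to (human_payoff, ai_payoff). Numbers are invented.

zero_sum = {            # one side's gain is exactly the other's loss
    ("compete", "compete"): (+1, -1),
    ("compete", "yield"):   (+2, -2),
    ("yield",   "compete"): (-2, +2),
    ("yield",   "yield"):   (0, 0),
}

positive_sum = {        # cooperation enlarges the total payoff
    ("cooperate", "cooperate"): (+3, +3),
    ("cooperate", "defect"):    (-1, +2),
    ("defect",    "cooperate"): (+2, -1),
    ("defect",    "defect"):    (0, 0),
}

def total_welfare(game):
    """Sum of both players' payoffs for each outcome."""
    return {moves: sum(payoffs) for moves, payoffs in game.items()}

print(total_welfare(zero_sum))      # every outcome sums to 0
print(total_welfare(positive_sum))  # mutual cooperation sums to +6
```

In the zero-sum matrix no strategy changes the total; in the positive-sum one, mutual cooperation grows the pie, which is the reframing the review argues for.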

While I can't blame Muehlhauser for inventing the concept, I have to say that I really hate the central role that "utility functions" are coming to play in discussions about the future of humanity. This idea always leads me down the road of thinking that technology enthusiasts want to live in a world in which every element of sentient experience can be readily quantified. So we start slapping numbers on all our habits and activities, assigning them values that supposedly communicate their "utility" in terms of how they contribute to or detract from the achievement of various goals. I won't argue that such a paradigm is physically or statistically impossible, but I will argue that I find it personally detestable. I don't want to live in a world where I'm constantly being reminded that I'm "wasting" my time doing things I enjoy simply because they might not contribute to the "goals" I am personally trying to reach, or that my community has chosen (either with or without my consent). That kind of life just seems exhausting and experientially impoverished. So I accept that utility functions are useful within a very limited scope, but I don't want them to play a dominant role in human life, because I think that would cause us to lose sight of the raw value of habitual, enjoyable experiences, which are necessarily individualized and nuanced in ways that I think will always, at least in some way, defy quantification.
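
For readers unfamiliar with the term, a "utility function" here is just a mapping from actions or outcomes to numbers that an agent then maximizes. A minimal sketch, with made-up activities and goal weights (this example is mine, not the author's):

```python
# A toy utility function: score each activity by how much it advances
# a set of weighted goals. Weights and activities are invented.

goal_weights = {"career": 0.5, "health": 0.3, "leisure": 0.2}

activities = {
    # activity: contribution of that activity toward each goal (0..1)
    "study":    {"career": 0.9, "health": 0.0, "leisure": 0.1},
    "exercise": {"career": 0.1, "health": 0.9, "leisure": 0.3},
    "gaming":   {"career": 0.0, "health": 0.0, "leisure": 0.9},
}

def utility(activity: str) -> float:
    """Weighted sum of an activity's contributions to the agent's goals."""
    return sum(goal_weights[g] * activities[activity].get(g, 0.0)
               for g in goal_weights)

# An agent maximizing this function would rank activities like so:
for name in sorted(activities, key=utility, reverse=True):
    print(f"{name}: {utility(name):.2f}")
```

This is exactly the kind of flattening the reviewer objects to when it is applied to whole lives rather than to narrow decision problems.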

I don't want my criticisms to indicate that I think this is a bad or poorly conceived book. In the final analysis, I have to applaud Muehlhauser's slim text for convincing me thoroughly that we are not ready to give birth to AI with any assurance that it won't wipe us out with terrifying expedience. I found Muehlhauser's arguments concerning the fragility of human value systems and the difficulty of programming AIs to share human values extremely convincing and more than a bit distressing. The fact that our ability to create powerful AIs is vastly outstripping our ability to control their influence or render human life "necessary" rather than simply exploitable in the eyes of technological superintelligence is probably the second most important problem humanity faces in the 21st century, next to the threat of climate change (Muehlhauser often qualifies his arguments with the phrase "if technological progress is allowed to continue," which I interpret as an implicit acknowledgement of the possibility that humans might simply wipe ourselves out before strong AI comes into existence).

This is a fun, easy, and informative read. Highly recommended for anyone just getting interested in these topics, but probably not for folks already steeped in futurist thought.
Harsh Pareek · 27 reviews, 43 followers · April 22, 2013
Excellent introductory book to read if you haven't seen anything from Less Wrong. The book generously links to the original articles if you want to read more. In fact, a third of the book is just quotes and paragraphs from other articles and posts.

This is a review book and does not introduce any new ideas -- it does however try to collate and clarify many existing ideas. Why is an "Intelligence Explosion" likely? What do we even mean by "intelligence"? Are there limits to human intelligence? Are these fundamental? What is a true AI? Is it possible to ensure it shares our values? Do we even want it to? Many of these questions do not yet have complete answers, but Luke thoroughly defines and investigates the problem space.

Criticisms: In the chapter on "Playing Taboo with intelligence", intelligence is defined as efficient cross-domain optimization. However, this notion of intelligence is orthogonal to several other existing notions, a fact which deserves mention. There are several "classic" notions of intelligence, such as those behind the Turing Test and the Chinese Room thought experiment. Some coverage of these and their limitations would be good.
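
One way to make "efficient cross-domain optimization" concrete is to measure optimization power in bits, loosely in the spirit of Yudkowsky's framing; this sketch is my illustration, not taken from the book, and the outcome values are invented:

```python
import math

# Toy "optimization power": how small a high-value target does a process
# hit within the space of possible outcomes? Measured in bits.

def optimization_power(outcome_values, achieved_value):
    """-log2 of the fraction of outcomes at least as good as the one achieved."""
    at_least_as_good = sum(1 for v in outcome_values if v >= achieved_value)
    return -math.log2(at_least_as_good / len(outcome_values))

# 1024 equally likely outcomes with values 0..1023: hitting the best one
# is 10 bits of optimization; hitting the median is about 1 bit.
outcomes = list(range(1024))
print(optimization_power(outcomes, 1023))  # 10.0
print(optimization_power(outcomes, 512))   # 1.0
```

A behavioral test like the Turing Test asks a different question entirely, which is the orthogonality the reviewer is pointing at.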
Mark Moon · 160 reviews, 132 followers · July 20, 2016
This book does an excellent job of explaining the importance of the AI issue to laypeople. It doesn't leave out anything I think it needs, and it doesn't include anything I think it should have left out.

All in all, this is my new weapon of choice for convincing people that 1) AI is coming and 2) it can either be the best thing ever or the worst thing ever.
Gabriel · 113 reviews, 10 followers · February 23, 2014
This is an important subject treated (very) quickly and lightly here. The informal style tends to undermine the severity of the topic, namely that artificial intelligence has a very real possibility of being the next doomsday invention, one that could well kill us all. Instead, the tone gives you more the sense that you're talking to a bright, slightly edgy person at a very interesting party.

All that said, this book is brief and snappy and absolutely worth reading. Some will get distracted by the author's discussion of religion, but try not to be. Everyone is free to maintain their religious preferences; the point he is trying to make is that we cannot count on God to save us if artificial intelligence (AI) goes wrong. This book is really less about AI in terms of how it might come about or what it might do, and more about coming to terms with the fact that should AI come into being, it will be a truly significant event, and one that we will need to prepare for. The only way to take those preparations seriously is to take the event seriously. This means understanding that there is no easy out from the hand of God (something you can accept even if you are religious: after all, God has not stopped any atom bombs, so what man makes, man must pay for).

The author also spends time talking about thinking, what it is and what it means to have "intelligence", which some people won't see the point of or may find frustrating. Again, the point is to get you to take seriously the idea that an AI capable of changing everything about our world need not think exactly like us, and, in fact, we should expect that it won't.
23 reviews, 6 followers · November 16, 2013
This book is short: read it. It's a personal and fantastic exploration of machine superintelligence for the layman. The introduction is nice and human; it helps you feel the importance of the claims made here.

Let the ideas germinate and you'll achieve some greater degree of self-abnegation. If anything, the value you get per page is immense.

I took off one star because of the way some of the evidence cited can bias you, and this is a book largely about de-biasing. For example, "the computers that do science" was misleading to me: I read it with sheer incredulity but ended up imagining a concept of computers doing science that is entirely different from what is actually the case. So in this way the book is a great narrative, and it biases you the way narratives do. Sorry bro, it may be a catch-22 to write this "properly", but it's minus one star for you.

This book is short and very valuable: read it.
Niels Bergervoet · 175 reviews, 5 followers · January 15, 2017
The book does a really good job of explaining why AI and superhuman machine intelligence concern us all and are a subject to take seriously. The author gives a good overview of the dangers and opportunities of AI and why we should act on it. The book is well written and easy to read, even for a layman like me. His arguments are well constructed, and he extensively explains his methods and how he arrived at his viewpoint.
Timo Brønseth · 44 reviews · October 6, 2016
Good book on a very important topic. Short and easy to read. It uses the "outside view" to argue persuasively that an intelligence explosion is likely, and that the outcome of such an explosion depends on whether we're able to ensure that future intelligence carries our own values.

A bonus is that the first parts of the book are a great introduction to rationality, a necessity if one wants to introduce the absurd-sounding topic of AI risk to laypeople.
Viktor · 18 reviews, 4 followers · April 16, 2013
Even if you think "seed AI" or even artificial general intelligence is unlikely, Muehlhauser makes an excellent case for 1) why you might be mistaken, and 2) why, even if it is unlikely, it's definitely something that should be taken seriously given the possibility of both extremely good and extremely bad outcomes.
Boris Vidolov · 7 reviews, 1 follower · December 22, 2014
I expected a deeper book with breathtaking research. As I work in the software technology sector, all the points in the book are simply trivial and well known to me. The writing style and the exposition of ideas were also very simplistic, as if the target audience were seventh graders. If this is the first time you are hearing about artificial intelligence, you may still enjoy it.
Terra Bosart · 57 reviews, 3 followers · June 20, 2013
This is an excellent primer on the thought process one should engage in when considering the serious application of safety measures to artificial intelligence. The author is quite generous with references, which can lead the reader into more detailed reading about the generalities presented.
Pete · 31 reviews · September 25, 2013
A fun look at the current state of IT learning. Worth the time if any of this transhumanism stuff is interesting to you. It is always helpful not to get too sucked into the inevitability of this stuff.
Kristoffer · 69 reviews, 2 followers · December 22, 2013
At the very least, this book will provide you with pointers to explode your own intelligence, which will become necessary in facing the intelligence explosion, “the most important thing we can ever do”, and hence this is a most important book.
Katie · 17 reviews, 7 followers · March 2, 2014
Good overview, and a quick read. I find the link-heavy nature of it (in ebook form) distracting, but I may go back and make better use of that as a resource.
Sebastian Stabinger · 46 reviews, 1 follower · June 9, 2014
Probably one of the most intellectually rigorous books I have read in quite some time. Kudos!
56 reviews, 54 followers · June 23, 2018
I have recently been following the MIRI guys and am quite fascinated by their work.

Coming back to the book: for someone with a real interest in digging into the potential as well as the risks of artificial intelligence, this book comes across as a major disappointment. The ideas regarding the risks of superintelligent AI are good but supported with hardly any analysis.

The first few chapters about human thinking, rationality, and their relevance to the creation of superhuman AI are quite interesting reads. Luke's take on our flawed thought and decision processes, driven more by emotion than by logical thinking, is well demonstrated with relevant citations and examples. The idea that "we start with a conclusion and then look for evidence to support it, rather than starting with a hypothesis and looking for evidence that might confirm or disconfirm it" often applies to how we approach AI as well.

The book touches on diverse topics such as human thinking, rationality, AI, philosophy, and religion, and is good on the unseen capabilities of AI that are widely believed to be infeasible. However, the main idea of the book, that we are spending more on increasing AI capabilities than on AI safety research, is only discussed in one or two chapters without any thorough treatment. I also believe that rather than adopting an attitude that instills fear of AI's increasing potential without sufficient reasoning and examples, a more optimistic but cautionary attitude should have been taken.

The content of the book does not do justice to its own title and introduction, which promise a treatment of the intelligence explosion. Overall, the book is non-linear and lacks any profound depth.
