There is a widely held conception that progress in science and technology is our salvation, and the more of it, the better. This, however, is an oversimplified and even dangerous attitude. While the future will certainly offer huge changes due to such progress, it is far from certain that all of these changes will be for the better. The unprecedented rate of technological development that the 20th century witnessed has made our lives today vastly different from those in 1900. No slowdown is in sight, and the 21st century will most likely see even more revolutionary changes than the 20th, due to advances in science, technology and medicine. Particular areas where extraordinary and perhaps disruptive advances can be expected include biotechnology, nanotechnology, and machine intelligence. We may also look forward to various ways to enhance human cognitive and other abilities using, e.g., pharmaceuticals, genetic engineering or machine-brain interfaces - perhaps to the extent of changing human nature beyond what we currently think of as human, and into a posthuman era. The potential benefits of all these technologies are enormous, but so are the risks, including the possibility of human extinction. This book is a passionate plea for doing our best to map the territories ahead of us, and for acting with foresight, so as to maximize our chances of reaping the benefits of the new technologies while avoiding the dangers.
Let's begin with the main take-away from this book: There could be a 50-50 chance of a catastrophic blow - or worse - to human civilization during the present century.
That's the conclusion of this reader, not the book's author. The author, however, does point out again and again that predicting future technology and its possible consequences is very difficult. (According to this reviewer's investigation, it's very unclear to whom the saying "prediction is hard, especially about the future" can be attributed.) In fact, in many cases attempted predictions of various probabilities can vary by several orders of magnitude. As a result, answers to questions about when something might happen range from "sometime in the future" to "never".
There is a very concrete example of how easy it is for even the "known unknowns" to bedevil prediction - let alone the "unknown unknowns". Quantum computing is a new technology that could be realized within a decade or less (or perhaps not for somewhat longer). "Everybody" is talking about this technology, and no less than IBM, Google, and Microsoft (as well as many others) imply they're almost ready to market it. In spite of that, the technology isn't mentioned even once in the book, which was published in 2016. (It's not in the index, and if there were a mention, it was hardly noticeable.)
There are other potentially important technologies, somewhat more speculative, that were also not mentioned, such as solar power satellites. So any predictions made without taking into account such "overlooked" technologies - or technologies not even imagined yet - can be very incorrect. The societal consequences that digital computers, microelectronics, or automobiles powered by internal combustion engines would have within only a few decades of their invention could initially have been imagined only with great difficulty. Only a few decades before those inventions, almost nobody saw them coming. (Unlike power from nuclear fusion, for example, which the book does mention and which still appears, as usual, to be several decades away.)
But let's focus just on quantum computing. Here are merely a few possible consequences of realized quantum computing. (1) Incredibly deadly new chemical weapons or biological pathogens could be designed atom by atom or nucleotide by nucleotide. (2) Almost all existing encryption codes could quickly be broken, which could make it possible to crash global financial systems or disrupt infrastructure systems or commandeer military systems of national governments. (3) Cryptocurrencies like Bitcoin could be "mined" so rapidly that their values collapse. (4) Artificial intelligences that far surpass human intelligence could bootstrap their abilities in just a matter of days or weeks. Any one of these developments could easily be employed by terrorists, malicious and misanthropic hackers, or rogue states to end civilization as we know it (such as it is).
To its credit, the book spends a great deal of time discussing the capabilities of artificial intelligences and (especially) what motivations or intentions they might have with regard to human civilization. In fact, the potential threats posed by general artificial intelligence are rated as among the top three most plausible threats, along with nuclear war and bioterrorism enabled by synthetic biology.
There's quite a lot more that could and should be said in a review of this book. Here's a little of that.
There are copious footnotes to the main text - 553 of them, in 249 pages. Readers who prefer to skip footnotes will regret that decision. There's also a long list of References at the back, including books, technical and academic papers, online articles, and blog posts. Some of this material could be hard to find, especially since some of it isn't in English.
Some of the topics the book addresses that haven't already been mentioned include climate change and geoengineering, nanotechnology and "grey goo", and space colonization (in our own galaxy, and perhaps much of the universe, as a way to avoid total extinction). The "Fermi paradox" - if there are technologically very advanced extraterrestrials, where are they? - is also considered. It is relevant, since no evidence of extraterrestrials has yet been found, which raises the question of whether there is a high probability that any very advanced civilization will self-destruct. Another topic covered is the possibility of significant human lifetime extension. Although that's not exactly a "risk", it entails some obvious problems.
The focus of the book is primarily on "existential" risks - the kind that could lead to the extinction of the human species as we know it. Other risks, such as those from climate change, however unpleasant they could be, are probably not of this type, at least within the next few centuries. Depending on how the existential risks are categorized, there may be only half a dozen. However, each type has a number of alternative manifestations. Even if most subtypes have low probability, the collective risk is approximately the sum of the probabilities (assuming they are small and essentially independent). And that doesn't even include risks that have yet to be imagined (the "unknowns").
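To see why summing the probabilities is only an approximation, note that for independent risks the exact chance that at least one materializes is one minus the product of the individual survival probabilities. A minimal sketch, using purely hypothetical per-risk numbers (not estimates from the book):

```python
# Illustrative only: six hypothetical risk-subtype probabilities,
# assumed small and mutually independent.
from math import prod

risks = [0.05, 0.02, 0.01, 0.03, 0.01, 0.02]

# Naive estimate: just add the probabilities.
naive_sum = sum(risks)

# Exact combined risk under independence:
# P(at least one occurs) = 1 - product of (1 - p_i).
combined = 1 - prod(1 - p for p in risks)

print(f"sum of probabilities:   {naive_sum:.4f}")
print(f"P(at least one occurs): {combined:.4f}")
```

The plain sum slightly overstates the combined risk (it double-counts scenarios where two or more risks occur), but for small probabilities the two figures are close, which is why the reviewer's shorthand is reasonable.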
There's less discussion on strategies for dealing with the risks than on describing the risks themselves. This is disconcerting but reasonable, since it's hard to predict what shape each risk might take. And we can't know whether technological developments we can't anticipate might also help contain the risks. For example, quantum computers may be able to model particular risk scenarios and estimate the effectiveness of strategies to counter the risks.
Advanced artificial intelligence itself (if under human control) should also be a very useful tool. It could, for instance, monitor global economic activity and political developments in order to detect patterns that indicate preparations by hostile actors to deploy highly destructive weapons. If hostile artificial intelligences ever appear, they could be opposed by friendly artificial intelligences. The book, unfortunately, doesn't go into such possibilities.
Mathematical arguments are used occasionally in the book, mostly to compute probabilities. The mathematics is fairly elementary, but some concentration is required to follow the logical argument. Some space is devoted to explaining Cantor's set theory, Turing machines, and the Church-Turing thesis. This is background material and isn't especially critical to the rest of the book.
Many pages in the book deal with diverse philosophical questions. These include: (1) When is it appropriate to use Popperian falsificationism vs. Bayesian logic in scientific studies? (2) Is human "uploading" to computers desirable, or even possible? (3) Is it "ethical" to conceal scientific research on account of its possible misuse or disruptive potential? (4) Is unlimited human lifespan extension desirable? (5) Is "human enhancement" by genetic engineering or technological augmentation compatible with "human dignity" (if such a thing exists)? (6) What formula or algorithm is appropriate to use in order to reckon the present value of future happiness or economic benefits? (7) How is it possible to decide which potential existential risks deserve prioritization among themselves or in comparison with present needs? (Think of the debate over how much effort should be spent to ameliorate climate change.)
The author is usually not dogmatic about which answers he leans towards on most questions. Multiple sides of most questions are fairly presented, and the author not infrequently seems reluctant to come down on one side or the other. This is appropriate, given the many uncertainties and unknowns relevant to the various questions.
Lurking behind such questions but not confronted in the book is the following question. Given that Homo sapiens as a species is bound to evolve over the centuries - by Darwinian processes, let alone technological innovations - how much can we apply present beliefs about human values to a species that will be increasingly different in the future? Given that there are so many possibilities for what a future world could be like, and how difficult it is to predict the future, how much right do current humans have to direct human evolution in one of many possible directions?
The main text is only 249 pages. So there is a vast amount of material covered in a pretty short space. Although much of the exposition is admirably clear, there are limits to how much can be explained of complicated topics to readers who haven't been exposed to them before. Such readers should be prepared to consult other sources (e.g., Wikipedia) if they want more background.
This remark of the author's may be the best summary of the book: "The future, in terms of technological and societal changes, has a tendency not to turn out as expected." (p. 97) This could be what mainly accounts for the situation the author laments in citing "Nick Bostrom's remark that 20 times as many academic publications are published on the topic of snowboarding, compared to those on human extinction." (p. 237)
This was a very interesting, engaging read! Olle does a great job of outlining his arguments, anticipating questions the reader might have and addressing them immediately in the text that follows.
Although the book does a great job at "map[ping] the territories ahead of us ... so as to maximize our chances of reaping the benefits of the new technologies while avoiding the dangers", I found missing some more extended (philosophical, theory-of-science) discussion of the strong opening words on the back cover: "There is a widely held conception that progress in science and technology is our salvation, and the more of it, the better. This, however, is an oversimplified and even dangerous attitude." This is the only reason I did not rate it 5/5.
I found that many of the chapters discussed scientific issues that interested me and my writing; however, some of the more complex issues I could not understand. Still, it was a concise compilation of great concepts and theories that were interesting to read about.
Great book on futurology, the Fermi paradox, science and especially existential risks.
The book is quite dense, a lot of topics and ideas are discussed, and as a statistician Häggström isn't shy about using ideas and methods from his main field of expertise. It is a bit of a struggle for an interested non-statistician like myself to get through, but quite rewarding when you put the effort in.
I do think it will struggle a bit to find an audience. While it is an introductory book in which a lot of concepts and problems are described with the assumption that the reader hasn't heard of them before, the discussions of the problems will often be hard for someone new to them to follow. I would describe the audience for the book as those who find Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence too shallow, but find reading or obtaining articles in scientific journals too hard. Not a huge potential audience. But I'm one of them. The ideas expressed in the book are not game-changing (like Bostrom's Superintelligence: Paths, Dangers, Strategies was), but Häggström certainly makes good points throughout the book.
If you have some background in statistics (or something similar), or a pretty good understanding of the field of existential risks, this is a great book worth picking up. If you are completely new to the field without a relevant background, there is probably a better book out there to start with.