
Global Catastrophic Risks

A global catastrophic risk is one with the potential to wreak death and destruction on a global scale. In human history, wars and plagues have done so on more than one occasion, and misguided ideologies and totalitarian regimes have darkened an entire era or a region. Advances in technology are adding dangers of a new kind. It could happen again.

In Global Catastrophic Risks, 25 leading experts look at the gravest risks facing humanity in the 21st century, including asteroid impacts, gamma-ray bursts, Earth-based natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues: policy responses and methods for predicting and managing catastrophes.

This is invaluable reading for anyone interested in the big issues of our time; for students focusing on science, society, technology, and public policy; and for academics, policy-makers, and professionals working in these acutely important fields.

578 pages, Hardcover

First published July 3, 2008


About the author

Nick Bostrom

Nick Bostrom is Professor at Oxford University, where he is the founding Director of the Future of Humanity Institute. He also directs the Strategic Artificial Intelligence Research Center. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014), a New York Times bestseller.

Bostrom holds bachelor's degrees in artificial intelligence, philosophy, mathematics, and logic, followed by master's degrees in philosophy, physics, and computational neuroscience. In 2000, he was awarded a PhD in Philosophy from the London School of Economics.
He is the recipient of a Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed on Foreign Policy's Top 100 Global Thinkers list twice, and he was included on Prospect magazine's World Thinkers list as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages, and there have been more than 100 translations and reprints of his works. During his time in London, Bostrom also did some turns on London's stand-up comedy circuit.

Nick is best known for his work on existential risk, the anthropic principle, human enhancement ethics, the simulation argument, artificial intelligence risks, the reversal test, and practical implications of consequentialism. The bestseller Superintelligence and FHI's work on AI have changed the global conversation on the future of machine intelligence, helping to stimulate the emergence of a new field of technical research on scalable AI control.

More: https://nickbostrom.com

Ratings & Reviews



Community Reviews

5 stars: 94 (34%)
4 stars: 96 (35%)
3 stars: 68 (24%)
2 stars: 10 (3%)
1 star: 5 (1%)
Displaying 1 - 28 of 28 reviews
Rhodes Hileman
October 25, 2014
Best collection of global risk analysis yet. Generally cogent detailed arguments on 21 separate risk types. For anyone concerned about the future, this volume is a must.
Rob
October 17, 2012
I wish I could give this 2 and a half stars. This is a strange one, as it is a series of articles about global catastrophe written in a very scholarly manner. There is no apocalypse porn to look at here, folks; please move on.
The various articles all come to the same conclusion: we are not really in danger of being wiped out. We (humanity, that is) could go backwards or even have a catastrophe or two, but we will not become extinct as long as there are at least 100 of us in bonkable condition and not too distant from one another.
We will survive.
Patrick
July 23, 2013
The exact opposite of light reading.


JDN 2456497 EDT 08:55.

A review of Global Catastrophic Risks by Nick Bostrom and Milan M. Cirkovic

Light reading generally requires three things: Short, easy to read, and on a light-hearted subject. This book is none of those things: it consists of over 500 pages of scientific essays on the end of the world.
Setting the tone is the first chapter by astrophysicist Fred Adams about the inevitable death and decay of the universe. The basic message is that in 100 billion years, we will be dead, so get used to it.
The rest of the book is downhill from there; topics include supervolcanoes, gamma-ray bursts, climate change, pandemics, evil AI, physics experiments that destroy the Earth, nuclear war, nuclear terrorism, bioterrorism, self-replicating nanobots, and totalitarianism. Cheery stuff, basically. Climate change is actually considered a relatively minor problem; after all, it's "only" estimated to kill about 30 million people.
The best essays are actually in Part I, about general cognitive biases and approaches toward risk. James J. Hughes wrote an excellent essay on apocalyptic and millennial ideologies; Eliezer Yudkowsky's essay on cognitive biases affecting risk judgment is brilliant. Milan M. Cirkovic's essay on anthropic biases is particularly chilling: we may think that certain events are unlikely simply because, had they happened, we wouldn't be here. The only essays in Part I that aren't very interesting are Yacov Y. Haimes' essay on "systems-based risk analysis" (which is mostly obvious common sense restated in technical jargon, e.g. "The myriad economic, organizational, and institutional sectors, among others, that characterize countries in the developed world can be viewed as a complex large-scale system of systems."), and Peter Taylor's essay "catastrophes and insurance" (which basically just talks about how insurance only works if your economy hasn't collapsed).
The essays on specific threats honestly aren't that compelling; they don't present a unified narrative or give a good sense of what risks we should be most worried about or most focused on preventing. The net effect is sort of a list of ways we could die, without a clear sense of how we should be trying to protect ourselves. The book presents itself as trying to save humanity, but ends up feeling more like a pessimist's anxiety dreams.
Swarm Feral
April 6, 2019
I read this book chopped and screwed: first through PDFs of chapters out of order, then the rest of the book that I had missed. It is a collection of works dealing with collapse. I will here irresponsibly speak of the book as a whole, maybe coming back to touch it up and deal with the pieces individually, but maybe not.

What stands out to me is the attention paid to thinking through AI and viral collapse, and to thinking through these ideas generally. I was at first overly annoyed by the clear "scifi" worldview of the writers. But a wise person pointed out to me that these nerds are the only people who are going to think through these things and do the work to get to the risk factors and so on.

I am less interested in risks on the astronomical level, such as being wiped out by a pulsar or a black hole randomly appearing, let alone in going to great lengths to mitigate such risks, as those measures seem impotent and thus worthless. The means of mitigation also seem undesirable.

I am skeptical of the idea that the scientific community or government will get their shit together enough to do something for the common good. They can't even address the reproducibility crisis that affects most scientific studies today.

The risk of totalitarianism is also presented: a totalitarianism that might indeed arise from trying to have a world government that knows what's best for its citizens and operates with a catastrophe-prevention level of authority.

Transhumanists and adjacent nerds who talk about existential risks and downplay climate change don't pay enough heed to the fact that climate change's undesirable effects are on the very near horizon. I would, however, note that people seem to get stuck on the climate issue, and it has been greatly politicized and recuperated.

This book helped me to think of neat scifi stories, but not only that: it helped me think about how I might approach risks they didn't even mention.

What comes of thinking through risk in an "on demand" economy where economic collapse can happen in days?

What comes of thinking through living in a food-system where water sources for the majority of the food supply are drying up or soil depletion is widespread?

What comes of thinking through the storage of and creation of pandemic-capable viruses on a planet with pandemic-friendly infrastructure under the care of scientists who consistently fail their safety inspections?

What comes of thinking through the fact that self-learning AI are already on the internet and what they are doing has already gone beyond the comprehension of their makers?

Even writing these prompts, the work put into the writings in this book is painfully evident. Maybe one day I'll sit down and write out my thinking on all this, but for now I'll settle for this.

Sometimes I think about human extinction like I do my own death: a sad inevitability. But why is death sad? An end to a desirable experience? Suffering? I would say that perhaps sometimes death is preferable to some existences. At what point does this become true of a civilization or group of beings? I certainly would cling to life. I know many do, even in horrid conditions.

I would also like to trouble the conflation of "humanity" with civilization or even identifying the march of progress as "our project" even if it becomes a "post-human" one. I would say that I am on the side of the earth generally. I would say that I am on the side of life generally. I would even say I am on the side of humans generally. And thinking it through I would like to trouble the centering of "humanity" as a genetic species. I am more interested in "what bodies can do" than their species. But this seems to border on the tangential so I'll not go on.

The meta-element of this book was also interesting to me. The discussion of how we do our threat and risk assessment as well as our biases was indeed critically thought out and has definitely been incorporated into my thinking.
Jeffrey
November 3, 2018
I was hoping for more discussion of correlated risks and the trade-offs faced by public policy when dealing with multi-dimensional, existential risks. The editors lay out exactly why this attention to multi-dimensionality is important:


[T]here are also pragmatic reasons for addressing global catastrophic risks as a single field. Attention is scarce. Mitigation is costly. To decide how to allocate effort and resources, we must make comparative judgements. If we treat risks singly, and never as part of an overall threat profile, we may become unduly fixated on the one or two dangers that happen to have captured the public or expert imagination of the day, while neglecting other risks that are more severe or more amenable to mitigation. (pg. 2)


But, after this quote, the point is largely put aside. Much of the text is taken up with examples of different risks, many of them non-catastrophic in the existential sense. The chapter by Posner came closest to the discussion I wanted, so I am reading his book on catastrophes next.
Conor McCammon
May 3, 2022
A thorough academic examination of catastrophic risk, which varies wildly in writing quality.

I wish I knew whether this book was intended to be exhaustive of GCRs to the extent of our knowledge, or just a review of the most striking ones. Regardless, it is an important and thoughtful book, if overly technical in some areas.

The best chapters in my opinion:
Hazards From Comets and Asteroids
Plagues and Pandemics: Past, Present and Future (particularly prophetic)
Artificial Intelligence as a Positive and Negative Factor in Global Risk
Catastrophe, Social Collapse, and Human Extinction
The Continuing Threat of Nuclear War
Catastrophic Nuclear Terrorism: a Preventable Peril
Nanotechnology as Global Catastrophic Risk
The Totalitarian Threat

7/10
Aljoša Toplak
December 21, 2020
I wish more people would read this book: its aim is to create a body of people with the basic tools for evaluating existential risks to humanity. The more experts shift their attention to the study of keeping humanity safe, the more optimistic I can feel about the future.

However, I must add that it doesn't read lightly at all and sometimes feels like a flood of information. Hopefully, someone will write a more approachable version of this book someday.
Alexandru Tudorica
July 11, 2018
The joint conclusion of the collection of essays is that human civilization faces a 19% risk of extinction by 2100.

The opportunity cost of delayed action against existential threats is nearly incomprehensible to the human mind: each SECOND of delayed near-space colonisation denies the future existence of 10^14 humans (as a lower, conservative bound). Check this out: nickbostrom.com/astronomical/waste.html, as well as counterarguments to the exponential discounting here: http://wilsonweb.physics.harvard.edu/.... Even infinitesimal action can fundamentally alter large swathes of the future, so being aware of what counts most feels imperative now.

I'm inclined to consider that superintelligent AI and molecular nanotechnology are among the most pressing and likely developments that could endanger the survival of mankind - and we should definitely start planning to get it right the first time. We'll get only one chance.

One of the most effective solutions for many of these existential risks is space colonization, since the resilience of a species greatly and qualitatively increases with the amount of space occupied and the number of distinct environments conquered (other planets, the Moon, asteroids, empty space, etc.). A small selection of example extinction-level events that become merely regional catastrophes if mitigated by space colonisation:
- a large asteroid that cannot be deflected and ends life on Earth
- pandemics
- global nuclear war
- climate change
- environmental disasters
- resource depletion
- astrophysical events such as GRBs
- supervolcanism

I am always surprised when even very highly educated people argue against space exploration with lines such as "children are starving / we have to build solar panels first / global inequality must be addressed first / etc., therefore it is unethical to spend money on fundamental research and space colonization". Undoubtedly, less critical threats related to global governance systems shouldn't be swept under the rug, since addressing them will lower overall existential risk anyway, but preparing for existential threats must definitely be prioritized.

Carter
September 5, 2021
This work is decidedly poor. The main bulk of the material is toward the end of the book, and for many things in the book no calculations or trends exist. Much of the material is not even relevant, on some deep level. Bostrom's work is a mix of philosophical considerations and risk assessments. The reason this works is the nature of the subject: AI. Consciousness and the mind are poorly understood. I suspect some of the authors haven't brought together the relevant subjects and elements in a lot of the pieces in this book.
Laurent Franckx
February 22, 2017
The main problem with the book is simply that the contributions are of very unequal quality. Some are dry technical compilations of facts (or speculations) in a field, completely inaccessible to a non-specialist. Others take a broader view and ask deep questions. I particularly liked the chapters on artificial intelligence and totalitarianism; I wish these had been longer.
Behrooz Parhami
April 29, 2025
A global catastrophic risk is one with the potential to wreak death and destruction on a global scale. Of late, we have been preoccupied with the dangers of malicious, run-away AI pushing humans aside and taking control of our world, nearly forgetting about all the other extinction-level dangers we face. In human history, wars and plagues have caused havoc on multiple occasions, and misguided ideologies and totalitarian regimes have darkened an entire era or a region. Advances in technology are adding dangers of a new kind. It could happen again.

In this edited collection, 25 leading experts look at the gravest risks facing humanity in the 21st century, including asteroid impacts, gamma-ray bursts, Earth-based natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues: policy responses and methods for predicting and managing catastrophes. The part and chapter titles are indicative of the scope of this must-read collection for everyone, from students of science, society, and technology to academics, professionals, and policy-makers in these fields.

- Introduction (Why? Taxonomy and organization. Summary of the four parts)

Part I: Background

- Long-term astrophysical processes (Fate of the Earth. Life and information processing)

- Evolution theory and the future of humanity (Changes in human evolution and culture)

- Millennial tendencies in response to apocalyptic threats (Techno-millennialism)

- Cognitive biases potentially affecting judgement of global risks (Black swans)

- Observation selection effects and global catastrophic risks (Fermi’s paradox)

- Systems-based risk analysis (Defining risk. Risk of extreme and catastrophic events)

- Catastrophes and insurance (Catastrophe loss models. Extreme value statistics)

Part II: Risks from Nature

- Super-volcanism and other geophysical processes of catastrophic import (Super-eruptions)

- Hazards from comets and asteroids (Near-earth object searches. The effects of impact)

- Influence of supernovae, gamma-rays, solar flares, & cosmic rays on the terrestrial environment

Part III: Risks from Unintended Consequences

- Climate change and global risk (Climate risk and mitigation policy)

- Plagues and pandemics: Past, present, and future (Plagues of historical note. Man-made viruses)

- Artificial intelligence as a positive and negative factor in global risk (Threats and promises)

- Big troubles, imagined and real (Accelerator disasters. Runaway technologies)

- Catastrophe, social collapse, and human extinction (Distribution of disaster. Existential disasters)

Part IV: Risks from Hostile Acts

- The continuing threat of nuclear war (Calculating Armageddon. Current nuclear balance)

- Catastrophic nuclear terrorism: A preventable peril (Demand & supply sides of nuclear terrorism)

- Biotechnology and biosecurity (Why biological weapons are distinct from other WMDs)

- Nanotechnology as global catastrophic risk (Molecular manufacturing. Nano-built weaponry)

- The totalitarian threat (Stable totalitarianism. Totalitarian risk management)
Mark Huisjes
March 6, 2024
This book is rather dense. But I suppose you can't not be thorough when you are trying to put a number on the literal end of humanity. Personally, having read all the arguments presented, I think this book is too pessimistic at times, especially when it comes to the danger of AI misalignment. Almost all AI foom scenarios rely on nigh-magical undiscovered technologies at some point, and I don't see why an AI would have any advantage over humans in discovering any of those. Just thinking fast won't help you at all when trying to discover a blue diode, let alone Von Neumann-style nanotech. You'll need a vast industrial base and endless experimentation, which disembodied AI will be at a distinct disadvantage for.

On the other hand, the risks of global climate change may in fact be understated, because this book tries to assess only those risks which are truly catastrophic (as in actually-going-extinct catastrophic). The book does not, for example, assess the danger of the global atmospheric circulation system, with its three cells per hemisphere (Hadley, Ferrel, and polar), reverting to its historic one cell if the Earth warms more than 3-4 degrees Celsius. That particular configuration is more stable at higher atmospheric temperatures and would so drastically alter the locations of all climate zones that we might as well just start over with mapping them. Luckily we are still nowhere near that kind of heating!
Anthony O'Connor
July 23, 2023
A lot of techno-jargon and academic hand-wringing about methodology and terminology, but a bit short on real content. A collection of essays published in 2008. There's a lot of focus on astrophysical risks, asteroids and the like. Easy enough to write about, and even quantifiable, but these don't seem to be the real pressing issues right now. Very tame on climate change, pandemics, and the emergence of strong AI. In 2023 these definitely are extremely pressing, with the world falling apart, tens of millions dead from a recent global pandemic that is still lingering, a proxy war between superpowers buzzing away in Europe, and smart AI popping up everywhere, disrupting everything. The last paper is interesting: the increasing emergence of totalitarianism, as opposed to already widespread authoritarianism, as a risk in itself and as something exacerbated by the others. How close is it gonna be? Ahhh, fifty-fifty at this stage.
Marcel
October 3, 2024
Generally speaking, a great collection of essays on global risk.
I guess, in hindsight, one has to read them with a bit of caution, as they all come out of the existential risk and effective altruism movement, whose intentions have become less trustworthy recently...

https://forum.effectivealtruism.org/p...
https://www.politico.com/news/2023/12...
https://news.ycombinator.com/item?id=...

Nevertheless, a good read on various perspectives on x-risk, and a collection that doesn't have to be read in total but rather as and where relevant...
February 6, 2024
Interesting book, but I still feel it is somewhat obsolete and has an agenda, a globalist agenda, so the texts do read a bit like propaganda a lot of the time. Still, it is a good read; though a lot of the information is not new, some details were fundamental to me. Surprisingly enough, the text by Frank Wilczek is the first analysis of the Fermi paradox to state, most correctly, that advanced civilizations would most likely know how to disguise themselves, something so obvious that people do not get it.
Yates Buckley
May 21, 2018
A very good collection of pieces about global catastrophic risks from different authors and perspectives. Not all the writing is consistent or of the same high quality, but the text really does cover in breadth most of the issues in this area.
Brad
February 10, 2021
I'm not sure what I was expecting here, but this reads like a white paper developed by some "think tank": lots and lots of jargon, formulas, and logarithms. It sure doesn't seem meant for the general public.
Doug
December 1, 2025
I guess after 13 years I'll get this off my Currently Reading list. I've been flipping through it and referring to it over the past decade. An excellent collection, even if not the most uplifting.
Antonio Vena
February 10, 2018
Some articles are interesting, others very interesting, and still others not at all.
It is worth every cent spent for the bibliographies alone.
Excellent for consultation and research.
June 16, 2023
This is the opposite of light reading but absolutely fascinating.
Reed Caron
December 30, 2017
Highlights for me were the chapters on social collapse, nuclear terrorism, and the totalitarian threat.
Chapters can be read independently of each other.
Greg
March 16, 2017
A lot more academic than I was expecting, though in retrospect I don't really know what I was expecting. Several of the essays were fascinating, and some were not. Definitely recommended if the subject matter interests you.
John
December 20, 2012
There is a lot to like here, and this book probably should be on the shelf of everyone who works on the governance of large-scale risks. However, its biggest contribution is that its gap reveals where to go next in integrated risk governance: actual integration of methodology and domain-specific concerns in a common framework.

This book takes the same risk as "Unmaking the West: 'What-If?' Scenarios That Rewrite World History": picking a very broad subject, inviting as many people as possible who cover it from different angles to play, and watching what happens. As with Unmaking, the risk doesn't quite pay off, with at times wildly diverging styles and criteria, and at times repetition that makes one think it is a shame those authors didn't meet ahead of time to write an integrated essay. On the plus side, this volume is well curated, and it feels like the right group of people to start talking are now at the table; this is the going-around-the-table introduction. My advice is not to read it cover to cover. For me, reading the methodology section and skimming the rest would make sense.

The book starts with an introduction that does a fine job of framing it, and then looks at various methodological approaches; after this come a variety of essays about first natural and then human-caused risks. Disappointingly, the risk essays themselves seem to integrate very little of the methodology or analysis. It ends abruptly without a conclusion. However, the reader can clearly draw one: there's a lot of work to do to put this all together.

Aldwin Susantio
March 4, 2021
The book discusses many global catastrophes of the past (like diseases, super-eruptions, and nuclear weapons) and predicts potential global catastrophes of the future (the end of the solar system, artificial superintelligence).

This book is written by many experts and covers technical and non-technical issues regarding global catastrophes. Maybe that's the reason why this book is so scientific and not so communicative.

If you need a citation source for a research paper or book, this is a great book for you. But if you are just a casual reader looking to be enlightened, this book may be too boring and long, and thus not worth your time.
Jose Moa
October 20, 2015
An exhaustive examination of all classes of global risks to humankind and to nature, caused by humans themselves or by natural forces. The list is well analyzed and exhaustive: economic, technological, environmental, nuclear war, diseases, impacts with asteroids or comets, black holes, supernovae, solar flares, artificial intelligence, nanotechnology, biotechnology, nuclear terrorism, totalitarianism, social collapse, and so on. It examines short- and long-term risks.
