From Nineteen Eighty-Four to Black Mirror, we are all familiar with the tropes of dystopian science fiction. But what if worst-case scenarios could actually become reality? And what if we could do something now to put the world on a better path?
In Avoiding the Worst, Tobias Baumann lays out the concept of risks of future suffering (s-risks). With a focus on s-risks that are both realistic and avoidable, he argues that we have strong reasons to consider their reduction a top priority. Finally, he turns to the question of what we can do to help steer the world away from s-risks and towards a brighter future.
“One of the most important, original, and disturbing books I have read. Tobias Baumann provides a comprehensive introduction to the field of s-risk reduction. Most importantly, he outlines sensible steps towards preventing future atrocities. Highly recommended.” — David Pearce, author of The Hedonistic Imperative and Can Biotechnology Abolish Suffering?
“This book is a groundbreaking contribution on a topic that has been severely neglected to date. Tobias Baumann presents a powerful case for averting worst-case scenarios that could involve vast amounts of suffering. A much needed read for our time.” — Oscar Horta, co-founder of Animal Ethics and author of Making a Stand for Animals
When one truly takes the time to reflect on the nature of extreme suffering, and then places that reflection in the context of s-risks and what they could entail, few things compare in importance to this short book. Baumann's didactic yet approachable book will hopefully serve as a go-to recommendation for anyone broaching the subject, and it is no doubt itself a crucial step towards averting the potential futures that we cannot afford to ignore.
The author motivates a focus on suffering-based ethics, assuring us that we are still at an early stage in tackling the enormous suffering risks of the present and future. He argues that when we ignore the interests of the majority of future beings (humans, AI minds, and farmed and wild animals), we are discriminating against them.
Yes, even AI poses potential harms, not least if it were programmed to be malevolent or ran continuous simulations of suffering individuals. Tobias takes pains to remind us that the majority of sentient beings are wild animals. This lends great weight to the question of humans moving to other planets and presumably spreading life around the universe: it could constitute a moral wrong to multiply the opportunities for harm.
Of course, it is far more sensible to focus on the reduction of suffering than on the utilitarian strain that prizes creating new lives and happy agents as its goal. This introduction to the topic of s-risks shows how suffering is ignored and downplayed by our various biases. Among his solutions for a better future, Tobias suggests parliaments over presidents, proportional representation over first-past-the-post, and representation for all sentient life.
A brief, no-frills introduction to a suffering-focused (and broadly longtermist) worldview. Lacking in some concrete details because of the length, but just very resoundingly sensible in what it does say — commendable given the gravity of the subject matter.
(Potential conflict of interest: I work for the same nonprofit org as the author.)
This book is IMO the best existing introduction to the concept of suffering risks or “s-risks” for most people. The book strikes a good balance between accessibility, brevity, and nuance.
The professional narration by Adrian Nelson (host of the Waking Cosmos podcast) felt perfectly pleasant and thoughtful for the book, which discusses a lot of ideas that might otherwise be distressing to contemplate for some readers.
Both the ebook and audiobook versions are surprisingly short given how much ground they cover; IIRC, it took me around 3 hours to read the ebook, and just under 2 hours to listen to the narration at a natural 1.5× speed. This may be the most important book you can digest in such a short time.
Highly worthwhile to read it for details, but also pleasant to just listen to. :)
Contents
Baumann T (2022) (02:40) Avoiding the Worst - How to Prevent a Moral Catastrophe
Introduction
Part I: What are s-risks?
01. Technology and astronomical stakes • The concept of s-risks • S-risks, dystopia, and x-risks • S-risks, artificial intelligence, and artificial sentience
02. Types of s-risks • Incidental s-risks • Agential s-risks • Natural s-risks • Other classifications
Part II: Should we focus on s-risks? • Introduction
03. Should we focus on the long-term future? • The future could be vast • Influencing the long-term future is difficult • Are we in a good position for long-term influence? • Finding a middle ground
04. Should we focus on reducing suffering? • Suffering-focused ethics • S-risks are not extremely unlikely • A focus on suffering is at least a plausible perspective
05. Should we focus on worst-case outcomes? • Is future suffering heavy-tailed? • Reasons to avoid a narrow focus on just a few scenarios
Part III: How can we best reduce s-risks? • Introduction
07. Risk factors for s-risks • Advanced technology and space colonisation • Lack of adequate s-risk prevention • Conflict and hostility • Malevolent actors • How risk factors interact
08. Moral advocacy • Expanding the moral circle • Risks of moral circle expansion • Robust forms of moral circle expansion • Promoting concern for suffering • Conclusion
09. Better politics • The two-step ideal • Overcoming our tribal nature • Institutional reform • • Parliamentarism • • Voting reform • • Advancing and safeguarding democracy • • Political representation of all sentient beings • Is politics too crowded? • Conclusion
10. Emerging technologies • Should we focus on shaping artificial intelligence? • Technical measures to reduce s-risks from AI • Governance of AI • Space governance
11. Long-term impact • Capacity building • A movement to reduce s-risks • Research on how to best reduce s-risks • What you can do
A quick, concise book about s-risks (risks of astronomical suffering). Tightly written and not too heavy on the jargon; I finished reading in 1-2 hours. It doesn't go especially deep into any particular aspect of s-risks, but to me its most interesting point is this: even if we take the optimistic view that s-risks will continue to be mitigated by advances in our technology and society, that doesn't fully cover agential s-risks, where agents may intentionally create harm.
I liked this book. Give it a read if you have a chance.
This short book provides an introduction to a subject that is, in my opinion, very important: risks of extreme suffering. Though the field offers many very interesting reflections and arguments, almost all of them are cut out. This lack of depth renders the final product rather dry and unconvincing. I don't know of anything in this book that has not been said more eloquently elsewhere. I was also a bit annoyed that the figure on page 49 was described as red and green even though it was printed in grayscale.
A solid crash course on Longtermism and related subjects, one of the most complicated and important - not to mention farcically under-examined - subjects known to man.
Doesn’t introduce anything new, but that wasn’t the goal.
For further reading (apart from the works mentioned in the book), see Derek Parfit, Peter Singer, Nick Bostrom, Toby Ord, and William MacAskill.
This is a good introduction to risks of astronomical suffering (s-risks). I found myself more compelled to take them seriously after reading it.
In particular, the argument that intense suffering is way more morally urgent than a mere absence of happiness seems highly compelling. It was also useful to read examples of intense suffering in chapter 4 of Suffering-Focused Ethics: Defense and Implications and Tomasik's The Horror of Suffering, both of which I picked up due to this book.
It's probably good that the book is so short. I imagine this was a difficult tradeoff between conciseness and analysis for the author, but Baumann generally does a good job highlighting cruxy arguments, outlining reasons to support them, and mentioning resources to learn more about them. The book is written in the economical style of an author who has written or lectured about these views many times, and who has put real thought into the most serious objections and how to address them. As such, it's difficult to summarize: almost every chapter is just 3-5 pages long, and most paragraphs are distillations of (often several) essays.
This was a broad and concise overview of S-risks (risks of astronomical amounts of suffering) that gave me a better idea of the nature of these risks and what we can do to prevent them.
A couple of example takeaways:
- While the author lists specific risks we should pay attention to (e.g. simulated minds that can suffer, larger-scale factory farms), this field of study is in its infancy, and we should actively keep a lookout for new ones.
- Dark Triad traits were a recurring theme: we don't want powerful AI systems to exhibit these traits, and they may explain how some dictators rise to power.
While I appreciate the concision, I frequently found myself disappointed when the author didn't go into more detail; he deferred to the bibliography too often. For example, I would have liked to hear more about the philosophical arguments behind suffering-focused ethics towards the beginning of the book. One of the reasons I'm enjoying The Precipice so much is that Toby Ord isn't afraid to add appendices and notes about anything from utilitarianism to the Cold War. Avoiding the Worst, however, left me unsated. I'll have to read some more articles from the Center for Reducing Suffering and reach out to the author to ask some questions :)