Deep Utopia: Life and Meaning in a Solved World
Bostrom’s previous book, Superintelligence: Paths, Dangers, Strategies (OUP, 2014) sparked a global conversation on AI that continues to this day. That book, which became a surprise New York Times bestseller, focused on what might happen if AI development goes wrong.

But what if things go right? Suppose we develop superintelligence safely and ethically, and that we make good use of the almost magical powers this technology would unlock. We would transition into an era in which human labor becomes obsolete—a “post-instrumental” condition in which human efforts are not needed for any practical purpose. Furthermore, human nature itself becomes fully malleable.

The challenge we confront here is not technological but philosophical and spiritual. In such a “solved world”, what is the point of human existence? What gives meaning to life? What would we do and experience?

Deep Utopia—a work that is again decades ahead of its time—takes the listener who is able to follow on a journey into the heart of some of the profoundest questions before us, questions we didn’t even know to ask. It shows us a glimpse of a different kind of existence, which might be ours in the future.

Audible Audio

Published October 15, 2024


About the author

Nick Bostrom

25 books · 1,734 followers
Nick Bostrom is Professor at Oxford University, where he is the founding Director of the Future of Humanity Institute. He also directs the Strategic Artificial Intelligence Research Center. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014), a New York Times bestseller.

Bostrom holds bachelor's degrees in artificial intelligence, philosophy, mathematics, and logic, followed by master's degrees in philosophy, physics, and computational neuroscience. In 2000, he was awarded a PhD in philosophy from the London School of Economics.
He is a recipient of the Eugene R. Gannon Award (one person is selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed on Foreign Policy's Top 100 Global Thinkers list twice, and he was included on Prospect magazine's World Thinkers list as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages, and there have been more than 100 translations and reprints of his works. During his time in London, Bostrom also did some turns on the city's stand-up comedy circuit.

Nick is best known for his work on existential risk, the anthropic principle, human enhancement ethics, the simulation argument, artificial intelligence risks, the reversal test, and the practical implications of consequentialism. The bestseller Superintelligence and FHI's work on AI have changed the global conversation on the future of machine intelligence, helping to stimulate the emergence of a new field of technical research on scalable AI control.

More: https://nickbostrom.com

Community Reviews

5 stars: 194 (25%)
4 stars: 251 (33%)
3 stars: 199 (26%)
2 stars: 91 (12%)
1 star: 22 (2%)
Displaying 1 - 30 of 81 reviews
Brad Dunn
351 reviews · 21 followers
June 18, 2024
Bostrom, the patron saint of technological maturity (or maybe AI dystopia), has written a new book, this time about what will happen when AI takes over everyone's jobs (and I mean really takes them over) and no one has work. How will people live, derive meaning, and make sense of their lives? That is what this book is about.

Superintelligence (his other book) forms the basis of most of the zeitgeist of AI-alignment discussion these days; I read it maybe 10 years after it was published. What struck me was just how far Bostrom seems able to see into the future. When you read Deep Utopia, it seems fantastical and ridiculous, and yet, so too did Superintelligence when it came out.

Whereas Superintelligence had a kind of sci-fi quality to it, Deep Utopia has a more philosophical tone. Much of the book is about meaning in a very meta sense, and, somewhat like Gödel, Escher, Bach, it swaps between fiction and non-fiction across chapters, with the points Bostrom makes also articulated through humour and fiction. While the philosophical quality of the book can at times be a bit too meta to really enjoy, the truly incredible parts wash over you in waves, making the book well worth its complexity.

Case in point: the story of ThermoRex, a room heater that becomes sentient, through which, dear reader, we are to learn what kinds of choices one makes on a sentient room heater's behalf.

Is it worth reading? Yes. Why? Because, I suspect, 10 years from now much of the content of this book will matter a lot more than we think it does. Bostrom is, I would argue, one of the greatest minds alive today, offering points of view that genuinely question many of our held assumptions about the world. If nothing else, Deep Utopia is an insight into the inner thinking patterns of someone so well versed in things like logic that it is quite a marvel to be in this kind of company, even on the page.
23 reviews
May 12, 2024
I greatly respect Bostrom as a thinker - he's helped conceive some of the memes that have been most influential in my ethical and intellectual beliefs and, therefore, my life priorities. Given this, I had high expectations of this book, but I found it moderately disappointing. Below, I'll outline three things I liked and then mention the main reason why I found it disappointing.

Three things I liked:
1. Outlining the contours of our cosmic endowment:
He makes two back-of-the-envelope estimates (BOTECs) of the potential size of galaxy-spanning life: 10^43 human lives (assuming we stay in our biological form) and 10^58 (assuming we optimize for emulations).

2. The notion of technological maturity:
"A condition in which a set of capabilities exist that afford a level of control over nature that is close to the maximum that could be achieved in the fullness of time."

3. Meaning as Encompassing Transcendental Purpose:
A purpose P is the meaning of person S’s life if and only if: (i) P is encompassing for S; (ii) S has strong reason to embrace P; and (iii) the reason is derived from a context of justification that is external to S’s mundane existence.


Why I found it disappointing
The main reason for my disappointment is the incredibly messy structure of the book.
It's divided into six parts (Monday, Tuesday, ... Saturday) with an excessive number of fairly non-descriptive sub-chapters.

Hemen Kalita
160 reviews · 19 followers
April 10, 2024
A "Post-Utopian" book. Breezy read with interesting points scattered throughout, but at the same time unnecessarily long. It employs Socratic method, presenting dialogues between a teacher and his students across six chapters (from Monday to Saturday).

These discussions explore various utopian concepts, such as emotional utopia, governance utopia, cultural utopia, post-scarcity utopia, post-work utopia, technological maturity etc.
The book also explores the challenges of deep redundancy—how to attain interestingness, fulfillment, richness, purpose, and meaning in a post-utopian society.
James
111 reviews
July 20, 2024
The project of the book, as I understand it, is to map out which things humans might value that would be better achievable when civilization is not at technological maturity. Bostrom has some pretty convincing stuff to say - he basically just walks through some concerns one might have, and demonstrates that you would have to care quite a lot about some very particular and strict senses of value to not be better off at technological maturity. I think the book did a good job of examining a real worry I had about what happens if AGI goes well, but it's far from perfect. The book is pretty disorganized and rambly, and spends quite a lot of page count on weird parables that I didn't really get much out of. There are a few useful new ideas, but none that felt as powerful as the stuff in Superintelligence.

Notes:
• Technological maturity: roughly, "having all the technology". Bostrom's definition is based on the "telos of technology", which is to do more with less effort, so technological maturity is the maximization of that.
• Malthusian outcomes
○ (not a part of the main project of the book, just something to note on the way)
○ Requires coordination to not spend all money on maximum resource acquisition + reproduction
○ Not particularly special as coordination problems go, seems like basically just a negative-sum arms race.
○ Tech could make us much better at coordination, but there also exist techs that make coordination harder. This means it's hard to forecast coordination ability at maturity, and also that outcome might depend on tech sequencing. If we get good coordination tech first, we can coordinate to not develop the anti-coordination tech, but not vice versa.
• Prudential barriers
○ From tech state X, it's ex-ante not advisable to develop further tech Y, but tech Y is ex-post desirable
○ For many of these we can probably use states X', X'' etc to learn more about state Y and decide whether it's actually good or bad. But maybe some prudential barriers are actually very difficult or actually insurmountable
○ Example: a field manipulation experiment which seems to have unacceptably high risk of vacuum decay, whose result is required to reveal the physics that show it would never lead to vacuum decay and also unlocks curvature drives
○ Example: epistemic/memetic/self-modification style technologies
• Also, obviously some things we can never have: FTL, the value of BB(BB(10))
• Some things are also fundamentally scarce: positional goods
• Some other things are maybe scarcer at maturity than they are now?
○ Impact on the state of the world
○ "Purpose"
○ Novelty/discovery
• At technological maturity, we probably get "plasticity", the ability to easily shape the material world arbitrarily, and "auto-potency", the extension of plasticity to ourselves. This raises the question: is there any need for utopian citizens to be active?
• Bostrom proposes five reasons: hedonic valence, experience texture, autotelic activity, artificial purpose, social-cultural entanglement.
○ For example, my previous go-to picture of what to do in utopia is something like "play video games and build dyson spheres with my friends". But plasticity and autopotency create some problems with that picture.
§ What mental state (valence, experience texture) are you trying to achieve via this activity? Wouldn't it be more efficient to just produce that state directly via autopotency?
§ What material outcome are you trying to achieve via activity? Wouldn't it be more efficient to produce that directly via plasticity?
○ For this picture to work, it has to be pretty hard carried by "autotelic activity" and "sociocultural entanglement" - I have to be willing to say "yup I just want to hang out with my friends and for us to be doing something just for the sake of activity".
○ Which maybe I am willing to say, but pushing Sisyphus' boulder with my friends forever is a lot less intuitively appealing.
• Psychological boredom can be trivially removed via autopotency, but maybe there's some more external sense of "a boring world" that we would still want to avoid?
○ We could just warning-light-ify boredom, same with other drives that we do care about satisfying but which produce unnecessary negative valence when unsatisfied. Bostrom calls this a "prosthesis", I think "warning light" is more evocative.
○ There might be another reason not to do this: requiring "fitting valence response", separately from caring about objective boringness. Bostrom seems to mostly just address this by saying "well probably just avoiding objective boringness will take care of this". Seems right, unless you want to avoid Gettier cases.
• So, we can create whatever mental states/inclinations we like to create subjective interest in our utopian citizens. But what makes a world objectively interesting?
○ Variety? But this requires a classification of microstates into macrostates (to avoid just saying "ah yes there is one world"), and then at what scale do we do this classification? Presumably we care about multiple scales, which gets some weird results where a very interesting/diverse planet can lead to a very uninteresting universe if tiled. This also applies over time.
§ How valuable are duplicated lives?
○ Discovery?
§ The loose view: learning something new. This is satisfied by staring at quantum randomness or counting very high.
§ Various stricter requirements:
□ A discovery happening for the first time
□ A "useful" discovery, like new physics enabling new technologies that are valuable for other goals
□ A "significant" discovery, like a unification of major fields of math
□ A discovery made by aiming purely at discovery itself, not some scenic route designed to extract maximum value from the process of discovery
§ The maximally strict version rejects all autotelic activity and artificial purpose. It allows interesting discovery only when it's made for the first time, by a process aimed directly at some external goal. This truly would create a finite fuse of meaningfulness, and wouldn't even allow us to slow it down.
§ Fortunately, this maximally strict version seems quite implausible as a necessity for a valuable life. Many people never have moments that satisfy it, and still seem to us to obviously live worthwhile lives.
• Bostrom spends some time on the descriptive origins of the interestingness drive, but I don't think these are very relevant, except maybe as inputs to descriptive predictions about how CEV will shake out?
• Personal identity concerns:
○ maybe we preserve more personal identity if maturation is subjectively gradual, so we get some balancing act in determining the ideal speed of enhancement?
○ Maybe personal identity erodes over time by default? Probably there are some modifications which are worth doing earlier/faster than their severity would suggest, because they slow future erosion
• Other vague value-types
○ Fulfillment: some sort of deployment of potential. Prospects look good, we'll have access to tremendous powers and self-enhancement options.
○ Richness: a great diversity of events, entangled with agency. Prospects look good, this can be achieved with a very well-designed video game.
○ Purpose: some goal which is medium or large-scale, whose achievement would be good in some sense, and which is a good fit to your capabilities.
§ Artificial purpose: I, your friend, pre-commit to be dropped into a pool of lava unless you prove this theorem within this time limit without AI assistance
§ Sociocultural entanglement is one potential source of purpose because it wants material outcomes (love/friendship in the minds of others) but comes with a bunch of built-in rules, many of which are agent-relative.
• Question: are the most powerful possible problem-solving processes conscious, or not? If they are, then "plasticity" becomes a lot more complicated, because we have to give consideration to the AI systems we use to achieve it. If they are not, then conscious agents can only ever have artificial purpose.
○ Consciousness isn't the only question here: the full thing is "is it ever desirable for a human to modify themselves to become optimal at some participant-neutral task". Maybe it's not an achievable state without discarding something important like consciousness, maybe it's not a process that's ever worth enduring.
○ But maybe some people who are particularly devoted to this kind of thing might do it!
• The Exaltation of ThermoRex
○ Feels like an even stronger version of "CEV of a sheep" problem
• Should we "let our evaluative pupils dilate in utopia"? Like I kinda catch the intuition but surely wasted motion is still just wasted motion, right? The target here is something like "broader attunement to the things I in fact find valuable", not just weaker evaluative standards.
• Bostrom's objective/subjective spectrum isn't actually a spectrum, hard switch happens in 2nd to last step once a normative abstraction gets reified ("my best interest", "best for the world")
Ryan
1,381 reviews · 196 followers
November 27, 2024
This was such a huge disappointment. I loved Superintelligence and some of Bostrom's other writing, but this book was bad in basically every way. I finished it due to the author's reputation (and hoping it would get better), but it didn't. I'd read Superintelligence two or three times in a row before even skimming this.

Why was it bad? The obnoxious "Socratic dialogue" style of presentation, which basically expanded the text 4x with no added value, prevented skimming, and was in no way positive. The audiobook's pretentious narrator (a minor point). The lack of any novel ideas, coherent framework, or really anything good.

The only redeeming value in this book was "The Exaltation of ThermoRex": a space heater is bequeathed a vast fortune by its human owner, then raised to sentience by the foundation that administers that fortune. It was an interesting way to dramatize some of the ideas partially addressed earlier in the book (What gives life meaning? What if something is not as sophisticated as those around it?), but most of the story's worthwhile potential wasn't exploited.

Bad/stale ideas, presented badly. Bad enough that I won't read another Bostrom book without strong evidence it's more like Superintelligence than like this.
Daniel Hageman
368 reviews · 52 followers
December 20, 2024
"I'm saying little about suffering, here, because the theme of these lectures is 'utopia' not 'dystopia'. But for the avoidance of any doubt or any misinterpretation, let me again stress that when we make all-things-considered decisions about how to proceed into the future, the mitigation of suffering, especially extreme suffering, ought to be a criterion of the greatest, and possibly paramount, importance." - Friday - Part 1
benny b
81 reviews · 2 followers
May 10, 2025
Ingested this over the course of a difficult few months. A philosophical treatise according to the dust jacket, but it's hard to pin down what it really is. I think Bostrom understands that these posthuman concepts and dilemmas are much more digestible within a narrative structure, which is maybe why he structured the book in such a bizarre way: namely, interleaving four different narratives around the philosophical meat:

Layer 1: Bostrom is giving a lecture series at a university. This gives the material a Socratic flair, which I actually liked. I think it fits this type of exploratory thinking.
Layer 2: A group of pedantic non-human (?) entities attend the lecture series and discuss the homework assignments. I got the sense that these beings represented humans writ large, trying to decide their posthuman fates.
Layer 3: A long fable about a fox and a pig trying to thwart Malthus and establish utopia (they fail).
Layer 0: Bostrom is apparently somewhat at odds with the university in the book? Allegedly this stems from real life frictions.

It’s all very disjointed and fragmented. It’s also evident that each layer (especially 3) is dripping with symbolism, reference, maybe even wordplay. Almost all of it escapes me. Very raw and experimental, almost unfinished, with loose ends. I loved it; it felt like The Pale King.

The other unexpected aspect of the book is that it's not philosophy at all: it's a self-help book. Most of the futurist concepts have already been adequately explored through science fiction, which the author acknowledges and cites. Fixing these stars, the work concerns itself with rigorously examining and uncovering what it means to live a good, fulfilling life. What is a life worth living? What is a waste? How ought one make good use of one's divine inheritance? Am I? What am I missing?

There are many brilliant turns hidden among the drudgery. I truly believe that Bostrom could write brilliant fiction; maybe I'll harass him online to do so. But I want to leave you with one passage that came out of nowhere.

> You have been put in charge of an entire life’s worth of human conscious experience: your own. This human life is at the mercy of your dictatorial powers during every waking hour of its existence. What an absolutely fearsome responsibility you have! If I had to guess, I would say that the average adult maybe ought to be responsible for about one year of human life... But to be responsible for an entire human life—and some would think without even the possibility of a do-over at the end—well, that is just too much.

Intimidating? Yes. But I don't think the point is to despair. It's to highlight the unimaginable burden we've been given, and what an achievement it is that every one of us rises to the occasion.

Melville:
> But as in landlessness alone resides highest truth, shoreless, indefinite as God- so better is it to perish in that howling infinite, than be ingloriously dashed upon the lee, even if that were safety! For worm-like, then, oh! who would craven crawl to land! Terrors of the terrible! is all this agony so vain? Take heart, take heart, O Bulkington! Bear thee grimly, demigod! Up from the spray of thy ocean-perishing- straight up, leaps thy apotheosis!

Adam
271 reviews · 17 followers
February 4, 2025
Not his best. Some cool ideas, some I've thought about before, but among the bits of insight there is far too much repetitive dwelling on the same thoughts. The frame story wasn't interesting, and some of the thought experiments seemed pretty pointless. Sorry, I do not care about the little heater, and I'm still not sure what the point of that was.
131 reviews · 1 follower
March 23, 2025
Quite fascinating at times. But also dense and eventually a little repetitive.
Pocho
4 reviews · 3 followers
April 12, 2025
This book is an amazing exploration of life and purpose. Highly recommended. The only pieces I didn’t enjoy were the side stories.
Jacob Williams
620 reviews · 19 followers
September 7, 2024
…the value of one’s opinions, in a matter like this, is a function of how generously one has allowed the alternatives to play with one’s soul.[1]



People talk down the idea of living in a fool’s paradise. But when one considers the nature of humanity, might it not seem that such a destination would be very suitable and desirable for us? I mean: If we are fools, then a fool’s paradise would be exactly what we need.[2]



This is a meandering and playful book. I wasn’t really in the right mindset to read it, but I enjoyed some parts and there was thought-provoking stuff throughout.

1. Space heaters

My favorite part was, easily, “The Exaltation of ThermoRex”, a short story with the following premise:

Heißerhof, the country’s leading industrialist, had bequeathed his vast fortune to a foundation established for the purpose of benefiting a particular portable electric room heater. We will refer to this room heater by its brand name, “ThermoRex”. Heißerhof, who’d developed a reputation as being a bit of a misanthrope, had often been overheard saying that ThermoRex had done more for his welfare and comfort than any of his human companions ever had.[3]



2. Money

I like to think that increasing material prosperity will, because of the diminishing marginal value of money, eventually make people work less and enjoy more leisure time, as well as be more generous with others. Bostrom gives some reasons not to feel too sure of this, including:

Technological progress might create new ways of converting money into either quality or quantity of life, ways that don’t have the same steeply diminishing returns that we experience today.

For example, suppose there were a series of progressively more expensive medical treatments that each added some interval of healthy life-expectancy, or that made somebody smarter or more physically attractive. For one million dollars, you can live five extra years in perfect health; triple that, and you can add a further five healthy years. Spend a bit more, and make yourself immune to cancer, or get an intelligence enhancement for yourself or one of your children, or improve your looks from a seven to a ten. Under these conditions—which could plausibly be brought about by technological advances—there could remain strong incentives to continue to work long hours, even at very high levels of income.[4]



3. Malthus

Obstacles to achieving utopia are not the main focus of the book, but it does spend some time on them. This includes some interesting discussion of population growth and the risk that it will—in the very long run—push society into a state where everyone lives at subsistence level. Bostrom thinks that, notwithstanding short-term trends for rich people to have fewer children, such a Malthusian condition will some day be inevitable unless it is averted by global population control.

Even space colonization can produce at best a polynomial growth in land, assuming we are limited by the speed of light—whereas population growth can easily be exponential, making this an ultimately unwinnable race. Eventually the mouths to feed will outnumber the loaves of bread to put in them, unless we exit the competitive regime of unrestricted reproduction. (Please note that this is a point about long-term dynamics, not a recommendation for what one country or another should be doing at present—which is an entirely different question altogether.)[5]
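The quoted race can be made concrete with a toy model. This is my own sketch with made-up constants (`k`, `r`, and the cubic exponent are illustrative assumptions, not numbers from the book): any exponential growth rate, however modest, eventually overtakes any polynomial, however generous.

```python
# Toy model of the race quoted above: reachable resources grow at best
# polynomially (a light-speed frontier sweeps out volume ~ t^3), while an
# unconstrained population grows exponentially. All constants are made up.

def resources(t: int, k: float = 1000.0) -> float:
    return k * t**3             # polynomial: generous cubic growth

def population(t: int, p0: float = 1.0, r: float = 0.01) -> float:
    return p0 * (1 + r)**t      # exponential: modest 1% growth per step

# Find the first step at which population outruns resources.
t = 1
while population(t) <= resources(t):
    t += 1

print(f"population overtakes resources at step {t}")
```

However large `k` is made or however small `r` is, the loop always terminates, because (1 + r)^t / t^3 grows without bound, which is the sense in which the race is "ultimately unwinnable" absent coordination.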



Robin Hanson tried to make subsistence-level existence sound not-terrible in The Age of Em (review), but I still find the idea pretty depressing. Bostrom gave me an extra reason to be depressed about it by explaining how technological advancement might (possibly) even reduce the level of welfare that a subsistence-level existence entails:

…you could have a model of fluctuating fortune within a life, where an individual dies if at any point their fortune dips below a certain threshold. In such a model, an individual may need to have a high average level of fortune in order to be able to survive long enough to successfully reproduce. Most times in life would thus be times of relative plenty.

In this model, inventions that smooth out fortune within a life—such as granaries that make it possible to save the surpluses when times are good and use them in times of need—lead to lower average well-being (while increasing the size of the population). This could be one of the factors that made the lives of early farmers worse than the lives of their hunter-gatherer forebears, despite the advance in technology that agriculture represented.[6]



4. Deep redundancy

The book’s main focus is to consider what the implications would be if we someday reach the following state:

Technological maturity: A condition in which a set of capabilities exist that afford a level of control over nature that is close to the maximum that could be achieved in the fullness of time.[7]



This would include having superintelligent AGI and having extremely fine-grained control over the physical world, including our own bodies and minds.

What would we spend our time doing in such a world? Some people worry that no activity we could undertake would have any purpose any more. Bostrom distinguishes two versions of this:

The traditional and relatively superficial version of the purpose problem— let’s call it shallow redundancy—is that human occupational labor may become obsolete due to progress in automation, which, with the right economic policies, would inaugurate an age of abundance. …

The solution to shallow redundancy is to develop a leisure culture. Leisure culture would raise and educate people to thrive in unemployment. It would encourage rewarding interests and hobbies, and promote spirituality and the appreciation of the arts, literature, sports, nature, games, food, and conversation, and other domains…

The more fundamental version of the purpose problem…—let’s call it deep redundancy—is that much leisure activity is also at risk of losing its purpose. … It might even come to appear as though there would be no point in us doing anything—not working long hours for money, of course; but there would also be no point in putting effort into raising children, no point in going out shopping, no point in studying, no point in going to the gym or practicing the piano… et cetera.[8]



Why? Because technology could do it all better. Parenting: superintelligent parenting bots (presumably with human-like artificial bodies) could give children all the love and support they need, while being much better than we are at avoiding the infliction of any accidental psychological damage on the child. Shopping: AI models could know your preferences better than you do, and satisfy those preferences most effectively by making purchases without your involvement. Studying, gym, piano: machines and/or drugs could directly alter your mind and body to give you the skills without doing the work, and also to give you any desirable feelings that typically go along with doing the work.

((Aside:

Having recently read a book on the Halting Problem and related topics, I’m primed to push back on the idea that a model could perfectly predict your preferences. Maybe that’s possible—online advertising is sometimes pretty good at it already—but maybe we’ll hit a point where further increases in accuracy aren’t possible without basically running a full simulation of your brain, which might defeat the purpose.

Bostrom raises a related caveat when discussing brain editing:

In order to work out how to change the existing neural connectivity matrix to incorporate some new skill or knowledge, the superintelligent AI implementing the procedure might find it expedient to run simulations, to explore the consequences of different possible changes. Yet we may want the AI to steer clear of certain types of simulation because they would involve the generation of morally relevant mental entities, such as minds with preferences or conscious experiences. So the AI would have to devise the plan for exactly how to modify the subject’s brain without resorting to proscribed types of computations. It is unclear how much difficulty this requirement adds to the task.[9]


))

Against the specter of deep redundancy, Bostrom proposes “[a] five-ringed defense”[10] which I think I largely agree with. But the first ring, “hedonic valence”, is the one I’d put the most weight on: essentially, that even if we had no role in the future other than to passively enjoy the blessings prepared for us by our AI caretakers, that could in fact be a pretty amazing and wonderful future.

In addition to “purpose”, the book discusses related questions like whether we could find “fulfillment”, “richness”, and “meaning” in utopia. It’s basically baked into the idea of “technological maturity” that we could feel like we had all those things: we could (more or less) make ourselves feel any way we wanted to by directly inducing the right states in our brains. But you might worry that something is lost if those feelings are not grounded in objective reality—if your life merely felt meaningful instead of being meaningful. Much of the book is devoted to seeking objective sources of fulfillment/meaning/etc. that would not be undermined by the conditions of technological maturity. I respect the endeavor, but since I’m drawn to a hedonistic theory of value, it’s hard for me not to suspect that it’s a bit of a goose chase.

[1] Nick Bostrom, Deep utopia: life and meaning in a solved world (Washington, DC: Ideapress Publishing, 2024), 5.

[2] Ibid., 299.

[3] Ibid., 366.

[4] Ibid., 9–10.

[5] Ibid., 56.

[6] Ibid., 25–26.

[7] Ibid., 62.

[8] Ibid., 147–49.

[9] Ibid., 140.

[10] Ibid., 151.

(crosspost)
Alvaro Sánchez · 93 reviews · 8 followers
October 16, 2024
A very good book on a topic that was already running through my head. At times I think it is unnecessarily long, and it has a strange structure.
84 reviews · 74 followers
April 24, 2024
Bostrom's previous book, Superintelligence, triggered expressions of concern. In his latest work, he describes his hopes for the distant future, presumably to limit the risk that fear of AI will lead to a Butlerian Jihad-like scenario.

While Bostrom is relatively cautious about endorsing specific features of a utopia, he clearly expresses his dissatisfaction with the current state of the world. For instance, in a footnoted rant about preserving nature, he writes:

Imagine that some technologically advanced civilization arrived on Earth ... Imagine they said: "The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads ... What a tragedy if this rich natural diversity were replaced with a monoculture of healthy, happy, well-fed people living in peace and harmony." ... this would be appallingly callous.


The book begins as if addressing a broad audience, then drifts into philosophy that seems obscure, leading me to wonder if it's intended as a parody of aimless academic philosophy.

Future Technology

Bostrom focuses on technological rather than political forces that might enable utopia. He cites the example of people with congenital analgesia, who live without pain but often face health issues. This dilemma could be mitigated by designing a safer environment.

Bostrom emphasizes more ambitious options:
But another approach would be to create a mechanism that serves the same function as pain without being painful. Imagine an "exoskin": a layer of nanotech sensors so thin that we can't feel it or see it, but which monitors our skin surface for noxious stimuli. If we put our hand on a hot plate, ... the mechanism contracts our muscle fibers so as to make our hand withdraw


Mass Unemployment
As technology surpasses human abilities at most tasks, we may eventually face a post-work society. Hiring humans would become less appealing when robots can better understand tasks, work faster, and cost less than feeding a human worker.

That conclusion shouldn't be controversial given Bostrom's assumptions about technology. Unfortunately, those assumptions are highly controversial, at least among people who haven't paid close attention to trends in AI capabilities.

The stereotype of unemployment is that it's a sign of failure. But Bostrom points to neglected counterexamples, such as retirement and the absence of child labor. Reframing technological unemployment in this light makes it appear less disturbing. Just as someone in 1800 might have struggled to imagine the leisure enjoyed by children and retirees today, we may have difficulty envisioning a future of mass leisure.

If things go well, income from capital and land could provide luxury for all.

Bostrom notes that it's unclear whether automating most, but not all, human labor will increase or decrease wages. The dramatic changes from mass unemployment might occur much later than the automation of most current job tasks.

Post-Instrumentality Purpose

Many challenges that motivate human action can, in principle, be automated. Given enough time, machines could outperform humans in these tasks.

To a first approximation, that eliminates what currently gives our lives purpose. What, then, will be left for humans to care about accomplishing? Bostrom calls this the age of post-instrumentality.

Much of the book describes how social interactions could provide adequate sources of purpose.

He briefly mentions that some Eastern cultures discourage attachment to purpose, which seems like a stronger argument than his main points. It's unclear why he treats this as a minor detail.

As Robin Hanson puts it:
Bostrom asks his question about people pretty close to him, leftist academics in rich Western societies.


If I live millions of years, I expect that I'll experience large changes in how I feel about having a purpose to guide my life.

Bostrom appears too focused on satisfying the values reflecting the current culture of those debating utopia and AI. These values mostly represent adaptations to recent conditions. The patterns of cultural and value changes suggest we're far from achieving a stable form of culture that will satisfy most people.

Bostrom seems to target critics whose arguments often amount to proof by failure of imagination. Their true objections might be:

* Arrogant beliefs that their culture has found the One True Moral System, so any culture adapting to drastically different conditions will be unethical.
* Fear of change: critics belonging to an elite that knows how to succeed under current conditions may be unable to predict whether they'll retain their elite status in a utopia.

The book also dedicates many pages to interestingness, asking whether long lifespans and astronomical population sizes will exhaust opportunities to be interesting. This convinced me of my confusion regarding what interestingness I value.

Malthus

A large cloud on the distant horizon is the pressure to increase population to the point where per capita wealth is driven back down to non-utopian levels.

We can solve this by ... um, waiting for his next book to explain how? Or by cooperating? Why did he bring up Malthus and then leave us with too little analysis to guess whether there's a good answer?

To be clear, I don't consider Malthus to provide a strong argument against utopia. My main complaint is that Bostrom leaves readers confused as to how uncomfortable the relevant trade-offs will be.

Style
The book's style sometimes seems more novel than the substance. Bostrom is the wrong person to pioneer innovation in writing styles.

The substance is valuable enough to deserve a wider audience. Parts of the book attempt to appeal to a broad readership, but the core content is mostly written in a style aimed at professional philosophers.

Nearly all readers will find the book too long. The sections (chapters?) titled Tuesday and Wednesday contain the most valuable ideas, so maybe read just those.

Concluding Thoughts

Bostrom offers little reassurance that we can safely navigate to such a utopia. However, it's challenging to steer in that direction if we only focus on dystopias to avoid. A compelling vision of a heavenly distant future could help us balance risks and rewards. While Bostrom provides an intellectual vision that should encourage us, it falls short emotionally.

Bostrom's utopia is technically feasible. Are we wise enough to create it? Bostrom has no answer.

Many readers will reject the book because it relies on technology too far from what we're familiar with. I don't expect those readers to say much beyond "I can't imagine ...". I have little respect for such reactions.

A variety of other readers will object to Bostrom's intuitions about what humans will want. These are the important objections to consider.

P.S. While writing this review, I learned that Bostrom's Future of Humanity Institute has shut down for unclear reasons, seemingly related to friction with Oxford's philosophy department. This deserves further discussion, but I'm unsure what to say. The book's strangely confusing ending, where a fictional dean unexpectedly halts Bostrom's talk, appears to reference this situation, but the message is too cryptic for me to decipher.
98 reviews · 3 followers
May 21, 2024
I was intrigued by the subtitle of this book: ‘Life and Meaning in a Solved World.’ With the pace of change and technological maturation, we might, at some point, reach a state where there is little to no work for human beings. How would we still find meaning in life, since we derive a large part of our meaning from work and solving problems? An interesting research question, poorly executed. Too complex, too many pages, and still in the dark about why Nick embedded a fairytale in his book. I would not recommend this book to anyone.

Back to the book…

A quote from Keynes continues to echo in many books – he predicted that our workweek would decrease to just 15 hours a week. Mind you, he made this prediction back in 1930. We are still far off, and this is mainly because we are using our wealth for consumerism; however, with the pace of Artificial Intelligence and Robotics, it could happen quickly. If this becomes cheap enough, it will push humans out of the labor market.

In his extensive writing, Nick posits that if we want to sustainably improve the living standards of all humans, we need to control population growth; otherwise we end up in the ‘Malthusian state.’ Simply put: the mouths to feed will outnumber what we can produce. Technology can only help us so far; it still comes down to the scarcity of land. I wonder how big a problem this is, assuming that affluent people will have fewer children (there is some scientific evidence for this).

In his book, he also outlines some of the limitations of ‘technological maturity’ – which is described as ‘a condition in which a set of capabilities exists that afford a level of control over nature that is close to the maximum achievable in the fullness of time.’ Some of the limitations are cosmological in nature (what is within our reach is finite), prudential barriers (perhaps some technologies are too dangerous), and/or axiological (simply not aligned with our values).

‘Idle hands are the devil’s workshop’, so in a solved world, how will we respond to an abundance of leisure and wealth? We don’t really know. We could extrapolate from how children, aristocrats, or retirees behave, for instance. However, they are all part of a wider system in which human beings work. As Nick states, maybe freedom is a void depleted of purpose. But this could be solved with artificial purpose – we could engineer any experience (power over nature). To me, this would mean the end of being human.
Keijo · Author of 8 books · 13 followers
July 7, 2024
Some interesting ideas but painfully boring and very badly written. Contains lots of pointless filler. Bostrom's lack of engagement with the concept of utopia (and its history) frustrated me a lot given the explicitly utopian nature of the book.
17 reviews
June 6, 2024
The book contains interesting ideas, but its bothersome structure and excessive filler material make it feel long and sparse.
Adam · 194 reviews · 11 followers
September 1, 2024
Interesting topic but presented in a confusing, long-winded, and frankly boring way. Massive disappointment after Bostrom's book on superintelligence, which I thought was excellent.
Hasta Fu · 119 reviews · 2 followers
July 11, 2025
Notable Quotes
1. On the Nature of Utopia:
“In a solved world, where most of the problems we currently grapple with have been addressed, the challenge becomes not survival or achievement, but meaning itself.”
Paraphrased from the book’s exploration of a post-instrumental world, emphasizing the shift from survival to existential purpose. Humans become the ones searching for meaning. And perhaps humans will no longer be the only creatures dominating the world: if the silicon-based creature is superior to the carbon-based one, would the world be run by two different kinds of creature with different specializations?

2. On Human Purpose:
“When all instrumental tasks are delegated to machines, what remains for humans is the question of what we are for.”
This captures Bostrom’s focus on the “purpose problem” in a world where AI outperforms humans. If even the creation, maintenance, monitoring, and destruction of machines is done by machines, what about the disposal of humans? Do we take a boat that sails off somewhere, like the elves in LOTR who had nothing more to achieve?

3. On Deep Redundancy:
“The risk is not just technological unemployment, but a deeper redundancy of the human spirit—when our contributions are no longer needed, how do we define ourselves?”
A key idea reflecting the concept of deep redundancy, where humans face existential obsolescence. Will there be someone to remember you? Will your spirit be kept forever, on video? After several generations, no one will know you.

4. On Meaning in Utopia:
“A utopia where every desire is instantly satisfied might leave us with a new kind of poverty—a poverty of aspiration.”
This highlights the potential downside of a world without struggle or scarcity. Like the experiment with mice that were given abundant food and water, which eventually stopped having sex and producing offspring.

5. On Neurotechnology and Bliss:
“If we could engineer our minds to experience perpetual bliss, would we choose it? And if so, would we still be human?”
From discussions about neurotechnological enhancements and their implications for human identity. This brings to mind a movie in which a tough guy from the present is time-traveled to a different era, where a stranger invites him to her house to have sex (a VR one), and after all the obstacles are overcome, they grow a good relationship toward each other, go home, and have real bodily sex. Would the first VR sex still be called sex? Have we arrived at a state where all the sensory input of the physical can be reproduced by machines, so that the experience is the same? Is there a shift of morality, or a health concern, in the utopia such that even a male and a female alone in a room do not engage in physical intercourse but opt for a virtual one instead? What is the meaning of sex then?

Key Concepts
1. Post-Instrumental World:
A future where superintelligent AI has solved major human problems (e.g., poverty, disease, conflict), rendering traditional instrumental goals (work, survival) obsolete. This shift forces humanity to confront existential questions about purpose and meaning.

2. The Purpose Problem:
In a world where AI handles all practical tasks, humans must find new sources of meaning. Bostrom explores whether activities like art, relationships, games, or even engineered happiness via neurotechnology can fill this void.

3. Deep Redundancy:
The idea that humans become not only economically redundant due to automation but existentially redundant, as AI outperforms them in creativity, decision-making, and even emotional roles like parenting or companionship. Once we are economically redundant, which aspects of us could remain non-redundant?

4. Malleability of Human Nature:
Advances in neurotechnology, genetic engineering, or AI could allow humans to reshape their desires, emotions, or cognitive capacities. Bostrom questions whether this malleability could solve the purpose problem or create new ethical dilemmas. For all the advancements in medicine, we still have people growing old, in pain, dying. The clinics and hospitals don't disappear.

5. Hedonic Engineering:
The concept of using technology to induce states of perpetual happiness or fulfillment, e.g., through brain stimulation or virtual realities. Bostrom examines whether this would be a desirable or hollow solution to the purpose problem.

6. Play, Art, and Sociality:
Bostrom speculates that non-instrumental activities like creative pursuits, games, or deepened social connections could become central to human life in a utopia, though he questions their sufficiency for lasting fulfillment.

7. Spiritual Longing in a Solved World:
Even in a utopia, humans may retain a “spiritual longing” for transcendence or purpose beyond material comfort. Bostrom explores whether this longing could be satisfied or if it’s an inherent part of human nature.
44 reviews
July 17, 2025
Bandpass Filters:
There could also be prudential barriers that are high but not infinitely high: bandpass filters that block civilizations only within a certain range of epistemic sophistication – those that are too clever and coordinated to simply tunnel through yet not clever enough to climb over. Consider a bottle of liquid labeled "dihydrogen monoxide". A thirsty infant will gladly drink it, since they can't read the text. So will a thirsty chemist, since they understand that it is just water. But the slightly educated midwit will refuse to imbibe, in view of the scary-looking nomenclature. This is the bracket, by the way, which many of you are set to enter upon the deferral of your degrees.

Children
Young children in modern societies don't work. Economically and socially, their status is ambiguous. They have virtually no disposable income; yet they live in a parental "welfare state" that caters to all their needs. They are powerless, disenfranchised, and their opinions disvalued; yet they are beloved and nurtured, and their welfare is often the focal point for the people around them. There are also huge biological confounders— the fact that some situation is good for a child does not give much evidence that it is good for an adult. While children often have rich and happy lives, their experiences may therefore only be somewhat relevant to the case we are considering.

If we look at this process, we can see that the main functions performed by our education system are threefold.
First, storage and safekeeping. Since parents are undertaking paid labor outside the home, they can't take care of their own children, so they need a child-storage facility during the day.
Second, disciplining and civilizing. Children are savages and need to be trained to sit still at their desks and do as they are told. This takes a long time and a lot of drilling. Also: indoctrination.
Third, sorting and certification. Employers need to know the quality of each unit—its conscientiousness, conformity, and intelligence—in order to determine to which uses it can be put and hence how much it is worth.
What about learning? This may also happen, mostly as a side effect of the operations done to perform (1) through (3). Any learning that takes place is extremely inefficient. At least the smarter kids could have mastered the same material in 10% of the time, using free online learning resources and studying at their own pace; but since that would not contribute to the central aims of the education system, there is usually no interest in facilitating this path.

I go further and assert that as we look deeper into the future, any possibility that is not radical is not realistic.

So at technological maturity we will have the means to engineer away our ability to experience boredom, yet one might worry that doing so would have undesirable consequences because of the usefulness of boredom as a prod to push us away from boringness. If we think that being in objectively boring conditions is bad, this lends a certain instrumental value to our capacity for feeling subjective boredom.

Intrinsification: The process whereby something initially desired as a means to some end eventually comes to be desired for its own sake as an end in itself.

Interestingness and Boringness

Human lives approach fulfillment insofar as they fill their natural allotment of years with vigorous activity.

Fulfillment, Richness, Purpose

Meaning sustains motivation even during periods when immediate reward is not forthcoming.

Meaning based motivation has a degree of independence of the immediate circumstances in which we find ourselves.

In a large group, there is a free rider problem: each person’s contribution to the shared mission would typically make only a small difference to the likelihood of the mission being achieved.
Harry Harman · 839 reviews · 17 followers
Read
November 8, 2025
The telos of technology, we might say, is to allow us to accomplish more with less effort. If we extrapolate this internal directionality to its logical terminus, we arrive at a condition in which we can accomplish everything with no effort.

meaning and purpose in a “solved world”?

ensure that it serves humanity and not the other way around.

if I’m sacrificing time with friends and family that I would prefer, but then ultimately the AI can do all these things. Does that make sense? I have to have deliberate suspension of disbelief in order to remain motivated. -Elon Musk

They seek the highest expected utility.

For the most part, however, we have used our increased productivity for consumption rather than leisure.

For one million dollars, you can live five extra years in perfect health; triple that, and you can add a further five healthy years. Spend a bit more, and make yourself immune to cancer, or get an intelligence enhancement for yourself or one of your children, or improve your looks from a seven to a ten. Under these conditions—which could plausibly be brought about by technological advances—there could remain strong incentives to continue to work long hours, even at very high levels of income.

digital children

hard limit

the altruistic reason for working additional hours may theoretically get stronger the higher a person’s wages. More additional wild animal hospital rooms could be funded with an extra hour of work if your hourly rate is a thousand dollars than if you’re making minimum wage.

the number scales linearly with resources.

but then what happens if society gets sufficiently rich and utopian that there are no more people in need?

utilitarian

prima facie

derive advantages from our elevated standing—such as the perks attendant on having high social status, or the security one might hope to attain by being better resourced than one’s adversaries.

even if we have swimming pools full of cash, we still need more

the fact that our hedonic response mechanism acclimates to gains. We begin taking our new acquisitions for granted, and the initial thrill wears off. Imagine how elated you would be now if this kind of habituation didn’t happen: if the joy you felt when you got your first toy truck remained undiminished to this day, and all subsequent joys—your first pair of skis, your first bicycle, your first kiss, your first promotion—kept stacking on top of each other.

more money, or more exclusive status symbols.

income one could earn by selling one’s labor remains significant compared to the income one derives from other sources, such as capital holdings and social transfers.

imagine that you could buy an intelligent robot that can do everything that a human worker can do. And suppose that it is cheaper to buy or rent this robot than to hire a human. Robots would then compete with human workers and put downward pressure on wages. If the robots become cheap enough, humans would be squeezed out of the labor market altogether. The zero-hour workweek would have arrived.

the wages paid to human robot-designers and robot-overseers could exceed the total wages paid to workers today

Capital keeps accumulating; so eventually land is the only scarce input.

The assumption that humans will remain in perfect control of the robots is definitely open to doubt

when I speak here of “the robot population” or “the number of robots”, what I mean to refer to is the factor share of the automation sector in the economy. Rather than a population composed of some specific number of independent robots, it could all just be one integrated AI system that controls an expanding infrastructure of production nodes and actuators.

throwing spears and telling tales around campfires

Humans don’t earn any income by working but derive income from land.

Malthusian trap

“square-cube law” … Running things at scale therefore tends to lower unit costs.
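
The square-cube law quoted above can be sketched numerically: if cost tracks surface area (scale squared) while capacity tracks volume (scale cubed), unit cost falls as 1/scale. A toy model under that assumption (my illustration, not the book's):

```python
# Square-cube toy model: cost ~ surface area (L^2), capacity ~ volume (L^3),
# so cost per unit of capacity falls as 1/L as things scale up.
def unit_cost(scale: float) -> float:
    surface = scale ** 2   # stand-in for hull/material cost
    volume = scale ** 3    # stand-in for usable capacity
    return surface / volume

print(unit_cost(1.0))    # 1.0
print(unit_cost(10.0))   # 0.1 -> 10x the scale, 10x cheaper per unit
```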

(“O’Neill cylinders” refers to a space settlement design proposed in the mid-seventies by the American physicist Gerard K. O’Neill, in which inhabitants dwell on the inside of hollow cylinders whose rotation produces a gravity-substituting centrifugal force.)

Dyson sphere, a hypothetical system (described by the physicist Freeman Dyson in 1960) that would capture most of the energy output of a star by surrounding it with a system of solar-collecting structures. For a star like our Sun, this would generate 10^26 watts.
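
As a sanity check on the solar-output figure quoted above (roughly 10^26 watts), the Stefan-Boltzmann law reproduces the Sun's luminosity from standard values (my arithmetic, not the book's):

```python
import math

# Solar luminosity via the Stefan-Boltzmann law: L = 4*pi*R^2 * sigma * T^4
sigma = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
R_sun = 6.957e8      # solar radius, m
T_sun = 5772.0       # solar effective surface temperature, K
L_sun = 4 * math.pi * R_sun ** 2 * sigma * T_sun ** 4

print(f"{L_sun:.2e} W")   # ~3.8e26 W, on the order of 10^26 as quoted
```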

push close to the Landauer limit of energy efficiency
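The Landauer limit mentioned above is concrete enough to compute: erasing one bit of information costs at least k_B * T * ln 2 joules. A quick check at room temperature (my illustration, not the book's):

```python
import math

# Landauer limit: minimum energy to erase one bit is k_B * T * ln(2)
k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
E_bit = k_B * T * math.log(2)

print(f"{E_bit:.2e} J per bit erased")   # ~2.87e-21 J
# At the 10^26 W of a Dyson sphere, that funds ~3.5e46 erasures per second
print(f"{1e26 / E_bit:.1e} bit erasures per second")
```

So a civilization running near the Landauer limit on a Dyson sphere's power budget could perform on the order of 10^46 irreversible bit operations per second.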
Brian Clegg · Author of 161 books · 3,163 followers
April 18, 2024
This is one of the strangest sort-of popular science (or philosophy, or something or other) books I've ever read. If you can picture the impact of a cross between Douglas Hofstadter's Gödel, Escher, Bach and Galileo's Two New Sciences (at least, its conversational structure), with a touch of David Foster Wallace's Infinite Jest thrown in, you can get a feel for what the experience of reading it is like: bewildering, with the feeling that there is something deep that you can never quite extract from it.

Oxford philosopher Nick Bostrom is probably best known in popular science for his book Superintelligence in which he looked at the implications of having artificial intelligence (AI) that goes beyond human capabilities. In a sense, Deep Utopia is a sequel, picking out one aspect of this speculation: what life would be like for us if technology had solved all our existential problems, while (in the form of superintelligence) it had also taken away much of our apparent purpose in life. To get a feel for this, and for Bostrom's writing style, he summarises it thus:

The telos of technology, we might say, is to allow us to accomplish more with less effort. If we extrapolate this internal directionality to its logical terminus, we arrive at a condition in which we can accomplish everything with no effort. Over the millennia, our species has meandered a fair distance toward this destination already. Soon the bullet train of machine superintelligence (have we not already heard the conductor's whistle?) could whisk us the rest of the way.

If the style is reminiscent of some half-remembered, rather pompous university lecture, this is not accidental. The starting point of the book's convoluted structure is a transcript of a lecture series given by a version of Bostrom that feels like the self-portrayals of actors in the TV series Extras, where all their self-importance is exaggerated for humorous effect.

If we only got the lectures, the approach wouldn't be particularly radical. But there is far more. Firstly, fictional students ask questions during the lecture (some subject to heavy-handed putdowns by pseudo-Bostrom). And there are three extra students who exist outside the lectures, Tessius, Kelvin and Firafix who provide a commentary on the whole process. We also get Bostrom's handouts and his reading material for the next lecture, much of which is based on the 'Feodor the Fox correspondence'. This takes the form of a (to be honest, deeply tedious) pseudo-ancient series of letters from Feodor to his uncle in a world populated by animals. I had to skip much of this to avoid falling asleep.

One way of looking at the book is that it is clever, original and verging on the mind-blowing. Another is that it's clever-clever, pretentious and often distinctly hard going. In practice, I suspect it's both. Certainly the lecture content is sometimes an opaque mix of philosophy and economics - yet despite finding reading it like walking through mud, I wanted to continue, arguably more for the satisfaction of completion than any benefit I got from reading it.

Sometimes you have to congratulate a magnificent failure, where someone has taken a huge chance, pushing the boundaries, but ultimately it doesn't work. I can't say I enjoyed reading this book - but I think I am glad that I did. However, because of its ambiguous nature, I can’t go beyond three stars.
Yosra Ali · 83 reviews · 31 followers
January 29, 2025
This book suffers from a poor structural choice, presenting itself as a sequence of lectures—though not actual lectures—interspersed with interruptions that seem intended to be humorous or thought-provoking. However, their purpose is unclear. Are they meant to entertain? Refocus the reader’s attention? Provide additional insight? Whatever the intention, I found these diversions distracting and largely unhelpful.
The book rests on the assumption that humanity has reached technological maturity, where all scientific problems are solved, and everything we need or desire is readily available. From this point, the author explores a fundamental question: What happens next? What does it mean to live in a world where all material needs are met?
To explain it better, I believe the author did a good job by introducing an intriguing distinction between shallow and deep redundancy. In shallow redundancy, machines outperform humans in all tasks, yet people still find enjoyment in doing them for the sake of it without being burdened by needs or money. However, the author argues that redundancy will ultimately become deep, meaning that even pleasurable activities will be rendered meaningless by automation. He explores this idea through case studies on exercising, shopping, and parenting, demonstrating how technology could strip these activities of any kind of relevance, importance, enjoyment or satisfaction.
From there, the book chases down how meaning and purpose might be found in such a utopian world. However, the arguments become increasingly vague and optimistic without a strong scientific or philosophical foundation. The writing shifts from structured analysis to hopeful speculation, making it difficult to determine whether the author actually succeeds in providing a satisfactory answer.
That said, certain arguments are well-articulated. One particularly engaging section challenges the assumption that a post-work society would be inherently boring or meaningless. The author suggests that such a world might, paradoxically, be overloaded with meaning, though for some, this meaning might feel artificial, or even like cheating. This leads into a deep discussion on the concept of “cheating” and how society might come to redefine or accept artificial meaning. These philosophical explorations are thought-provoking and added value to the book’s content.
However, I found the author’s optimism about a future utopia somewhat excessive. The discussion lacks clear direction, often drifting between different philosophical perspectives without fully committing to one. The author presents two key reasons for his optimism: first, he challenges the notion that a utopia lacking the traditional concept of meaning would necessarily be undesirable, given how human values and perspectives might evolve. Second, he suggests that AI could generate artificial purposes if needed. However, the book offers little justification for why such an optimistic vision is realistic or directional rather than mere wishful thinking.
Overall, while the book presents intriguing questions and some compelling arguments, its structure, interruptions, and lack of concrete support for its conclusions make it a frustrating read. It raises potential utopian possibilities but ultimately falls short of providing a convincing or well-grounded vision of the future.
19 reviews
July 6, 2025
This is probably the only book you need to read about how to plan for a life of meaning in the post-singularity technological utopia.

I mean that in two different ways: first, that this book is indeed comprehensive and well-written. Bostrom is remarkably talented at writing readable prose about complex topics, and if you read the first few pages, you are likely to be grabbed and commit to all 500 (provided you are remotely interested in the topic, of course).

The book is structured as pseudo-fictional, with a character of "Bostrom" giving six days of lectures on the topic of utopia, and several "student" characters attending said lectures and interacting with each other in between days. This develops a rhythm of direct philosophy alternating with more whimsical vignettes, often couched as the "assigned readings" given to students, which are somewhat related to the philosophy at hand, but also stand alone as very entertaining. "The Elevation of ThermoRex", a short story about an industrial billionaire who dies and leaves his fortune in trust to his space heater, is a true standout, and is arguably worth the price and effort of the whole book on its own. The structure and quality of the book as a whole carries shades of Hofstadter's GEB (Gödel, Escher, Bach), which is about as high a compliment as I can give.

But when I say "this is the only book you need on the topic", I also mean it somewhat cynically. In my opinion, Bostrom's greatest success in this book was reassuring me that, yes, we can find meaning in a solved world. He does so at length, by examining many angles, existing philosophies, arguments and counterarguments, and so on. He even leaves crumbs of what he thinks the best solutions might be, but he (wisely, I would say) also states that so much will happen between now and then that the best solutions will change and be discovered over time. So: others may disagree, but I'm satisfied with his answer. Should the utopia arrive, I'm ready. We'll figure it out.

Is it irresponsible that the book ignores the many, many reasons why we may FAIL to achieve this utopia (misaligned AI being the most likely reason, though there are also others)? It's hard to say. On the one hand, I do admire Bostrom for narrowing his focus towards writing a definitive work on this specific slice of future possibilities, and allowing the many words spilled about how to get there - including his own book, Superintelligence, from a decade ago - to speak for themselves. But on the other hand, when the conclusion is so relatively anodyne, the book ultimately feels more like a nice walk on a breezy summer day than a truly impactful work of philosophy. So, here in the pre-utopian 2025, I recommend Deep Utopia as an enjoyable reading experience more than as a cornerstone of instrumental philosophy. But I recommend it nonetheless.
Seth Benzell
261 reviews · 15 followers
December 17, 2024
Erudite (and self-indulgent, though mostly endearingly and consciously so) discussion of the meaning of life, what a perfect utopia might achieve, and a discussion of values themselves. It's presented as a lecture series with absurd, and mostly fun, homework assignment asides about an EA-Fox trying to save his forest and an absurdly rich space heater experiencing its best possible existence.

I particularly liked:
>The discussion of Sisyphus, and what exactly makes his existence so canonically meaningless. He considers several variations, inviting the reader to consider which would exactly make his life meaningful.
>The discussion of interestingness as a value, whether there is a finite amount of it in the universe, and possible etiologies of its value
>The 'paternal loving benevolence to a space heater' story is a bit too long, but I think is very good and provocative

Stuff that I knew / found unnecessary:
>The very big numbers about how great utopia could be, and all the cool techs we could have
>The first few chapters about how automation will make a 'big rock candy mountain' utopia possible are old-hat (although I do like how he dismissively shits on the person raising global warming as some kind of fundamental bar on humanity's rise)
>I think the discussion of ‘almost complete automation’ was superficial, and not really necessary here

I wish there were more:
>His discussion of Malthusian traps upon traps is cool, but I feel it doesn't engage fully with the ideas that Darwinian genetic evolution has been mostly tamed by humans, that Darwinian cultural evolution is much more speculative and contingent, and that Darwinism generally is only the tautology "that which survives, survives." In a future with immortal beings, Darwinism may manifest as competition to have an increasingly secure, big, and content immortality (given that suicide will be a major cause of death), rather than as creating a bunch of agents who are barely above subsistence.
>The author has a cool introduction of a discussion about the moral status (and power) of fictional and simulated persons, hinting that this might limit the kind of aesthetic and ludic experiences possible (i.e. you wouldn’t be allowed to torture a sufficiently advanced NPC in utopia, for moral reasons). I wish there were much more discussion of this specific constraint, because as someone who feels like playing elaborate RPGs might be part of their utopia, this seems like it might be the most binding one (not that I want to torture anyone! Just that there might be characters that have tragic fates in my ideal fantasy RPG).
>Engagement with Nietzsche, who gets a nod but is mostly swept aside as speaking to an age with very different challenges.
>Do foxes actually eat carrots?
Max Blair
58 reviews
January 17, 2025
There are so many interesting ideas explored in the book. For the first third or so I felt like an alternate version of myself was talking to me about ideas I often have, a version that had fleshed those ideas out and thought about them more deeply than I have. The questions of what utopia might look like, what different forms it might take, and how our lives would be radically different are all addressed. It comes across as more than a philosophical exercise; it feels like Bostrom believes such states are truly attainable and that we should work toward them. This interest in the long-term trajectory of humanity is one I share, and I found it encouraging to hear someone of Bostrom's intellectual level lend credence to that interest.

The format of the book is playful, often with dialogues and fictional interludes. At times I found this charming while at others it seemed unnecessary and made the book drag on a bit too long. The 'Feodor the Fox' sections seemed to be setting up some big allegorical revelation but ended up just fizzling out while the 'ThermoRex' short story had an interesting conclusion but could have been rendered in many fewer pages.

I appreciated very much Bostrom's continued insistence that Utopia would include the well-being of non-human animals and the subtle references throughout that we should move away from using animals for food.

Ultimately in 'Deep Utopia' I found an inspiring vision for what our future could be and motivation to work toward making that possible future a reality. There is a thought experiment at the very end- if you were offered a gamble where one outcome is that the rest of your life would be the optimal trajectory for you (which could include being around to experience longevity breakthroughs and physical and cognitive enhancements and thereby live for many thousands or millions of years in deep fulfillment and pleasure) and the other outcome is that you drop dead immediately, what percent chance would the optimal trajectory outcome need to have for you to take the gamble? It's a fascinating question and, even if we will never be presented with such a gamble, in a way I feel that we are faced with that choice every day by how we choose to live our lives. Of course we cannot control everything, or even very much, but to the extent that we can choose to live in certain ways, we can try our utmost to live on the utopian trajectory with every decision. I found this inspiring.

There's so much in this book, it would probably be worth another read at some point. I'm definitely going to read other Bostrom writings going forward.
Ayman Jabr
20 reviews
November 28, 2024
This book is what I had been searching for for a long time. It goes deep into the core topics, ideas, and "problems" that would come up when we reach "the end of history", when and if we manage to solve every single problem we might encounter. These are the problems we will still face, namely the problem of meaning. The main topic discussed in this book is: "What do we do when we have reached utopia and there are AIs/robots that can do everything for us, but much more efficiently, and most importantly, why bother to do anything at all?"

Bostrom in this book makes the clearly stated assumption that not only will we have well-aligned AIs, but that we will also solve society's dysfunctions and all the ethical considerations that stem from creating conscious "machines". He assumes all of these will be solved, and starts from there, discussing the problems that we will still always face.

He discusses in depth the core issue of why do anything at all, when not only will jobs be better done by AIs, but even each and every possible game and social activity; when an AI knows you and can recommend things that you will like better than you can yourself; when it can raise your children in the best possible way for the child's development, down to the milliseconds of their lives.

Bostrom in each chapter states clearly the case being discussed; he then discards various ideas and objections that people tend to raise or associate with the given case, thus providing a very stable base to go deep on the core issues of any given topic without distractions interfering. I believe this is possible because Bostrom has given so many lectures on these AI topics and has collected the diverse questions, objections, and arguments that people tend to have.

The storytelling in this book is also exceptional. It blends lectures that Bostrom gives with POV perspectives of a bunch of people assimilating and discussing the ramifications of these lectures, with "lecture notes" that give examples and go into detail on specific topics that Bostrom wants to focus on, and with a story about anthropomorphic animals who discuss other philosophical/societal problems. I have never read a book that could blend all of these together so well: no topic feels too heavy at any point, nor do you feel bored by a topic that drags on too long, nor does Bostrom shy away from addressing the core issues of any topic.


Absolutely loved this book. It is exactly what I was looking for for a long time, 10/10
1 review
April 25, 2024
An interesting topic but the book was disappointing. Too many things I simply cannot understand:
1) Horrible way to name chapters from Monday to Saturday. Maybe it's some kind of philosophical joke that I didn't understand, but I've never seen such meaningless chapters in a book before.
2) Lengthy, pointless and boring animal adventures at the end of the chapters.
3) The speculations are too pedantic and sprawling and take attention away from the main point.

4) "Just upload your mind"?? The author, who is otherwise pedantic to the point of exhaustion, repeats this very controversial claim as if it were self-evident! A few questions that quickly come to mind:
a. Science still cannot say practically anything about the birth and essence of consciousness.
b. If a person’s mind is uploaded like a computer program, is consciousness really transferred at the same time?
c. If not, the result is a zombie who behaves outwardly like him, in which case he does not benefit from uploading anything.
d. If the result is indeed a conscious being, but he still remains conscious only in his own body, that does him no good either.
e. If the consciousness were somehow magically transferred during uploading, what would happen to the consciousness that is in the biological body? Would consciousness be shared, and how would such awareness be experienced? Etc., etc. Alas, not a single speculation on the subject in a book of thousands of speculations.

I give two stars, because the book has a few good ideas about purpose and, as far as I know, not much has been written about purpose in techno-utopia before.
Rob Brock
409 reviews · 12 followers
November 9, 2024
I read this book as part of my exploration of artificial intelligence, and I had high expectations going into it because of the author's mega-best-seller, Superintelligence. This book was unexpected, however, because in many ways it had less to do with AI than it did with the idea of a future utopia that could be enabled by AI. The author writes that his previous book was meant to be an alarm about the dangers of AI, but this book instead is intended to look at the good that AI can mean for humanity (or post-humanity) if we can guide it toward the type of utopia we desire. After talking about the types of utopia we might imagine, he talks about the implications for humanity in this utopian world. The basic arguments in the book are interesting, but what elevates this book a little bit is the playfulness with which the author writes. The book is written as if it were a series of lectures by Professor Bostrom over the course of a week, and from the perspective of three individuals auditing the lectures and discussing them outside of class. Additionally, these auditors also consume the course reading material, some of which is provided in the framework of a parable, like a classic fairy tale. At one point, the author even appears in the book as a disguised version of himself with a beard, giving a spoken word poetry performance. There were times when I found the speculation about features of an imagined utopia to be tedious, but the humor infused by these side characters and stories kept things moving forward nicely.
Marcus
23 reviews
October 21, 2024
Bostrom’s previous book, Superintelligence, is a favourite of mine, so I was very much looking forward to reading this one. It was interesting. A bit of a disappointment. On first read at least.

The main problem I had with it was the style. I’ve noticed that Bostrom seems to enjoy a bit of fiction writing, and he has framed this book as pseudo-fiction, presenting himself as giving a series of lectures discussing the ideas of utopian futures and inspecting the values and meanings we should/want to seek. A fascinating topic indeed. He intersperses these lectures with students’ conversations and other stories/fables. Yeah.

I quite liked the side stories (although they are written in a rather bland style), but I think the book would have been far better if the main part was just written as straight-up non-fiction. The content just feels more conducive to that.

The main issues discussed are Big Ideas, things I love to read about: meaning and value. These were covered in an interesting and abstract way, potentially too abstract in parts, but I guess this is how we find general principles. I think this will make for a good re-read one fine day.
Leandro Melendez
Author of 1 book · 7 followers
August 11, 2025
I would like to give it 5 stars, but the writing style made the book somewhat heavy going for me; the length seemed a bit more than it needed to be, some discussions were left hanging in the air, and, again on the writing, I think many grandiose words are used somewhat unnecessarily.

But after those points against it, the book is fascinating, with somewhat optimistic questions (at first) about what could happen with AI. Especially assuming it helps solve all our problems, instead of becoming Skynet or The Matrix.

Now, the depth of the book comes when the question is no longer whether AI will solve our lives, but what we will do once it has solved them. If we take money and need out of the equation, what would get us out of bed every morning?

It is very interesting to think about what guides, motivates, or compels us in life once we remove the usual dependencies like money, food, pleasure, entertainment... It may sound unrealistic, but I think it is very possible that with technology and automation we could obtain all of that the instant we desire it. So in that situation, what more could one desire?

Highly recommendable, just like Superintelligence. While Mr. Bostrom writes unnecessarily complexly, he also touches fabulously on complex topics.