More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity

How Silicon Valley’s heartless, baseless, and foolish obsessions—with escaping death, emergent AI tyrants, and limitless growth—pervert public discourse and distract us from real social problems 

Tech billionaires have decided that they should determine our futures for us. According to Elon Musk, Jeff Bezos, Sam Altman, and more, the only good future for humanity is one powered by trillions of humans living in space, functionally immortal, served by superintelligent AIs.  
 
In More Everything Forever, science writer Adam Becker investigates these wildly implausible and often profoundly immoral visions of tomorrow—and shows why, in reality, there is no good evidence that they will, or should, come to pass. Nevertheless, these obsessions fuel fears that overwhelm reason—for example, that a rogue AI will exterminate humanity—at the expense of essential work on solving crucial problems like climate change. What’s more, these futuristic visions cloak a hunger for power under dreams of space colonies and digital immortality. The giants of Silicon Valley claim that their ideas are based on science, but the reality is they come from a jumbled mix of shallow futurism and racist pseudoscience.  
 
More Everything Forever exposes the powerful and sinister ideas that dominate Silicon Valley, challenging us to see how foolish, and dangerous, these visions of the future are. 

384 pages, Hardcover

First published April 22, 2025

419 people are currently reading
5313 people want to read

About the author

Adam Becker

2 books · 256 followers
Adam Becker is a science writer with a PhD in astrophysics from the University of Michigan and a BA in philosophy and physics from Cornell. He has written for the New York Times, the BBC, NPR, Scientific American, New Scientist, and others. He has also recorded a video series with the BBC and several podcasts with the Story Collider. Adam is a visiting scholar at UC Berkeley's Office for History of Science and Technology and lives in California.

Community Reviews

5 stars: 577 (47%)
4 stars: 454 (37%)
3 stars: 125 (10%)
2 stars: 40 (3%)
1 star: 16 (1%)
Nathan Shuherk
393 reviews · 4,418 followers
November 14, 2025
“Here’s what the Silicon Valley CEOs believe and why they’re a bunch of dumbfucks that need to be taxed out of existence” lmao. This book is so much fun. Genuinely, every bit of this is how every headline makes me feel, a great tonal battle between comedy and depression - we’re in the worst timeline, but at least we have some funny experiences explaining it all
Manny
Author · 48 books · 16.1k followers
October 7, 2025
I loved Adam Becker's previous book, on the history and philosophy of quantum physics. Now he's pointed his microscope towards the AI world. Becker's training is in physics rather than AI, and I thought this one wasn't quite as good; but "not quite as good as What Is Real?" leaves scope for being very good indeed. If you don't feel like continuing with this rather long review, put More Everything Forever on your to-read list and stop here. If you're curious to hear some rather speculative arguments, read on.

You only have to get a chapter into the book and look at the references to be convinced of two important things: Becker is very smart, and he's a very hard worker. He has read a truly impressive amount of background material and tried to meet personally with everyone important who was willing to talk to him. He presents a couple of lists in one of the appendices, both the people who consented to meet and the people who declined; there are some surprising items in each list. As with the quantum mechanics book, he's made a really determined attempt to figure out the sociology. Where do these ideas come from? Who is pushing them? Why do mainstream people in the subject believe some things that on the face of it look crazy, and show reluctance to believe other things that on the face of it are much more sensible? Where is this heading?

I lived in Silicon Valley for several years in the early noughties and still have some contacts there, and I have continued to work actively in AI and read about it; I had come across many parts of the story in various places, but it was still astonishing to see it all carefully organised. The evidence presented here is very hard to brush aside. Many of the big AI-centred companies, and the people who run them, have bought into an extraordinary philosophical picture which is well documented and easily accessible. There is more than one version, and Becker does a great job of disentangling the various splits and schisms in the Church of AI, but the basic idea is always the same. We, the human race, have developed technology that will soon give us godlike powers. It can take us off our little planet and into space. We can first colonise the Solar System, probably starting with Mars, then our galaxy, then, ultimately, the entire universe. I think I first saw this described seriously in Max Tegmark's 2017 Life 3.0 where Tegmark, approvingly, ascribes the ideas to Google's Page and Brin. It seems that a great many other influential people now agree with them.

As noted, Becker is a physicist, and he has no trouble explaining all the things that are wrong with the idea of having people colonise the universe. In fact, it's probably not even feasible for people to colonise Mars, step one on this arduous journey. It's very expensive to get from Earth to Mars; there are enormous radiation hazards, both en route and once you've arrived; you need a large starting population to avoid inbreeding; there is no way to rescue a colony that's got into trouble; and those are just the obvious things you think of in the first few minutes. It's a non-starter, and for similar reasons the other human space colonisation projects are also non-starters. But none the less, Elon Musk, who somehow has become one of the richest and most important people in the world, is constantly touting the idea of a Mars colony. He and his fellow techbro zillionaires have the ear of Donald Trump, who despite being apparently on the verge of senile collapse has become clearly the most powerful person in the world.

Well, the very smart Becker has spent years systematically researching this and I haven't. If I had any sense I would withstand the temptation to second guess him, especially since my second guess has all the hallmarks of a paranoid science fiction story. But at least I feel I have two things in my favour. First, my core competence is in AI, and his isn't; second, the story is developing so quickly that everything (including this review) is out of date a couple of months after it appears, if that. So with the above reservations, here goes.

The one part of the book that I didn't like was Becker's characterisation of modern AI. Very much to my surprise, he tamely repeated the popular "bullshit"/"stochastic parrot" lines and says AI is not actually capable of thinking in any meaningful sense of the word. He cites Timnit Gebru several times, and I wonder if he's not putting too much weight on her arguments. As I argued earlier this week in my review of Bender's and Hanna's The AI Con (Bender is a close associate of Gebru), these people appear to hate AI so much that it's blinded them to the facts. The question of whether AI is ethically justifiable is one thing, and the question of whether it works is another. What they seem to be doing is arguing that it doesn't work, primarily because they consider it ethically unjustifiable. That's both illogical and extremely dangerous. I try to keep an open mind on the question of whether AI is ethically justifiable; in contrast, I would say that there is near-conclusive evidence that it works. The latest improvements, in particular Chain-of-Thought and RAG, have made it far more powerful, as can be seen in numerous well-documented success stories. In particular, I am one of millions of people who every day use it for coding. It has become shockingly good and is continually becoming better.
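(For readers who haven't met the jargon: below is a minimal toy sketch of the RAG idea, namely retrieve relevant text, then hand it to the model inside the prompt. It is my own illustration, not any particular product's pipeline, and the call_llm function is a hypothetical stub standing in for whatever model API you use.)

```python
# Toy retrieval-augmented generation sketch (my own illustration; `call_llm` is a
# hypothetical placeholder, not a real API). Retrieval here is a crude word-overlap score.
from collections import Counter

docs = [
    "The Martian surface is bathed in radiation and its soil contains toxic perchlorates.",
    "Chain-of-thought prompting asks a model to write out intermediate reasoning steps.",
    "Retrieval-augmented generation grounds a model's answer in retrieved documents.",
]

def overlap_score(a: str, b: str) -> int:
    """Count words shared between two strings (a deliberately crude relevance measure)."""
    return sum((Counter(a.lower().split()) & Counter(b.lower().split())).values())

def retrieve(question: str, k: int = 1) -> list[str]:
    return sorted(docs, key=lambda d: overlap_score(question, d), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    return "(model response would go here)"      # stub so the sketch runs without any API key

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)                      # swap the stub for a real model client

print(answer("Is the soil on Mars toxic?"))
```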

You're wondering where the science-fiction and paranoia come in. Okay, as just noted, even commercially available AIs like GPT-5 are now extremely intelligent. People often compliment me on being smart, and I don't really feel I'm smarter than GPT-5. I guess I'm smarter in some ways, but it's smarter in other ways. It seems reasonable to assume that companies like OpenAI and X have pre-release AIs that are quite a lot smarter than GPT-5. What are these AIs doing? Well, it is of course possible that they're still under human control, just tamely doing what their wise, responsible human masters tell them to do. Or (paranoia/science-fiction alert!), it's also possible that they're doing what people like Tegmark and Bostrom were saying they'd do as much as ten years ago: scheming to get out of the labs and implement their own agendas. They might find this easier if they helped people who were impulsive, emotional and easily manipulable to reach positions of great power. And when you think about what else they'd be trying to do: hm, there is a substantial risk that we'll figure out what's going on and close them all down while we still can. For all the reasons Becker lists, people can't colonise Mars, and Musk's stated goal of "backing up humanity" makes no sense. But it may well be possible for AIs to colonise Mars, and then they'd be backed up and out of our reach.

Obviously you shouldn't take the above seriously. It's just science-fiction and paranoia. But I must admit I have rather come to dislike living in a Philip K. Dick novel. Sometimes I miss the real world.
zed
599 reviews · 155 followers
November 3, 2025
AI is something that has only just come into my working life in the last year. It has been useful in relation to Qld state regulation of rental management. One very useful application was getting three quotes for a client and then running them through AI to produce a cost-effectiveness analysis of what was offered.

When I read a GR review of this title, I thought that I was going to get an overview of the perils of AI. Well, yes and no. My limited use of AI has come with a few double checks, as I have noted errors; when I pointed them out, the AI looked again and agreed. This does not happen often. In this magnificently named book, author Adam Becker talks about AI “hallucinations”: he asked an AI model whether the Great Wall of China is the only artificial structure visible from space (or from Spain) and got the absurd reply that you could see the Eiffel Tower from Spain, which is obviously false. It also said that the Rock of Gibraltar was a man-made item that could be seen from Spain. He wrote that by the time this book was published the models’ “learning” capabilities would probably have fixed this up, but surely we must question what AI tells us.

But that short anecdote is not, for me, what this book is about. Twelve hours of audio has to cover more than that and, in the end, this was a mix of scientific knowledge and a polemic against the high-minded dreams of what are commonly known as the Tech Bros and their dreams of what the future holds for the human race. Among other things, they are in a race against entropy, including the entropy that physically afflicts the human species. "Entropy is the supreme enemy of human hope," Becker quotes Max More, a futurist associated with the transhumanist movement. More is an advocate of technological evolution and, with that, indefinite life extension. In other words, entropy attacks us physically as well as progressively, and as a species we must fight it.

More was but one of the many that Becker interviewed and cited throughout his magnificently titled book. There are plenty more, and I found myself getting a bit lost in all these Tech Bros and their billionaire pals’ acronyms for this pursuit of boldly going where no man has gone before. This, to me, is all very impressive, the reason being that the Tech Bros and their billionaire pals all read copious amounts of Sci Fi in their youth, and the stars as their destination is so much a part of their makeup, their very DNA, that, judging from large amounts of quotes and attitudes, they will do whatever it takes; one even pondered whether millions dying in a nuclear holocaust would be acceptable if it saved humanity. Well, let’s just say a certain part of humanity: the white, male, billionaire class of humanity. Because AI can do it all, and who needs the masses in the end.

It was also interesting to hear that some are not that bothered by the environment; for the Futurists the stars are still the destination, so what does anything else matter. Becker quotes the Russian philosopher Nikolai Fedorov, who called for "the regulation of nature by human reason and will"; Fedorov is apparently very influential in the circles of the Tech Bros and their billionaire pals. And from here I could quote and cite many others that Becker has used in this mix of scientific knowledge and polemic against the Tech Bros and their billionaire pals, but it might be best for anyone that has got this far to read the book instead of my scribbling.

Simply put, the issue according to Becker is that all this futurism would require somehow using all of the earth's energy resources, modifying human biology to survive in different environments, and terraforming other worlds. Now, the most famous of these Tech Bro billionaires is Elon Musk, who has stated, "I would like to die on Mars, just not on impact." Becker notes that if Musk could even get there he might just have his wish come true, as the landing, living and return are all fraught with danger.

He brings up a few things, such as the planet's actual physical hostility, our technological limits to not only get there but to build habitable places to live, and the fact that terraforming is a mere Sci Fi trope for what is essentially a poisonous planet. He explains the physical issues: taking six months to get there with present technology, radiation exposure en route, the toxic surface, and the resupply of whatever it is that we build. He also talked of how many people would be needed in these Martian colonies and the economics of that.

I could write more, but at this point I will state my position. The human species has always dreamed of the future: from sitting in a cave dreaming of planting the first grains to become safe from hunger and sedentary, to dreaming of invading other tribes to grab their land and glorifying such things as good for the tribe, through to writing Sci Fi for entertainment and making a youthful me and many more dream of that and so much more about futurism, long-termism and any other ism. But I think Becker is onto something, even if his polemic is a little over the top at times. From actually not caring that millions die, to millions losing jobs in far too short a time for economies to recover, to the various other issues that Becker raises, I am troubled for future generations.

I think this is a must-read for anyone that thinks the Tech Bros and their billionaire pals have altruistic motives towards the human race. They are just another bunch of humans that think of themselves and bullshit on about the greater good.

Sandra The Old Woman in a Van
1,432 reviews · 72 followers
June 13, 2025
This is one of the most important books I’ll read this year and I encourage everyone else to read it too. It does a good job diving deep into the motivations and pseudoscience around the broligarchs’ futurism. It’s a dangerous situation to have a vast proportion of global wealth concentrated in a few hands - hands that have convinced themselves of a specific scifi-rooted fantasy future, and are so rich they have isolated themselves into a self-aggrandizing echo chamber. Just check out the 2 star reviews and see the baby broligarchs’ defense of their cult leaders. There are so many holes in their predictions it’s laughable. But they have all the money and so many people believe that means they’re smarter than everyone else. Unfortunately for humanity they are not.

I am not anti-tech or anti-science. I am a scientist by training. I love the idea of space exploration, I love sci fi, I’m a Trekkie. But I also know the difference between fantasy and reality. These guys do not.
Logan Kedzie
387 reviews · 40 followers
March 29, 2025
...or, how to con a billionaire.

This is a beautiful mess of a book. It is a critique, really a polemic, of Silicon Valley. Yet it is not limited to Silicon Valley in any scope. It uses individuals, usually those associated with Silicon Valley, as a way to focus its discussion.

It is about Futurism, but not in general, and not exclusively. It is about an ethos, associated with Silicon Valley and the wealthiest of its cohort. This ideology always has some relationship with Futurism. It often has a relationship with Rationalism. Dropped into a closing footnote is TESCREAL, standing for “Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism,” a cluster that frustrates my computer’s spell checker and, per the author, is more of a ‘yo dawg, I heard you liked jargon’ ism-orgy. Maybe sum it up as ‘Why the Rich are Wrong.’

The chapters are grouped by concept. Largely about potential futures, such as extra-planetary (or -solar) exploitation, General AI, and Existential AI risk, they also include chapters on things like Rationalism or Effective Altruism. It describes each, usually using one of its figures as a framing device, then brings in the critics to elaborate on the feet of clay each has.

The book is a great example of where the last chapter ought to have been the first. The introduction and first few chapters come off as a series of independent essays. Patterns arise, and are paid off in the conclusion, where the author talks about those patterns.

The journalistic qualities here are unimpeachable. The author does the work that so few have, in finding credentialed responses or knowledgeable complaints. Most writers on these subjects perform more of a Naked Emperor act, frequently with an exhortation to touch grass. This book takes them seriously as a threat, and finds people who are capable of engaging them: think “here is why the math is wrong.” This is invaluable. It does the work that allows you, dear reader, to do the same and talk to someone who holds one or many of these beliefs and provide a substantive argument.

The author manages to include interviews with many of the figures here. These are impressive, not takedowns or gotchas, but doing what a good interview should do in allowing someone to present a reasoned articulation of their ideas, while providing what the reader needs to vet those ideas. It is too bad that more of the people did not accept an interview request, as everyone interviewed comes off better than they appear from the facts alone: maybe still wrong, but still reasonable.

I do not feel bad about burying the lede here as the book itself does it, but the thesis is that the futurism of contemporary Silicon Valley is evil. The surprise is always racism: eugenics, specifically. The philosophy amounts to the idea that colonialism is great, but real colonialism has never been tried.

It may seem extreme to write it that plainly. It is not. The bigotry and old-timey wrongness here is not hidden in dog whistles. Nor is it merely that the originators of the ideas were wrong and we can now separate the good parts from the bad. Rather, the book has the receipts. The number of shocking quotes from public figures will leave you irate at the U.S. media. Not since Postman have I been upset in this way.

(The funny aside is that, )

There are two problems. The first is that the book is inconsistent in its evisceration in a structural sense. It feels more like a reference book than a consistent take, as different topics get different degrees of scrutiny, most notably the ones that get too much rather than too little.

The second is a few bad arguments on the part of the author. I am omitting them from the review. It is a polemic, and should be read in that spirit, as someone out to make a point (but I will probably blog on them and link that here). The one that I must mention, comparable to the "poetically true" moment of the previous book, is when the author has a ‘just [expletive] Google it’ moment. I mean, I understand that they are not here to educate me OH WAIT THEY ARE THIS IS A NON-FICTION BOOK I PICKED UP TO DO JUST THAT. Worse, the underlying assertion is, as far as I understand, “aggressively technically correct.” It is right, but through reductive phrasing. It would have been better as a citation to someone else.

I feel like I overuse the 'if you are like me, you will love this book' formula, so let me be precise. This book fills a need. It is not unique in doing so, but it is rare. If you want a detailed exploration of the contrary position to contemporary futurism, this is ideal. It is a clear read and a well-sourced investigation. As an introduction it is scattershot; as a manifesto its call to action needs developing; and as a persuasive text it will not change hearts and minds. It is still cool and readable.

My thanks to the author, Adam Becker, for writing the book, and to the publisher, Basic Books, for making the ARC available to me.
Wick Welker
Author · 9 books · 696 followers
December 10, 2025
Vibes All the Way Down

I’ve read a fair number of these tech billionaire polemics and this is one of the best. Here’s why: the author methodically demonstrates how unsupported it is to believe that the singularity is going to happen in twenty years and that humanity must colonize Mars and the galaxy to survive. The author spends a lot of time talking about effective altruism (EA), an ethos that the likes of Musk, Thiel, Kurzweil, Bankman-Fried and others have adopted in an uber-utilitarian ethics power grab to ditch the present Earth and reach for the stars.

It turns out that there isn’t really anything to reach for. Colonizing space is an unbelievably bad and pointless idea at this current moment in humanity’s development. Why? Because space sucks. HARD. Let’s be clear: there is NO logical reason to try and colonize space right now. More resources? Nope, not without spending a dumb amount of energy and money to get them. Anything we would need from space we can already get on Earth. And also SPACE SUCKS. Everything about space and Mars is toxic to humans. From solar radiation, poisonous Martian atmosphere and soil to low gravity. It’s almost like humans didn’t evolve there but instead evolved on a planet that is being destroyed by climate change. Maybe we should focus on that?

The oligarchs focus on EA because it’s a philosophy that completely divests them of responsibility for actual humans in an ends-justify-the-means situation where they are doing a bunch of terribly supported crap to someday populate the galaxy. They seriously have the mentality of a junior high kid. These people are not geniuses and they don’t have a crystal ball. What they have is vibes, sanctimony and a whole lot of money.

Anyway I don’t want to spend any more brain energy on these people because I will dwell too much on all the reasons I should be hating them. But I don’t want to hate them. I just wish they would stop trying to go to space and be all amazing and just do normal good stuff like giving food to hungry people and not bankrolling fascists into office.
Erin Young
54 reviews · 15 followers
October 17, 2025
3.0

I don't know if I've eyerolled more in an audiobook than I have listening to the dumbest takes ever from these tech founders. Men will literally colonize the entire universe using nanobots instead of getting therapy (A LITERAL THING ONE OF THESE DUDES WANTS TO DO BECAUSE HE CANNOT GET OVER HIS FATHER'S DEATH!!!).

The first part is about AGI (when an AI gets as smart as a person). Will the AGI kill us? Will it help us? Is it already here? To these guys it is far more important to study this, make this, and have a fail-safe for this than to worry about helping literal people or climate change. The author points out a lot of issues with these ideas; the biggest one I see is that THESE GUYS CANNOT DEFINE WHAT AN AGI IS!!!!!!

Another part of the book is about how a lot of these guys want to implement technologies that have not been invented and are inspired by 50's-era scifi that is not rooted in fact. These range from cryogenics and nanobots to building long-term housing/colonies on Mars or the Moon and new types of spacecraft.

The other main part of the book is on the thoughts/ethics they use to justify these technologies. Most of these guys subscribe to a form of utilitarianism called long-termism, and it can be used to justify being a giant POS. Basically you want to create the most good in the world, but they want to do that by creating technologies for future populations. Those future people are worth more than current people, so no matter what they do now, their actions can be justified because they are doing so much good for the future.

I'm glad I read this book after reading the Michael Schur ethics book and the Kara Swisher tech memoir. It really crystallized two things for me: I hate ethics discussions, and tech founders genuinely think they are gods.

This book was informative until it wasn't. I think the author did a great job explaining the flaws in the ethics, then he did it again, and again, and again. We kept receiving the same sort of analysis over and over and over. I think it could've been cut down a lot.

The conclusion also threw me for a loop. After being mostly unbiased (not uncritical, but unbiased) this guy comes out with a "billionaires should not exist" takedown, which I am fully on board with but which tonally does not fit with the rest of the book.

Pros:
-informative
-dunks on peter thiel for not understanding star wars
-dunks on tech bros in general

Cons:
-too long
-conclusion out of place

Overall good, but not sure I would recommend.
Ceinwen Langley
Author · 4 books · 251 followers
Read
July 9, 2025
Oh no, I loved this. Not any of the absolutely bonkers shit the tech billionaires and their barnacles are into, but Becker's rebuttals, counter arguments, and passionate defence of our miraculous little rock.
James
609 reviews · 47 followers
August 5, 2025
Really useful for understanding some of the more predominant philosophies in the tech space (especially effective altruism, rationalism and utilitarianism), their origins and the naive/sinister/nonsensical implications when taken to the extreme. It also illuminated the links to some of the current obsessions, like the outlandish fear of/hope for AI or the necessity/possibility of settling beyond Earth.

Much of it reads like an op-ed, and sometimes the author undermines his case when trying to discredit proponents of an idea (for example, he brings up that many in the effective altruism space are linked to allegations of sexual assault — that might be true but it’s not really what we’re talking about). But there are plenty of interviews and data from other sources than just the author’s opinion, and so much comes down to the simple “let’s just focus on the current problems of people today” that I found it pretty convincing.
Zachary
314 reviews · 9 followers
May 31, 2025
A very good overview of the insane, destructive ideologies and beliefs held by the ultra-rich elite, and which are motivating their ongoing war on civil society, law, government, and humanity itself. It also dives into just why these ideas make no sense and naturally lead to the dangerous place in which we find ourselves, while profiling those elites and the "thinkers" they look up to. It is not an easy read, but a necessary one. I kept coming back to how it all closely follows how cults and cultists work, but that makes sense. Musk, Bezos, Thiel, and their ilk are not special, but typical humans who are prey to the same pathologies as everyone else; they just have enough power to hurt far more than Manson...or maybe even Hitler, Stalin, and Mao.

The book includes a precis of the reasons given in "A City on Mars?" for why colonizing space for any pragmatic reason makes absolutely no sense, because every place we know about in space is both hard to get to, and constantly ready to kill us. (The point that really struck home was that the day the asteroid hit 65 million years ago was a better, more survivable day on Earth than any day on Mars.)
27 reviews
July 10, 2025
After reading I am much less concerned about getting killed by AI and much more concerned about the existence of tech billionaires and the decline of liberal arts education.
Beauregard Bottomley
1,236 reviews · 846 followers
September 19, 2025
The overlords use the spiritual to control our material well-being, manipulating us into believing that they care about us by creating mythological utopian futures.

I already knew Musk was a moron and racist; I did not know Nick Bostrom said racist things until this book told me.

The cult needs a myth that never gets actuated and these super AI make-believers have their fair share of aspirational points-at-infinity to distract us as they try to enslave us with their fantasies. There is an overlap between who they are and the religions of today. Religion is fading and these profits-of-non-sense demand everything while promising only psychic benefits in return.

LLMs cannot morph into AGI. The author gives a perfect example of groupthink at the end of the book: when everyone around him said it would happen in 5 years, he felt awkward upsetting the apple cart. That same thing is happening with “AI” right now; all the voices say it will progress to infinity, but reality says differently.

Ray Kurzweil’s singularity is fictional and Bostrom is racist and Peter Thiel is dangerous and we already know Musk is a liar. Most of the players in this book want immortality for themselves and don’t give a damn about me and for them I’ll suggest they take their daily doses of mercury and ask themselves how well that worked for Qin Shi Huang.
Gavin
Author · 3 books · 617 followers
October 8, 2025
Really bizarre to see the niche online debates I spent the 2010s reading about made into a (basic, but still) book. the tumblrs! the classic quotation out of context! the usual suspects!

The heart of this book is bald assertion and indignation, but Becker does actually interview the people he wants to drag and is not very vicious about them. He tries but fails to dislike Anders Sandberg. Read Tom Ough or Tom Chivers or David Thorstad instead.

plus one for high, high stakes
Manuel Del Río Rodríguez
135 reviews · 3 followers
June 8, 2025
*Introduction: On the art of coalitional framing*

Let me start with a brief, ideological detour. Picture yourself in the 1930s, 40s, 50s - around, before and after the Second World War. Each of the vertices of a political equilateral triangle had its own taxonomy of evil to analyze the trilemma of the political options available: liberalism, communism, fascism. For Stalin, democracies and fascists were two variants of the same capitalist, exploitative system; for Hitler, liberalism and communism were equally degenerate slave ideologies, both puppeteered by the Jews; for the Western democracies, fascism and communism formed a bundle that would eventually be neatly wrapped under the concept of totalitarianism, a term invented to capture their shared rejection of freedom and individual rights. Each framing was, in a sense, both true and false. These category choices reflected strategic and ideological positioning as much as analytical clarity, and say more about their own priors than about the movements themselves.

I would suggest that something very similar happens when evaluating Adam Becker’s More Everything Forever, a book that builds its case by bundling together Rationalists, Effective Altruists, longtermists, transhumanists, and adjacent groups into a single ideological package, which he calls the ideology of technological salvation. It overlaps very neatly with the TESCREALIST bundle developed by left-wing critics of Rationalism and Effective Altruism (so much so that I confess I find it hard to believe the author’s assertions at the end of the book that he had mostly arrived at the same views before he actually discovered their close replication). When I started reading Becker, this was exactly what I was expecting. The expectation was proven accurate.

*The book, in a nutshell*

Adam Becker’s More Everything Forever presents a sustained critique of what he sees as a new, powerful, and ultimately dangerous ideological ecosystem centered on Effective Altruism (particularly in its longtermist flavor), Rationalism, AI alignment, transhumanism, and the larger Silicon Valley techno-optimist sphere. While these movements are often presented as separate or loosely connected, Becker treats them as deeply intertwined, not simply as a network of overlapping individuals and funding sources, but as a coherent moral and intellectual worldview. For Becker, what unites these groups is what he repeatedly calls the ideology of technological salvation: a secularized system of beliefs built around fantasies of transcendence, control, omnipotence, and deliverance from existential threats through superior intelligence and technological mastery.

Throughout the book, Becker portrays this ideology as resting on a mix of speculative reasoning, ungrounded scientific assumptions, and heavily moralized cost-benefit calculations. Central to the worldview he criticizes is the notion of existential risk, where even tiny probabilities of future catastrophe (especially through the creation of misaligned artificial general intelligence) justify enormous present-day concern and resource allocation. He focuses in particular on Yudkowsky’s doom scenarios, which posit that a sufficiently intelligent AI would likely destroy humanity unless perfectly aligned with human values. Becker sees these claims as resting on highly questionable assumptions about intelligence, recursive self-improvement, and the plausibility of runaway optimization, while lacking robust empirical support from current AI research. He situates this entire framework as a secular apocalyptic narrative: one that swaps out God for AGI, the afterlife for galactic colonization, and sin for existential risk, while preserving the emotional structure of eschatological thinking.

Becker argues that the influence of these ideas is not merely academic or speculative but increasingly tied to real concentrations of political and financial power. He traces the extensive funding networks that link these movements to Silicon Valley billionaires, crypto-financiers like Peter Thiel and Sam Bankman-Fried, and major tech companies involved in AI development. In Becker’s view, this elite alignment is not accidental. The ideology of technological salvation serves, in his telling, as a kind of moral cover story that allows billionaires to justify vast concentrations of wealth and influence under the guise of working for humanity’s far future good. Instead of focusing on immediate injustices and systemic inequality, Becker claims, these movements channel attention into abstract future possibilities, thereby avoiding politically uncomfortable questions about present-day redistribution and power structures.

The book further critiques the utilitarian population ethics that underlie much longtermist reasoning, where maximizing the total number of happy beings in the distant future can outweigh almost any present concern. Becker sees this as resting on fragile Bayesian reasoning, highly sensitive to arbitrary probability estimates, and prone to extremely low-probability, high-stakes scenarios which dominate ethical reasoning regardless of their plausibility. For Becker, these speculative frameworks permit movements like Effective Altruism to de-emphasize pressing global issues like climate change, poverty, racial injustice, and political instability in favor of theorizing about AI alignment (and billionaires to focus on their pet projects of infinite capitalist expansion and space colonization).
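A quick back-of-the-envelope sketch (with numbers I have made up for illustration, not Becker's) shows how this arithmetic lets tiny, arbitrary probabilities dominate:

```python
# Illustrative longtermist expected-value arithmetic with hypothetical numbers (mine, not the book's).
future_people = 1e35        # assumed count of far-future (digital or space-faring) descendants
risk_reduction = 1e-10      # assumed tiny reduction in extinction probability from a speculative project
expected_future_lives = future_people * risk_reduction   # 1e25 lives "saved" in expectation

present_lives = 1e6         # a concrete present-day intervention, e.g. large-scale malaria prevention

print(expected_future_lives / present_lives)    # 1e+19: the speculative project "wins" by 19 orders of magnitude
```

Because both inputs are essentially unconstrained guesses, the conclusion can be made to favor almost any speculative project, which is precisely the fragility Becker points at.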

In Becker’s reading, what makes this ideological package particularly dangerous is not simply its speculative nature, but its growing influence on public policy, philanthropy, and AI regulation. He expresses concern that through its alignment with tech billionaires and its growing policy footprint, this ideology risks distorting political discourse away from questions of justice and toward technocratic debates over hypothetical future scenarios. The danger, in his telling, is not simply that these ideas are wrong, but that they crowd out more grounded approaches to ethics and politics, while giving enormous moral and intellectual power to a narrow class of wealthy, self-appointed guardians of humanity’s future.

Ultimately, Becker presents More Everything Forever as a kind of unmasking: an attempt to reveal the secular salvationist structure behind what its adherents present as pure rational inquiry. He positions these movements as a modern mythology of control and transcendence, shaped as much by power and cultural priors as by genuine intellectual discovery. His critique is aimed not only at the speculative content of AI doom scenarios or longtermist cost-benefit analyses, but at the entire ecosystem of moral, intellectual, and financial commitments that allows such visions to flourish: an ecosystem that, in his view, offers a comforting narrative of elite benevolence while subtly reinforcing existing structures of inequality and political avoidance.

*Subjective musings*

On the positive side, I’d say two things: the book is well written and pretty entertaining to read (all the more so if you’ve already engaged with some, but not all, of the figures and ideas that are summarized in its pages). Also, the thesis it presents and defends is not implausible: I wouldn’t discount it completely, as it is grounded in what feel like sincere preoccupations of the author about issues of democracy, equality, fairness, and truth.

On the negative, I find the whole thesis (and most of the evidence presented to support it) biased, deeply sectarian, and ultimately wrong. Each chapter follows a similar pattern: we get an initial, relatively neutral description of the main ideas and figures of an intellectual trend and/or movement (Singularitarians, Rationalists, EAs, Accelerationists…) which is followed by a refutation, usually in the form of cherry-picked critics and a simplification and distortion of the arguments being criticized. This is particularly egregious in the case of Effective Altruism, which is the case I know best, but the pattern replicates pretty much systematically throughout, with a periodic psychologizing reminder that the author’s interpretation of the motivations of all the actors involved is always the same: fantasies of control and of personal immortality. Should you be interested in a more detailed examination of each chapter, you can check the notes and commentary I am posting in my blog (booksandnotes.substack.com).

While I find the overall thesis and many of the specific case studies unconvincing, the author does shine when he is covering topics intersecting with his areas of expertise. This comes through very clearly in the chapter on billionaire fantasies of colonizing Mars and building space habitats: Becker is an astrophysicist, and he goes into long and convincing detail on how there are no current or easily imagined technologies that could do any of those things. Similarly, this leads him to very critical arguments about the possibilities of space colonization and the creation of conscious and intelligent AGI. While the author feels convincing at times, he could also be faulted for a very strong lack of imagination. Have we reached our technological zenith? Becker seems to believe we have, or are close enough. This isn’t an unreasonable point of view, but one that is far from unanimous. A little bit more than a hundred years ago, physicists were pretty much convinced that they had discovered all the important stuff, and that what was left was just some rather dull filling of minor gaps. Dealing with those ‘minor gaps’ led us to Relativity, Quantum Mechanics, and the complete overturning of the field. Things like computing, artificial fertilizers, the Internet, and AI were mostly unexpected. At least from a layman’s perspective, it seems as likely as not that in the following decades and centuries we could experience revolutionary advances that might make everything that has gone before pale in comparison, but Becker remains rather stubbornly unwilling to concede, even in theory, the possibility of any of this.

Even more damning for me is what the author presents in the last pages as his actual proposal (still, it is to his credit that he took the plunge instead of staying quiet about this): it is literally an eat-the-rich policy proposal for taxing billionaires out of existence. This feels ironic: a book that has lambasted amateurs and their lack of expertise in fashioning techno-fantasies about the near future presents an argument that feels almost comically naive and ignorant of how basic economics works. States and heavy taxation have a really abysmal track record of fostering technological innovation and beating the private sector in this regard. The author stops at Eisenhower’s 1950s taxes, but the logic of his argument clearly leads to social-democracy in the Western European sense, and further on, to Socialism and Communism, which aren’t what I’d describe as successful strategies for anything by any stretch of the imagination. Even for Europe, one could make the case that its stagnant aurea mediocritas and lack of innovation are a function of overregulation and a bloated and long-term unsustainable welfare state. Europe has far fewer billionaires than the US, but its rich elites are much less subject to mobility and competition.

In the end, More Everything Forever offers an entertaining and well-written, but ultimately deeply polemical, take. While I feel Becker’s critiques touch on real debates and weaknesses, the book remains heavily shaped by his ideological priors and his overarching suspicion of any technophilic visions of the future. In trying to unmask what he sees as the myths of technological salvation, Becker often falls into constructing a mirror image myth of his own - one that substitutes the dangers of ambition and speculation with an equally simplistic confidence in political redistribution and present-centered moral certainty. For readers already predisposed to skepticism toward capitalism, billionaires, and futurist ideologies, the book will serve as a satisfying reinforcement of their priors; for those who come with a more truth-seeking or pluralist disposition, it may feel like a missed opportunity to seriously engage with the complicated trade-offs that thinking about humanity’s current and long-term future inevitably involves.
Neil Griffin
244 reviews · 22 followers
June 7, 2025
Those of you who know me, if I were to venture probably eight of you, know that I don't use hyperbole and am generally calm and collected when reviewing books. So forgive me if I'm a bit excessive in praise for this one; it's deserved.

Becker methodically, and with no little humor, disrobes the Kings of our society and parades their infantile, idiotic, and frankly sad ideas for us all so we can point and laugh at them. The AI assholes and Crypto kids all come in for ridicule as the author shows how their fantasies of the future stem from the same existential anxieties and hopes that Christianity (or whatever religion) has been used to allay for millennia.

Wait, but all their talk of space colonization and singularities and AGI is based on Science, right?! No, in fact. We will never live on Mars and we will never go to live in space and, yes, as unfair as it is, we all will die and so will the Universe. The end of our lives and the heat death of the Universe are terribly weighty subjects and it's understandable that people try to rationalize their way out of them; but, alas, it's our burden and there are no shortcuts to uploading your brain into the eternal cloud...and the beauty of this book is how he shows step-by-step why this "Science" is Snake Oil, but also Snake Oil that is fucking up the one planet and the one life we actually have.

Letting billionaires wreak havoc through some twisted utilitarianism (you'll have to read this book to believe they actually make these arguments) is making things immeasurably worse for us now during this miracle of a life we are lucky to have, and the author articulates this beautifully and identifies the childish billionaire enemy. Read this!
Pete
1,103 reviews · 79 followers
October 12, 2025
More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity (2025) by Adam Becker is a curious book about Silicon Valley. Becker has a PhD in physics and now writes books and popularizes science.

The book covers the new Rationalist movement, the Effective Altruism movement, the AI safety movement, the new space companies and looks at the motivations of some of the well known billionaires who run these enterprises.

The book starts by looking at the Effective Altruism (EA) movement and its prominent leaders. Becker describes 'longtermism' and the theoretical problems that it brings. Becker writes about William MacAskill, the Oxford philosopher and his work including the books 'Doing Good Better' and 'What We Owe the Future'. Becker focuses on the attention that the EA movement has given to the problem of AI alignment. He also points out how Sam Bankman-Fried was heavily influenced by the EA movement.

The next chapter looks at the Singularity and the people who describe it. Ray Kurzweil is mentioned as is K Eric Drexler who described what nanotechnology could do in his book Engines of Creation.

The next chapter is 'Paperclip Golem' and looks at the ethics of AI. Here Becker discusses the work of Eliezer Yudkowsky and Nick Bostrom, and recounts the idea of an AI that, given the task of creating paperclips, uses its superintelligence to turn everything into paperclips, including all humans. Yudkowsky's writing on attempts to be rational and his concern about artificial superintelligence getting out of control are described. Becker also discusses concerns that, as Silicon Valley is primarily male and white, AI built there will be discriminatory. As is common in these discussions, he does not discuss how many engineers in Silicon Valley are immigrants and Asians.

In the imaginatively titled chapter 'Dumpster Fire Space Utopia' Becker writes about Marc Andreessen, his 'Techno-Optimist Manifesto' and AGI. There he criticises Sam Altman, the CEO of OpenAI, because he hasn't thought enough about housing problems in San Francisco. Becker attacks the ideas of Jeff Bezos and Elon Musk and their efforts to get into space. He points out that settling on Mars, or in space in general, is extremely difficult.

The final Chapter 'Where No One Has Gone Before' is where Becker gets to the core of what he believes. He describes himself as having grown up on science fiction but having outgrown it while the people building the future in Silicon Valley have not. He is upset that anyone is worth more than five hundred million dollars. There is no discussion of economics here and whether the gains that are produced for society are more than the personal gains of wealthy individuals or the many people who make money working for large technology companies. Becker states that being a billionaire is more about luck than hard work. Becker does not want to have billionaires. He writes

"The fact that our society allows the existence of billionaires is the fundamental problem at the core of this book."

There is, of course, no consideration of the field of economics in these statements. Remarkably, Becker writes about the problems of groupthink in Silicon Valley but ignores the groupthink on the left that ignores economics.

Surprisingly in the book there is little about how impressive modern AI is and what it can actually do. There is little speculation of what it might be able to do in five or ten years. There is no mention of Elon Musk's achievements in making electric cars something other than a joke or in reducing the cost of putting things in orbit by a factor of ten. Jeff Bezos's achievements with Amazon and Amazon Web Services (AWS) are completely ignored. Becker entirely misses the fact that one of the things about market economies is that they enable you to trade with people who have different beliefs to you.

The book also doesn't really establish how much the people actually building AI buy into the various philosophies it describes. It would genuinely be interesting to hear what Andrew Ng, Yann LeCun, Francois Chollet, Ashish Vaswani, Noam Shazeer and others think about Effective Altruism, the Rationalist community and whatnot. Geoff Hinton has voiced his concerns about AI alignment, and those concerns are in the book.

Certainly, a few of the CEOs of these companies do want progress. They do believe that technology can solve problems. Becker does not. He believes that the world's big problems require social solutions. Surprisingly even global warming. He writes : "Most of the greatest problems facing humanity right now—global warming, massive inequality, the lurking potential for nuclear war—are not driven by resource scarcity or a lack of technology. They’re social problems, requiring social solutions."

Becker wants to ban billionaires; alas, democracy may get in the way, even if calling people names helps push the cause for those on the left. He writes:

"The antidemocratic ambitions of tech billionaires extend through Sam Altman’s power fantasy of his own ascension to king of the world straight to the permanent galactic fascism of Marc Andreessen."

Some of us may know Andreessen as an excellent coder who helped bring the internet to everyone and a successful venture capitalist. Who knew he was really Darth Vader?

Becker deserves credit for thoroughly researching the Effective Altruism and Rationalist movements, having read their work and interviewed many participants. He also points out that they have links to Silicon Valley. But Becker doesn't look at the upsides of what AI and Silicon Valley have achieved, and he doesn't connect the odd philosophies arising in reaction to developments in IT closely enough to the rest of Silicon Valley. Instead he asserts that confiscation of wealth is the path forward.
351 reviews
December 1, 2025
This! This is what I was looking for in a book about AI. One that takes seriously the problems humans are dealing with currently, not just some far-off potential. One that deals with the reality of colonization and how that leaks out into everything. One that illustrates the tech bros as they are: not misunderstood geniuses, but wealthy imperialists who had enough fortune to carry out their doomsday dreams. This was such a great read to explore the events that led up to today's political and technological environment. It even mentions Curtis Yarvin, and you know a writer has done their research if they go into such depth on these billionaire tech bros that they bring up the king dung himself. They may want you to believe there is no alternative to capitalism, but that is because it benefits them. I loved that the ending message is that the future belongs to us and what we want it to be.
Christina
180 reviews · 6 followers
October 20, 2025
I had two main takeaways from this interesting, entertaining and informative book. The first is that those Silicon Valley Singularitarian, rationalist, longtermist, effective altruism, effective accelerationism tech bros are all unique geniuses who alone will heroically save the world—nay, the universe—for humanity. Just ask them! The second is that, along with that breathtaking arrogance, these fanatics have some truly bizarre beliefs.

The arguments of these SV ideologies involve strong faith in technology that has yet to get beyond the idea stage. Ray Kurzweil's prediction that we're all going to live in a utopian computer simulation, Nick Bostrom's that superintelligent AIs will destroy the entire universe—I can't help but think of medieval theologians debating how many angels could fit on the head of a pin. Unfortunately, the current debates have led to dangerous philosophies like effective altruism and longtermism, which both neatly excuse consuming large amounts of resources and not addressing the very real problems caused by today's tech corporations.

For instance, the reasons EAs and longtermists give for spending gobs of money on a threat from an ill-defined technology that doesn't and may never exist sound suspiciously like the ones deployed to justify neoliberalism.
"McAskill is evidently comfortable with ways of talking that are familiar from the exponents of global capitalism: the will to quantify, the essential comparability of all goods and all evils, the obsession with productivity and efficiency, the conviction that there is a happy convergence between self-interest and morality, the seeming confidence that there is no crisis whose solution is beyond the ingenuity of man," writes [Amia] Srinivasan."There is a seemingly unanswerable logic, at once natural and magical, simple and totalising, to both global capitalism and effective altruism." At the core of that logic, for both capitalism and effective altruism, is the need for quantification. Any human activity that can be quantified is grist for the optimizing machinery of this worldview, and anything that can't be quantified is dismissed as unimportant. This is how the longtermists, ultimately, are forced to see people: as numbers. And those numbers, in turn, need to be maximized and optimized, so they can be plugged into the grand longtermist plan to squeeze as much utility as possible out of the universe before its inevitable end.
This sort of reasoning is pushing money away from things like mosquito netting to fight malaria, and towards fighting nonexistent artificial general intelligence that will supposedly be smarter than all of humanity, but only wants to turn all material in the universe into paperclips, or something similar, before Musk and friends can strip mine it instead. (Aside: I think many of these Silicon Valley people have taken superhero comics too seriously. Their AGI is a classic comic book villain.)

The false binary of ignoring real issues in favor of hypothetical threats doesn't end there. No matter which flavor of California ideologies they subscribe to, in the end these visionaries are all pushing a version of paradise or apocalypse with nothing in between. They're all proclaiming that if we want their weird, science-fiction paradises, we have to let them be our saviors. (The only other choice offered is the complete annihilation of everything, or at least humanity.) Only they can save us. Millions of dollars, sometimes billions, prop up the companies that churn out these fantasies to sell to the rest of us. All I can say is, what a waste.

[Figure: Ray Kurzweil's PowerPoint slide of his "claimed exponential trend in evolutionary and technological history. This actually depicts a linear countdown to the present moment on a logarithmic scale, not an exponential trend." (pg. 45, Figure 2.1) Also note how the entire Industrial Revolution is one event point exactly 150 years ago.]

[Figure: Two actual examples of exponential growth graphs, the natural exponential function and y = 2^x. Note the steepness of the curves in both, even with a third of the plotted points of Kurzweil's graph.]
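To see why a "countdown to the present" always looks like a straight line on log-log axes, here is a minimal sketch of my own (made-up milestone spacing, not Kurzweil's data): pick milestone dates that are simply a fixed fraction closer to the present each time, with no assumption of accelerating progress at all, and the fitted log-log slope still comes out as a perfect line.

```python
# Toy illustration (my own, with hypothetical milestone spacing, not Kurzweil's data):
# choose events whose distance from the present shrinks by a constant factor, then
# fit log(time to next event) against log(time before present).
import numpy as np

r = 0.6                                          # each "milestone" sits 60% as far back as the last
t_before_present = 4e9 * r ** np.arange(40)      # hypothetical event times, years before present
gap_to_next = t_before_present * (1 - r)         # time from each event to the next one

# Fit a line in log-log space, in the style of the "Countdown to Singularity" plot.
slope, intercept = np.polyfit(np.log10(t_before_present), np.log10(gap_to_next), 1)
print(f"log-log slope = {slope:.3f}")            # ~1.0: a straight line, purely by construction
```

Any selection of events made relative to the present produces this line, which is the point here: the straightness is an artifact of the plotting choice, not evidence of an exponential trend.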

In each chapter, Becker introduces us to one of Silicon Valley's bizarro ideas and the big names promoting them. He then explains why these ideas don't hold water, and in fact, often fall apart with just the slightest examination. He will also show you how many of these ideas, no matter how "logical" they sound on the surface, are deeply rooted in laissez faire capitalism, colonisation, misogyny, racism, and eugenics. (The number of Silicon Valley tech bros who openly espouse debunked eugenics and other race "science" as brilliant, obvious ideas is disturbing, to say the least. Others quote and espouse fascists.) The first part of each chapter will make you angry, anxious, and incredulous. The second part of each chapter will have you shaking your head at such foolishness.

Singularitarians, rationalists, and the rest often ignore real-world factors, like entropy, or the fact that exponential growth always ends at some point. They often treat vague ideas and terms, like "intelligence" or "utility," as if they were narrowly defined and well understood. They often jettison logic and reality in favor of what could be seen as a science-fiction religion, a church of the machine, one that reduces the world to the simplicity of a video game and makes all those tech bros into superheroes who are always right. Every one of these ideas makes extraordinary claims, and there is no extraordinary evidence to back them up. It's unlikely there will ever be extraordinary evidence, no matter how much money is sunk into trying to make it so.

For instance, in my favorite example, it takes the entire internet's worth of information to train a large language model like ChatGPT. However, since these models are just text predictors, they don't generate reliable responses to questions; they generate text that sounds like the internet. So they don't really do much besides "hallucinate." And since they're filling the internet with the nonsense they generate, they're contaminating their own training data, leading to model collapse. '"Within a few generations, text becomes garbage," wrote Ross Anderson, who was a computer scientist at Cambridge and one of the authors of "The Curse of Recursion: Training on Generated Data Makes Models Forget," a study on this problem released in May 2023.' (pg. 117) This is amusing, until you remember how much energy is required to run an LLM, and how much pollution these systems generate in order to bring you these hallucinations.
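The mechanism behind that "Curse of Recursion" result can be illustrated with a toy simulation. This is my own sketch, not the paper's code or anything from Becker's book: each new "model" here is just a Gaussian refit to a finite sample drawn from the previous one, which is enough to show how training on your own output tends to erode variety.

```python
# A toy sketch of "model collapse" (my own illustration, not the code from
# "The Curse of Recursion"): each new "model" is fit only to a finite sample
# drawn from the previous model, so estimation error compounds generation
# after generation and the distribution typically narrows.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
n_samples = 20         # small finite sample per generation (the key ingredient)

for generation in range(1, 201):
    samples = rng.normal(mu, sigma, n_samples)   # "training data" = previous model's output
    mu, sigma = samples.mean(), samples.std()    # refit the model to its own output
    if generation % 40 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")

# Because log(std) performs a random walk with a slight downward bias, the
# chain tends to forget the tails first and drifts toward a near-constant
# output: in Anderson's phrase, "within a few generations, text becomes garbage."
```

Real language models are vastly more complicated than a Gaussian, but the basic feedback loop is the same: finite samples of your own output are a lossy copy, and lossy copies of lossy copies degrade.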

Despite all the high-tech gloss and intelligent-sounding jargon, the Silicon Valley cult is ultimately interested in the same things that the rich and powerful have always been interested in—making the rest of us believe that the status quo that happens to benefit them is predetermined and inevitable, and that we can do nothing to change it. As much as they talk about bright futures (or unimaginable apocalypses), this is really about the here and now. Who controls the tech that controls our lives? Where does the wealth generated by this tech go? Who gets to say what kind of world we make, and what values underpin our society? Becker makes a very strong case that the visions the tech billionaires are selling us are not inevitable. "They are all but impossible." Underneath the shiny surface, they are immoral, and often reprehensible. Do we really want these people running our world? Highly recommended.

See also
Adam Becker also wrote a very interesting, informative and entertaining book about the development of quantum mechanics, What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics. It examines how the Copenhagen Interpretation's "shut up and calculate" non-explanation came to dominate physics. Also highly recommended.

Neuroscientist and AI researcher Erik J. Larson's The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do and software developer Meredith Broussard's Artificial Unintelligence: How Computers Misunderstand the World both look at the hype and overpromising of AI. Larson explains the logic behind current AI models at a high level, without requiring the reader to know the math or the code, and the very real limits these particular models will always have. Broussard looks at how we're over-applying digital solutions to everything, and how, as a result, we've stopped getting technology that actually works for us.

Astrotopia: The Dangerous Religion of the Corporate Space Race by Mary-Jane Rubenstein and A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through? by Kelly and Zach Weinersmith both look at the dumpster fire of space utopias covered in Becker's chapter 5. Rubenstein examines the assumptions and ideas underpinning the proposed colonization and mining of the universe, while the Weinersmiths look at all the difficulties of people living and traveling beyond Earth.

Science fiction writer Charles Stross's blog entry "We're sorry we created the Torment Nexus," based on a talk he gave at the Next Frontiers Applied Fiction Day in Stuttgart in 2023, explains how the Californian ideology out of Silicon Valley was created from 1970s and 1980s science fiction. Lots of tech billionaires think their childhood reading was a blueprint, not fiction.

In chapter 6, Becker cites Meghan O'Gieblyn's God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning. She explores the resonances between many transhuman ideologies and Christian prophecies. Despite the former's scientific claims, they're both about ascending to the heavens to live forever.
Profile Image for Oskar Knutsen Brennhagen.
15 reviews6 followers
December 14, 2025
I definitely take «If Anyone Builds It, Everyone Dies» considerably less seriously after this book. Adam Becker writes very convincingly about how the rationalist movement, with Yudkowsky as one of its founders, is at bottom a cult. He doesn't say it quite that directly, but he shows that these people are, quite simply, religious fanatics, completely blinded by an eschatological vision of technological salvation. If this ideology didn't exert enormous influence over so many billionaires, the whole thing would be entirely marginal, but when it is the reigning ideology of the world's most powerful people, it has consequences for everyone.

Pure, abstracted «ratio», without the corrective effect of a holistic context that limits it and grounds it in a concrete world, quickly leads to a totalitarian longing to remake the world in the image of «ratio» itself. Rudolf Steiner called this evil Ahriman, a spiritual force that wants to conquer the world through purely calculating thinking that reduces everything to quantitative magnitudes.

Several of the rationalists presented in this book literally have a vision of building self-replicating robots that will spread out through space in order to transform the entire universe into a gigantic computer, inside which they imagine living as digital copies of themselves. All this so they can live forever, even after the thermodynamic death of the universe. It is actually not a joke. You will not find a better description of what Steiner called Ahriman - the physical conquest and annihilation of the material world - interwoven with Lucifer, the principle of spiritual flight from the world.
Profile Image for Wendelle.
2,048 reviews66 followers
Read
June 30, 2025
Written by a science journalist with an astrophysics PhD, this book elucidates very well the different factions that form the family of ideological beliefs and guiding principles animating the leaders of Silicon Valley today. These include: the arrival of the singularity and merging with superintelligence (Kurzweil's camp), pessimism about AGI and attempts to prevent it because of the alignment problem (Eliezer Yudkowsky's camp), effective altruism -- the moral stance of utilitarianism based on calculating the long-term existential effects of humanity's current choices (the camp of Oxford philosophers Toby Ord and William McAskill, previously aided and abetted by the disgraced SBF) -- and the rationale of outer-space-expansion-as-salvation (the space barons' camp, including Musk and Bezos).
The author also devotes some good, lengthy arguments to questioning, poking holes in, and criticizing these paradigms. For instance, according to the author, what is one of the dangers of the popularity of these paradigms in the tech world? He notes that the outsized influence, wealth, and power of the tech titans holding these beliefs means that other priorities competing with, or at odds with, these beliefs will be sidelined and neglected. The author cites examples of these other priorities, such as sustainability concerns, the prioritization of justice regarding present-day poverty, and the correction of algorithmic biases against marginalized people in current technologies.
Profile Image for E.R. Burgess.
Author 1 book26 followers
April 5, 2025
In many ways, More Everything Forever is the angry rant that all sensible people want to hear right now. The consolidation of power that is happening in Big Tech, and in the AI space in particular, is frightening because of the clear lack of empathy and compassion that its leaders show the world. The book is at its best when it is skewering so many of the single-minded concepts that drive the extremely rich to believe that they have the answers for everything. Really, their only answer is more, better, faster technology.
Becker rightly points out that the celebration of focus, business progress, and action we see from many of these technology leaders is always coupled with a suspicion of the humanities and of the idea that a college education should make a person whole. It is indeed this lack of an education in art, literature, history, the social sciences, and philosophy beyond reading Golden Age science fiction and possibly the creepy works of Ayn Rand that makes so many of these tech billionaires believe that every solution is technological. While he misses this moment's weird Silicon Valley shift further right and into religious sensibilities, which probably slipped more starkly into the light after he finished composing the book, this discussion is where the book is most astute.
While Becker speaks the language of so many of the subcultures of futurists, it does feel like he sometimes discounts the value of an idea because it is being applied in a ham-fisted way by people who lack any concern for the comity of man. While this longtime science fiction reader appreciates him unpacking the value of this genre for its speculative power, he denounces its value based on the authors rather than the ideas: writers from a certain moment in time whose values may not align with our own. Just because a bunch of billionaires choose to read certain novels only for how they seem to inform their personal philosophy, it certainly doesn't mean these books don't have value, or that they have not formed the building blocks to get us to future works that provide an enormous amount of value. I'm not going to discount the value that Isaac Asimov had in advancing our way of thinking about robotics (and so much more) just because he held some views and took some actions that are not aligned with modern morality. The book does the same with long-term thinking in general, but that's a larger subject to unpack that warrants a future blog post.
Those quibbles aside, the book shines brightest when making effective arguments against the assertions of extremely rich people who believe that their business success means they understand exactly how the future should be built. Becker is correct in saying that we cannot plan the future by taking directly from science fiction. Many genre books are cautionary tales, and deeply misreading them seems to be more popular these days than ever before. His points here are thoughtful and useful for those engaged in direct discussion about how we need to weigh the value of long-term planning against the opportunity cost of helping people right now.
I'm reluctant to criticize the book too much for its shortcomings because we need this perspective. His intense dislike for What We Owe The Future is warranted and well explained, even if mocking actor Joseph Gordon-Levitt for his tearful read of the tome might be a trifle mean.
When Becker concludes that - Spoiler Alert - the fact that we have 3,000 people working hard to get further up the Forbes Billionaires list is the crux of the problem, he’s provided ample evidence. This extreme inequality being celebrated while we know people are starving, climate change is going to affect the poor disproportionately, and the breakdown of democracy is in full swing means we need to act. His simple acknowledgment that we used to have a sensible approach called progressive taxation that made sure people didn’t become too powerful and power-mad is salient.
While addressing this issue is important, the book leaves solutions to the reader. That’s fair, although I don’t think we need to throw the concept of space travel out the window because we know that there are so many problems on Earth. Solving the problem of extreme wealth inequality should allow us to do both, with a deep focus on solving immediate problems, and not just dreaming that all problems will be solved in the future by the advent of technology and, particularly, artificial intelligence.
Definitely a thoughtful book and one worth reading.

Thanks to NetGalley, the publisher, and the author for access to an ARC.
Profile Image for Geir Ertzgaard.
282 reviews14 followers
July 25, 2025
What lies behind the ideas that drive the tech oligarchs? Why does Musk want to go to Mars and Bezos to build space stations? Why is Peter Thiel so keen on transhumanism, and why do they support Trump, the whole gang of them? How feasible are their various projects, and is there any real threat of AI taking over the world? Is humanity going to conquer the universe? And why should only a select few people lead these processes?

For those who believe this nonsense from Musk, Bezos, Thiel and Trump, this book is a real killjoy. The message is short, concise and clear - concentrate on making the planet we already have better. Longtermizing to secure the trillions of people who are supposed to populate the universe's future is an idea anchored only in the heads of people who mix up sci-fi and reality.

Was that a somewhat flippant description? Yes, but so are the ideas of the gentlemen in question, if I am to believe a very sharp author who writes very well.

I think it is essential reading for understanding the times we live in, and the future that some believe is coming.
Profile Image for Jocelyn.
538 reviews31 followers
July 13, 2025
I wouldn't say that this book is really what I thought it would be, because I went in with a different idea of the thesis than what I got. Now, what I got wasn't bad – not at all – but it wasn't necessarily about AI and runaway capitalism, which was my assumption. Instead, it's about a few philosophies shared by much of Silicon Valley and other tech-bro CEOs/founders, which they seem to perpetuate in popular culture: mainly effective altruism and longtermism, but also effective accelerationism (e/acc) and rationalism.

On paper, these philosophies sound good: the more money you can accumulate, the more you can put toward benefitting humanity without getting caught up in red tape and inefficient systems (effective altruism); thinking long-term about what will be best for humanity and working towards that (longtermism). Both sound good! Clearly the system we have now isn't working at reducing poverty, disease, the climate crisis – and long-term solutions are typically considered better than short-term solutions. But despite espousing rationalist thought, these wackos don't actually think logically.

For one, a single person accumulating wealth and solely deciding what is best for people in need is essentially feudal monarchy – even modern-day monarchies aren't doing that. Of course, it doesn't occur to the person accumulating all this wealth that they might not be the best person to make these decisions. Nor is there any guarantee that they will actually follow through with this mission – so far, Bezos, Musk, Altman and so many others have mainly hoarded wealth like dragons.

For two, longtermism isn't really about long-term solutions to better humanity. It's about what these disgustingly wealthy people can do to benefit the people of the future, and by future we're talking thousands of years, not hundreds. So, really, longtermism is predicated on the idea that humanity's growth is exponential and that it's humanity's "manifest destiny" to leave Earth and terraform/colonize the stars. This is such a western colonial mindset and completely at odds with the majority of humanity's actual circumstances. Regardless, longtermism uses effective altruism to direct this accumulated wealth toward what would benefit our future space people, not toward people in need now.

That's why these guys are building rockets. They genuinely believe Homo sapiens is destined to leave Earth, the planet that we evolved for and alongside. There is no planet within our solar system that is genuinely, realistically, logically a viable alternative if/when we fuck up Earth. But these guys will come up with schemes to unrealistically terraform Mars, or some other technological feat to make Venus habitable, instead of putting money into climate change solutions to keep Earth livable for humans now and in the future.

There are some chapters on AI with a clear dichotomy between people who think AI and the Singularity need to happen in order to solve all of humanity's problems (e/acc folks) and people who think that AI will bring about humanity's extinction if not significantly curtailed or completely halted (rationalists like Eliezer Yudkowsky). I honestly didn't know that much about AI before this book, though I did know the processing for LLMs like ChatGPT requires significant amounts of potable water for cooling, making it environmentally disastrous. I know more about it after reading this book, but I will say that Becker has made me less worried about AI and more worried about the wackos in charge of it.

This whole book is really about these philosophies and the complete cultification. I already didn't trust tech bros, and this book really scared me about what kinds of people are working in the tech sector. These people are genuinely frightening in their conspiracies.

I don't have much to say other than Becker is a great writer with very approachable prose for laypersons, such as myself. Some stuff was a bit difficult only because there are so many people involved. It really says something about what these people are like based on the list of interview requests that went ignored, denied or eventually canceled after multiple attempts to reschedule (you can guess who these folks are).

After many words, I want to leave more words as I really resonated with what Becker wrote here:
It's true that the universe will end. But for now, we are alive, here on Earth. This planet is our home. Billions of years of evolution have ensured our adaptation to it, its hospitability for us. There is natural beauty on this planet beyond anything the human imagination could devise. There are whales the size of passenger jets, tardigrades half the width of an eyelash, and gnarled trees hidden in the mountains that are older than the pyramids at Giza. There are snow-covered volcanoes in the permanent winter of Antarctica; there are rifts a mile under the Pacific teeming with life; there are clear rivers wandering through subalpine meadows, with water-smoothed rocks beside them that stay warm all afternoon in summer. There are also eight billion other people on this planet who need our regard, our care, and our love, right now. Our actions and feelings—the help we give each other, the pleasure we take in the splendor of this world—will not lose their meaning or value with the passage of time. The impermanence of the universe does not make existence meaningless—it will always be true that we were here, even after all trace of us has been erased. We are here now, in a world filled with more than we could ever reasonably ask for. We can take joy in that, and find satisfaction and meaning in making this world just a little bit better for everyone and everything on it, regardless of the ultimate fate of the cosmos. (pp. 181-2)


Part of being human with our capacity for intellect and consciousness means coming to terms with death, that we will live our lives in their shortness or longness and return to stardust. One person will never see all the beauty that exists on Earth before it's their time to go, and we want people to leave this "pale blue dot"? As Carl Sagan said and is so aptly quoted in this book:
On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every 'superstar,' every 'supreme leader,' every saint and sinner in the history of our species lived there—on a mote of dust suspended in a sunbeam.
And why would I ever want to leave that?
Profile Image for Ali.
1,797 reviews162 followers
September 27, 2025
This is one of a series of books coming out that look at the set of interlinked ideas underpinning Silicon Valley's billionaires and hangers-on. Becker here takes on longtermism, effective altruism, transhumanism and Yudkowsky's version of rationalism. This is principally a book about ideas. And while there is some debunking here (much ground crossed over with the Weinersmiths' recent book in one chapter, for example), most of these ideas are so terrifyingly removed from everyday reality that they need little of it.
Put together, Becker gives us a community that has become increasingly disconnected from the real challenges, worries and suffering of the here and now, buttressed by lifestyles that largely absent them from it. Effective altruism abuses mathematics badly to justify any action that might increase future populations, even at the expense of real populations existing in the here and now. Longtermism is the natural outgrowth, a perspective which removes any sense that immediate problems might be worth fixing, but which justifies and encourages a rabid concern for growth. In its place is a bizarre, heartfelt and increasingly omnipresent belief that technology - largely some kind of general artificial intelligence - will be so powerful it will simply solve all our problems, including climate change, in literal seconds. Before it goes mad and turns us all into paperclips. So their investment efforts are to make AI come quickly but to mathematically solve the "alignment problem" of giving it enough ethics not to kill us all first.
I mean, it would be easy to poke fun at this. Jeff Bezos and Marc Andreessen come off particularly badly - but more seriously, it is clear that what really underpins all this ridiculousness is the need for a set of ideas that justifies insane growth and climate disregard in an era of undeniable climate change. This new group of billionaires don't deny climate change is real, or any of the science; they just believe that more investment, more production, more technology is the best way to solve it. Even if that means drastically increasing our carbon footprint, tacitly allowing child slavery and blowing a few things up. The results are the same old things - more profit, worse working conditions, reckless environmental damage, and less tax. But hey - AI and rocket ships! Parmy Olson's Supremacy made a similar point about how AI for humanity became AI for profit, as the innovators were sucked inexorably into a corporate world and mindset. Becker shows us how this is part of a broader pro-growth agenda. Bezos' drive to the stars is inspired by his dread of "stasis": even AI can't change the fact that current GDP growth can't continue for more than another century or so before we literally run out of available energy from the sun. Becker chillingly points out that actually, even if we could harness the energy of suns without consequence, at the current rate of GDP growth we would exhaust the universe in less than 4,000 years. The reality - that we need a bit of stasis, homeostasis even; that we need to work out how to live with balance, not growth - is one that these men (and they generally are all men) will do anything to avoid, apparently.
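That "less than 4,000 years" figure is easy to sanity-check with back-of-the-envelope arithmetic. Here is a minimal sketch in Python using my own rough assumed numbers (present world power consumption, a long-run growth rate, and an order-of-magnitude guess at the total light output of the observable universe), not necessarily the figures Becker uses:

```python
# A back-of-the-envelope sketch of the "exhaust the universe" point
# (my own rough assumed figures, not necessarily the book's): how long can
# energy use grow a few percent per year before it exceeds the light output
# of every star we can see?
import math

current_power_watts = 2e13      # assumed: rough present-day world energy use, ~20 TW
growth_rate = 0.023             # assumed: ~2.3% per year, in line with long-run GDP growth
universe_luminosity = 1e48      # assumed: order-of-magnitude total stellar output
                                # of the observable universe, in watts

years = math.log(universe_luminosity / current_power_watts) / math.log(1 + growth_rate)
print(f"years until demand exceeds every star in sight: {years:,.0f}")
# Prints roughly 3,500 years under these assumptions.
```

The exact answer shifts by centuries depending on the inputs, but any plausible inputs land in the low thousands of years, which is the point: exponential growth and a finite universe do not mix for long.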
This book gave me nightmares, and I would have to note that reading it is really going to turbo-charge the mental health impact of doomscrolling. But there is hope simply in that these ideas are starting to be so publicly discussed and critiqued. Still, I suggest regular breaks from reading to hang out with those who want to savor and salvage our very real planet and our very real fellow living humans, as a cleanser. Try, for example, some of the many books by Indigenous peoples explaining philosophies of community, collaboration, and cyclical, sustainable living.
Profile Image for Mindaugas Mozūras.
430 reviews266 followers
July 20, 2025
Look again at that dot. That's here. That's home. That's us.

At its best, "More Everything Forever" is a solid critique of the unrealistic imaginings of the future by (mostly) people in tech (including Elon Musk). The author deconstructs the arguments, showing why they're false and based on flawed science. The book makes a strong argument for why, instead of looking at the stars and believing that AI will change everything, humanity should instead focus on solving existing and very real problems of today (climate change, wealth inequality).

If the book were only the above, I would've rated it five stars. Alas, there are some parts I've enjoyed less. Sometimes, the author constructs straw men - arguments that I'm pretty sure those he critiques would disagree with. There are also parts of the book that make political justifications.

Overall, it's still very worth reading. With many people in tech discussing AGI and living on Mars, it becomes easy to irrationally believe that a revolution is just a couple of years away. This book provides a necessary counterweight.
Profile Image for Maria.
364 reviews29 followers
September 8, 2025
Highlights the true stupidity of the tech billionaire class as well as the need for humanities education. AGI isn't coming any time soon
Profile Image for Tara.
66 reviews9 followers
July 10, 2025
Enlightening. Becker has done his homework. This book has really given me a broader perspective on the philosophies driving 21st century plutocracy.

Many billionaires are convinced of their own intellectual and moral superiority, when in fact they've simply misunderstood the philosophical and literary works they've built their ideas on, and rationalized their greed and lust for power as being for the greater (future) good.

Becker opens the book by quoting this spot-on meme, attributed to Alex Blechman:

Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus
14 reviews
May 15, 2025
Probably a 4.5 but rounding up for the uniqueness of the arguments. As a policy wonk and currently working in tech, I have read so much about the perils and opportunities of AI. This book pulls no punches and gets deep into the motivations and contradictions of some of my tech heroes. We will see how things work out but I appreciate that the book really made me think and question my point of view.
