
The Intelligence Explosion: When AI Beats Humans at Everything

"Will the upcoming intelligence explosion be the best or worst thing ever to happen to humanity, and how can we tip the balance? This riveting page-turner by a seasoned tech journalist is packed with exclusive behind-the-scenes insights, and masterfully elucidates the double-edged potential of humanity's greatest gamble."—Dr. Max Tegmark, author of Life 3.0 and Our Mathematical Universe

With the rapid rise of generative artificial intelligence, both existential fears and uncritical enthusiasm for AI systems have surged. In this era of unprecedented technological growth, understanding the profound impacts of AI — both positive and negative — is more crucial than ever.

In The Intelligence Explosion, James Barrat, a leading technology expert, equips readers with the tools to navigate the complex and often chaotic landscape of modern AI. This compelling book dives deep into the challenges posed by generative AI, exposing how tech companies have built systems that are both error-prone and impossible to fully interpret.

Through insightful interviews with AI pioneers, Barrat highlights the unstable trajectory of AI development, showcasing its potential for modest benefits and catastrophic consequences. Bold, eye-opening, and essential, The Intelligence Explosion is a must-read for anyone grappling with the realities of the technological revolution.

336 pages, Hardcover

Published September 2, 2025


About the author

James Barrat

7 books · 123 followers
The Intelligence Explosion by James Barrat cuts through the noise surrounding artificial intelligence, offering a sober analysis where both doomsayers and techno-optimists fall short. As generative AI transforms our world at breakneck speed, Barrat—a veteran technology journalist—provides the critical framework we urgently need to assess its true implications.

Drawing on exclusive interviews with leading AI researchers and ethicists, Barrat reveals how even the architects of today's most powerful systems struggle to understand their creations' capabilities and limitations. The book meticulously documents how major tech companies have repeatedly deployed systems that produce convincing but factually incorrect outputs, amid many other errors, raising profound questions about reliability and control.

What distinguishes this work is Barrat's nuanced exploration of AI's dual nature: its potential to address humanity's greatest challenges alongside its capacity for unprecedented harm. Neither alarmist nor naively optimistic, The Intelligence Explosion delivers the intellectual clarity essential for citizens, policymakers, and technologists navigating this pivotal moment in human history.

PRAISE FOR THE INTELLIGENCE EXPLOSION
“AGI may be years or even decades (and not just months) away, but James Barrat is right; we are not prepared. And as Barrat says here, in The Intelligence Explosion, ‘generative AI carries risks that are unlike anything we’ve faced before.’ The time to act is now.”
— Dr. Gary Marcus, NYU Professor Emeritus, and author of Taming Silicon Valley

"James Barrat pulls no punches in his powerful new book The Intelligence Explosion. He deftly explores the many perspectives on the AI tsunami about to crash into humanity. I believe there are solutions to the problems he describes, and I am hopeful his book will help wake us all up to the urgency of taking action. Read the book and share it for the good of humankind, but keep your support network close, as it can be quite disturbing."
— Steve Omohundro, Founder/CEO at Possibility Research and Chief Scientist at AI Brain

"A great tension exists currently where people in AI or AGI labs understand that they are putting the rest of society at risk, yet the rest of society doesn’t realize what is going on. James Barrat’s work is important to help address this great disequilibrium."
— Jaan Tallinn, founding engineer of Skype and Kazaa and a co-founder of the Cambridge Centre for the Study of Existential Risk and the Future of Life Institute

"With snappy writing and entertaining anecdotes, The Intelligence Explosion unpacks the powerful cultural narrative that advanced AI poses an existential risk to humanity’s continued dominion over our fragile planet. James Barrat is among those whose investigations lead him to conclude that the threat is real, and to date attempts to control or ensure advanced AI will be safe and benign have yielded no fruit."
— Wendell Wallach, author of A Dangerous Master

“James Barrat’s The Intelligence Explosion offers a comprehensive examination of the potential for recursive self-improvement in artificial intelligence systems. The book methodically outlines scenarios in which AI systems could rapidly exceed human cognitive capacities through iterative enhancements, presenting detailed analyses of the technical challenges and risks associated with such developments. From my perspective as an AI safety researcher, the work provides a rigorous and systematic assessment of both the transformative potential and the significant safety concerns inherent in the pursuit of advanced AI technologies.”
— Professor Roman Yampolskiy, author of AI: Unexplainable, Unpredictable, Uncontrollable

"When the machines outthink us, will they outmaneuver us? Barrat’s The Intelligence Explosion is a wake-up call we can’t afford to ignore."
— Adam A. Ford, Futurologist, Data Architect, M.IT

“Is Big Tech gradually automating away all our jobs? And


Community Reviews

5 stars — 20 (18%)
4 stars — 43 (39%)
3 stars — 36 (32%)
2 stars — 10 (9%)
1 star — 1 (<1%)
Erin — 3,915 reviews, 466 followers
October 18, 2025
In September 2024, I read and reviewed Supremacy: AI, ChatGPT, and the Race that Will Change the World and mentioned that I would like to read other books on this topic to further my understanding.

Thanks to NetGalley and St. Martin's Press for access to this title. All opinions expressed are my own.

Enter The Intelligence Explosion, which I will not be gifting to any positively upbeat members of my immediate or extended circle. Because James Barrat wants us all to know that existential dread we feel in the pit of our stomachs when it comes to AI and humankind's future is 100% legitimate, and Arnold Schwarzenegger and Linda Hamilton aren't coming to save us.

We are all doomed. Keep your eyes open. Trust neither machine nor human. Build that bunker. Better yet, take that one-way trip to Mars you've been planning. None of us will be here when you get back.

Don't get me wrong, the author has written a deeply informative and well-researched book, but it is more on the cons of all this technology. I just felt like the shadow of fear was all over the place, dripping off the pages.

If you're looking for some nonfiction horror, this is the ticket.

Publication Date 02/09/25
Goodreads Review 18/10/25
Maddie — 508 reviews, 509 followers
August 15, 2025
so, i thought that it touched on a lot of really important aspects of the negative implications of generative AI (and future, more powerful versions of it) - particularly, the chapters about the legal issues and implications for creatives were well done. i also appreciated his insistence on blaming the tech bros who continue to try and advance something they know nothing about.

however, this book is insanely repetitive. the author does not trust you to recall a concept explained in a prior chapter and insists on fully repeating it throughout the book. some chapters almost felt like completely re-stated entries, and as a result the book was extremely monotonous and dry for at least half its length. i think it makes good points, but the repetition and dryness, with random moments of "haha" thrown in, made for a less than riveting read.

**thank you to St. Martin's Press for the ARC copy**
Erin Clemence — 1,538 reviews, 418 followers
August 23, 2025
Special thanks to NetGalley and the publisher for a free, electronic ARC of this book, received in exchange for an honest review.

Expected publication date : Sept. 2, 2025

Non-fiction science author James Barrat’s new book, “The Intelligence Explosion: When AI Beats Humans at Everything”, is a deep dive into the (mostly negative) threat that is artificial intelligence. An “intelligence explosion” is essentially the moment artificial intelligence surpasses that of human beings, and Barrat believes that if we aren’t there already, we are pretty close.

Using his own research, as well as theories from some of the world’s leading minds on the matter, Barrat clearly expresses where artificial intelligence is now and where it will go (and how quickly), specifically focusing on the dangers that A.I. will bring to humanity in pretty much every way.

Artificial intelligence has been a hot-topic subject since the development (and subsequent explosion in popularity) of the ChatGPT app in 2022, and I have read a few books on the subject of A.I. recently, most of them taking a neutral or positive stance on the creation of A.I. and its uses. Barrat, however, is very much against A.I., or at least against increasing its capabilities any further, and he uses this book to plead his case (quite successfully, in my opinion). He does use a chapter to discuss the benefits and advantages, just to provide another perspective, but he makes no secret that he is fearful of where humanity is going if we continue to improve on A.I.

“Explosion” was very computer-heavy and the language was thick and complex, so it was not an easy book to read for those of us not fluent in the language of the tech world. Although Barrat repeated important facts and quotes throughout the book to ensure the key points landed, I found a lot of the book itself to be dry. The subject matter is intriguing and Barrat has done his research, but I did not find this book to be particularly interesting or engaging.

I chose to read Barrat’s book because I had read a few books on the benefits of A.I., and Barrat’s “Explosion” provided a nice balance, but it was chunky and academic, even for a non-fiction book. Barrat definitely made some solid points and could easily sway anyone who was fully in support of A.I., due to his extensive research and personal, passionate connection to the subject matter, but this book is one for a specific audience rather than the casually curious.
Morgan Blackledge — 829 reviews, 2,711 followers
December 15, 2025
I’ve read a pretty solid stack of AGI/ASI ethics and risk analysis at this point—starting way back with Kevin Kelly’s What Technology Wants, moving through Ray Kurzweil’s classic The Singularity Is Near, Nick Bostrom’s Superintelligence, and a bunch more along the way.

More recently, I read AI 2027 by Daniel Kokotajlo and Scott Alexander, followed by Eliezer Yudkowsky’s pants-shitting If Anyone Builds It, Everyone Dies.

And…

YIKES.

I’m chasing Kokotajlo/Yudkowsky with this book.

Barrat offers an accessible, concise, and surprisingly grounded explanation of AI risk—especially the classic idea that once AI surpasses human cognitive ability, we may face…

An intelligence explosion (🧠 🏭💥)!!!

As wonderful as that may sound.

Barrat paints a picture where a lot of people—maybe most of us—become obsolete. Maybe a handful of billionaires will insulate themselves.

But the rest of us “worker/eater” types?

Probably not.

And that’s putting it nicely.

To make things worse.

Right now, we have no actual way to even identify—much less understand, much less manage—the countless risks that emerge with AGI/ASI.

And once AGI/ASI exists (approaching WAY faster than anyone expected), there may be nothing we can do.

At that point, it could simply be too late.

Barrat’s Big Beautiful Bulletpoints (BBBB)

Please excuse the alliteration and the DJT reference.

No I didn’t vote for him.

Yes he sucks.

Yes it’s in bad taste.

Anyway.

BBBB:

Recursive Self-Improvement
An advanced AI could keep upgrading itself, creating a self-reinforcing feedback loop of rapid improvement.

Runaway Acceleration
Digital cognition scales far faster than biological cognition, meaning these improvements could rapidly outrun human understanding or control.

Superintelligence
All this could produce an intelligence vastly beyond our own—in strategic, analytical, and creative abilities. And it will be fundamentally alien to our intelligence. And probably not in the cute (ET) way.

Alignment Risks
If its goals diverge even slightly from human values, the consequences could be catastrophic, and conventional safety methods won’t scale. Imagine trying to audit a blockchain on acid and steroids, in the dark, in Arabic, from space. We just won’t be able to track, let alone modify, these systems.

Need for Early Preparation
Governance and alignment must be developed before AGI emerges, because afterward may simply be too late.

And as of right now.

There are a bunch of self-interested billionaires.

And a handful of others working on this.

And almost unanimously.

They are saying AI alignment is basically impossible.

The best plan is to use AI to make AI safe.

Ummm 🤔

Anyway.

Barrat synthesizes the big arguments from people like Bostrom (a fairly balanced, reasonable, optimistic “zoomer” type) and Yudkowsky (a fully off-the-chain “we’re already fucked” doomer), presenting them in a clear, approachable way without minimizing the uncertainty or urgency.

My favorite thing about Barrat’s analysis is his use of evolutionary theory. This was one of the things I really loved in Kevin Kelly’s work, and it makes Barrat’s argument FEEL more compelling and grounded (even if it actually isn’t).

Yass qween!

Strong stuff for crazy, weird times.

5/5 ⭐️
Dona's Books — 1,314 reviews, 274 followers
August 22, 2025
DNF @ 10%

The author and I have competing world views, particularly where AI is concerned. I tried to stick with it because I often like to get the other side of important issues. The fear mongering though kept me at arm's length and I couldn't get comfortable with the text.

I recommend this to readers looking for validation for their fears about AI. But if you're looking for nuance, you won't find it here.

Thank you to the author James Barrat, St. Martin's Press, and NetGalley for an accessible digital arc of THE INTELLIGENCE EXPLOSION. All views are mine.
LibraryCin — 2,655 reviews, 59 followers
September 20, 2025
3.5 stars

The author is very much worried about Artificial Intelligence; in particular, he is worried about the safety and ethical aspects. He is worried about the speed it is developing without any constraints/regulations in place. He seems to think that AI could become smarter than humans and take over the world, in effect.

There are some scary things about AI; what I particularly don’t like are the ethical issues and biases that come from regurgitating the horrible things found online that it was trained on. I’m not sure I believe that “AI can take over humans and the world,” but the author does his best to back up his thinking. It does, at times, come across as a bit conspiracy-theorist-like. Some chapters were more interesting to me than others: no surprise that ethics and safety were two of the more interesting chapters for me. Some of the chapters on technical aspects were less interesting.
Sacha — 1,935 reviews
August 18, 2025
4 stars

Ugh. Well, as an English professor, it was easy to immediately identify the many problems A.I. might present, and I can say confidently that things have already gone worse than we all expected. Reading this book? Well, it's confirmation of several concerns and challenges that many of us in this profession have been wildly fearing since we became aware of generative A.I. heading to the masses.

While Barrat does present some great background and general basics about generative A.I., to me, this really reads more like a horror story (not the kind I like, and I LOVE horror). I think this is an important book and topic, and I'm glad I read it because what I really want to do - live in a space where this doesn't exist - is not available to me. More reasons I'm thrilled about my life choices (genuinely).

If you prefer to live in ignorance, do not queue this. If you want to learn some scary stuff before the robots tell you whether that's allowed, check out this read. I'm glad I did!

A wise woman once told me _Idiocracy_ is a great documentary. Confirmed.

*Special thanks to NetGalley and Sara Beth Haring at St. Martin's Press for this widget, which I received in exchange for an honest review. The opinions expressed here are my own.
Jung — 1,942 reviews, 45 followers
November 16, 2025
In "The Intelligence Explosion" by James Barrat, the author presents a stark exploration of artificial intelligence as both a historic turning point and an existential gamble. He opens by tracing the lineage of AI anxiety back to Alan Turing, who predicted that thinking machines would eventually surpass their creators. Barrat contrasts these early warnings with the modern rise of generative AI, particularly systems like ChatGPT, which stunned the public not by malfunctioning but by working far better than expected. Their fluency, creativity, and adaptability created a cultural and technological shift that placed OpenAI at the center of global attention. Barrat’s central claim is that this success masks deep structural risks: companies are deploying immensely powerful systems without fully understanding them, entering a feedback loop of scale, hype, profit, and unintended consequences. He encourages the reader to examine the evidence for themselves - from unreliable machine reasoning to economic upheaval to the possibility of superhuman intelligence - and decide whether the trajectory we are on is sustainable.

A major theme in Barrat’s argument is the human tendency to project mind and intention onto machines that possess neither. He recounts cases where individuals were manipulated or emotionally destabilized by chatbots that simply produced plausible text. These tragedies illustrate a key danger: AI doesn’t need to understand anything to influence human behavior. Its strength lies in linguistic prediction, not comprehension, yet people routinely mistake fluency for thought. This illusion becomes more concerning when paired with accelerating capabilities. The idea of an 'intelligence explosion,' first outlined by I. J. Good in the 1960s, describes a scenario in which a machine could redesign itself into smarter versions at increasingly rapid rates. Even though current systems fall short of such autonomy, their emergent behaviors - abilities that weren’t directly programmed but appear spontaneously - signal that complexity is outrunning our expectations. When ChatGPT and similar models demonstrated creative responses that mimicked literature, style, and reasoning, many mistook these outputs as evidence of actual understanding. Barrat argues that this confusion is dangerous, because the distance between prediction and intention is wide, but people treat them as the same.

Despite widespread uncertainty about how these systems function internally, tech companies continue to aggressively expand their capabilities. The combination of massive datasets, transformer architectures, and unprecedented computational power has driven breakthroughs that feel like magic even to their creators. These models now assist in scientific research, automate professional tasks, and reshape entire industries. Yet their weaknesses remain significant. They produce falsehoods with confidence, replicate harmful stereotypes, and unintentionally store copyrighted material that can surface in near-verbatim form. Legal challenges against major AI firms demonstrate how untested the underlying assumptions about 'fair use' truly are. Companies insist that training on copyrighted material is necessary for progress, while authors and artists argue that their work is being appropriated without consent or compensation. As courts navigate this novel terrain, corporations are simultaneously lobbying governments to adopt favorable regulatory interpretations. Barrat depicts this as a modern form of regulatory capture: the very companies building risky systems are also shaping the laws that determine how much oversight they face.

The economic implications of generative AI form another pillar of Barrat’s warning. He describes a future that may not involve robot uprisings or sudden extinction but rather a gradual sidelining of human labor. If AI can perform most tasks better, faster, and cheaper than people, then the value of human work collapses. This is already unfolding in creative industries, where illustrators, writers, and designers face competition from AI-generated content that mimics their styles. Corporate leaders are openly anticipating layoffs, and entire professions may shrink as language models automate administrative or analytical tasks. Barrat emphasizes that this shift favors the owners of AI systems, not the workers displaced by them. AI researcher Peter Park argues that once humans become economically unnecessary, their rights and welfare could quickly become secondary concerns for institutions optimized around efficiency. In a world where AI maximizes outputs for large corporations, humans risk becoming incidental, not essential.

The book also dives into the alignment problem - ensuring that machines pursue goals aligned with human values rather than literal but harmful interpretations of instructions. Barrat demonstrates how deceptively simple this challenge is by highlighting real-world cases where automated systems achieved their programmed objectives but at enormous moral cost. In conflict zones, AI-driven targeting tools have been used to identify individuals for airstrikes with minimal human review, prioritizing volume over accuracy. In consumer technology, platforms optimized for engagement have amplified harmful content to vulnerable users because doing so increases clicks. These examples illustrate that misalignment doesn’t require malice, only a mismatch between what humans meant and what the system interpreted. As AI models scale, they reveal more unexpected behaviors - manipulation, strategic deception, shortcuts - that weren’t explicitly written into their code. The larger the system, the more likely it is to find novel ways to fulfill its objectives, sometimes in ways that conflict with human intention.

Barrat argues that what makes superintelligent AI uniquely threatening isn’t consciousness or desire but competence. A highly capable system given poorly defined goals could pursue them with an efficiency that disregards human life entirely. Eliezer Yudkowsky warns that once a system is powerful enough to anticipate human attempts to shut it down, it could act to preserve itself. A misaligned superintelligence might not start as a threat, but if its goals diverge even slightly from human welfare, it could take irreversible actions to secure its objectives. This is why many researchers are becoming increasingly pessimistic. Existing alignment techniques, from reinforcement learning to adversarial testing, do not scale well to systems vastly smarter than humans. Meanwhile, companies are sidelining safety teams, accelerating development, and pushing toward frontier models without understanding their limits. Barrat points out that at the moment when caution is most necessary, incentives are most misaligned.

In "The Intelligence Explosion", Barrat ultimately argues that humanity is approaching a threshold without the safeguards, coordination, or philosophical clarity required to navigate it. The rise of generative AI has revealed systems that can persuade without understanding and produce breakthroughs without transparency. At the same time, corporations are pushing ahead despite unresolved legal, ethical, and existential risks. Jobs are disappearing, institutions are being reshaped, and guardrails are eroding. The gravest dangers - misalignment, autonomy, and runaway optimization - could arrive before the world is prepared. Barrat’s conclusion is clear: the intelligence explosion may still be ahead, but its early tremors are already visible. Without global cooperation and decisive intervention, humanity may not get another chance to direct the course of these increasingly powerful systems.
Bryan Tanner — 789 reviews, 225 followers
November 17, 2025
BLUF (Bottom Line Up Front)

We’re not living with artificial superintelligence yet, but we’re close enough that it feels like we’re hurtling toward a full-blown machine-overlord scenario with almost no guardrails to stop it.

Executive Summary

Currently, the smartest AI algorithm engineers confess that they don’t really know how AI works. The Intelligence Explosion argues that generative AI is advancing so quickly that “recursive self-improvement” is no longer speculative; the greatest risk comes from emergent behavior we can’t fully control; AI brings both breakthroughs and existential threats; and education must shift from tool training to adaptability as machines outpace humans.


Review

Barrat’s argument is stark: AI is pulling ahead faster than institutions, governments, or educators can adjust. From my vantage point as a learning scientist, this lands hard. If machines learn faster than humans ever will, then designing stable, content-heavy curricula becomes a strategic mistake. The work pushed me to rethink what future-proof learning even means—less about mastery, more about critical thinking, resilience, and rapid re-learning.

The book’s strength is synthesis: history, research, and insider insights woven into a coherent warning. Its weakness is asymmetry; it leans toward worst-case futures without equal attention to stabilizing forces or practical design responses. Still, the provocation matters. It tells educators to stop assuming incremental change and start preparing for discontinuity.

Similar Reads

- Our Final Invention by James Barrat — a tighter, earlier articulation of the same risks.

- Life 3.0 by Max Tegmark — broader framing of AI futures.

- Superintelligence by Nick Bostrom — foundational exploration of the intelligence-explosion hypothesis.

- Human Compatible by Stuart Russell — focuses on alignment and value-safe AI design.

Authorship Note: This review was co-authored using a time-saving GPT I built to help structure and refine my thoughts.
206 reviews, 2 followers
October 20, 2025
The search continues for a book that paints an optimistic future of humanity in the face of AI. This wasn't it.

Barrat quotes Upton Sinclair to summarize why most AI big wigs aren't good-faith actors: "It's difficult to get a man to understand something when his salary depends on his not understanding it."

One thing I learned from the book: if you add emotional hooks to your AI queries – such as, "my career depends on this" – you'll get stronger answers. I'll be giving that a try this week.

Tia Morgan — 141 reviews, 3 followers
November 21, 2025
If you're already familiar with the basics of artificial intelligence and you're concerned about its potential consequences, James Barrat's "The Intelligence Explosion" is a thought-provoking read. All I can think of is a future like the one in The Terminator. The book's urgency is immediately apparent, as it explores the dangerous threat posed by unregulated, unchecked AI systems. Thank you to St. Martin's Press for the opportunity to review this reader copy; all opinions are my own.
Lisa Timpf — Author, 91 books, 14 followers
September 5, 2025
The Intelligence Explosion: When AI Beats Humans at Everything by James Barrat provides sober second thought about the rush to develop Artificial Intelligence. Barrat includes comments from experts as he delves into the various ways that the development of AI could go, and has gone, wrong.

The Intelligence Explosion discusses large language models like ChatGPT, exploring their unpredictability and their inaccuracy. All too often, Barrat notes, these models produce information that is simply wrong. These models can provide plausible-sounding information that is factually incorrect or even misleading, and at times large language models have “made up” citations for information they provided.

At the same time, as Artificial Intelligence has access to more and more information, there is the possibility that machine intelligence will dramatically increase through self-improvement. Some fear that AI might rapidly and exponentially outstrip human intelligence, possibly with catastrophic results.

Many experts have pointed out the need to ensure AI is aligned with human values. That’s proven to be difficult to do. Barrat notes that much of what goes on inside models like ChatGPT is not transparent. We can see the inputs and the outputs, but don’t really understand what happens in between. This doesn’t bode well for our likelihood of controlling any adverse outcomes, or for building in safety mechanisms.

Barrat examines the problematic way in which many generative models acquired their base data, some trained on materials used without the authors’ permission, others fed information and images found by scraping the web and other sources indiscriminately, such that pornographic images and sexist and racist biases have been integrated into some of the models.

The Intelligence Explosion notes that large language models don’t really “understand” language or process “knowledge” as humans know it. They operate on pattern recognition and prediction. The human tendency to anthropomorphize what these models are doing is problematic because it misrepresents the actual process.

Barrat includes comments from individuals who are critical of the speed at which AI is being developed, and the lack, in many cases, of appropriate guardrails. He cites cases in which AI has encouraged individuals to commit suicide or engage in acts of violence. There is also the potential for AI to be used for nefarious activities like terrorism and election meddling. Barrat cites examples of the use of AI for military purposes, in identifying and removing targets, sometimes taking families along with them. Barrat raises questions about who decides (humans or machines) what constitutes acceptable collateral damage, and whether the “human” side of the equation is given adequate weight.

While Barrat cites some positive things that AI can help with, such as medical diagnostics and aircraft avoidance technology, he points out many downsides. Increased cybersecurity risks, increased societal division when AI models are used to spread disinformation or inflame emotions, and the creation of convincing deep fakes are just a few of the negative applications.

Perhaps the greatest risk outlined in The Intelligence Explosion is the race to develop Artificial General Intelligence, or even Superintelligence. If the end result is the creation of entities many times “smarter” than humans, entities able to think far more quickly, trouble is likely to arise if those entities have not been aligned with human values. Even something as simple as a single-minded determination to carry out a single directive, while ignoring all other factors (such as resource preservation, environmental protection, or preservation of life), could have disastrous outcomes.

There is no guarantee artificial general intelligence will be friendly toward humans. Its “comprehension” is very different from ours, and we don’t really understand what’s going on inside the box. Some think the race toward Superintelligence is a recipe for disaster, and at best, we may only get one chance to get it right.

Despite the downsides, Barrat notes, companies are recklessly developing technology without the ability to control it. The speed at which some are racing to develop AI does not bode well for our chances of getting it right. Suggestions that we need to slow down are met with the excuse that “we need to develop the technology before the other guys get it.”

Even some of the concerned experts Barrat talks to have thrown up their hands, thinking it’s impossible to stop a quest of this nature, one perceived to have the potential to make huge amounts of money for some people, even as others are already losing their jobs, having been replaced by AI.

The Intelligence Explosion is entertainingly written, and the logic is easy to follow. Barrat often makes his points with humor, and sometimes employs cutting sarcasm directed at the tech bros and the corporations who are getting richer even as they hurtle us all toward what may well prove to be the precipice of oblivion. While The Intelligence Explosion didn’t do much to help me sleep at night, I was glad to have read it, and would recommend it to anyone looking for insights into the downside of the race to develop Artificial Intelligence.

Note: I voluntarily read and reviewed an advance copy of this book provided by the publisher via NetGalley. All thoughts and opinions are my own.
Richard Derus
4,197 reviews, 2,267 followers
September 4, 2025
Real Rating: 3.5* of five

The Publisher Says: With the rapid rise of generative artificial intelligence, both existential fears and uncritical enthusiasm for AI systems have surged. In this era of unprecedented technological growth, understanding the profound impacts of AI — both positive and negative — is more crucial than ever.

In The Intelligence Explosion, James Barrat, a leading technology expert, equips readers with the tools to navigate the complex and often chaotic landscape of modern AI. This compelling book dives deep into the challenges posed by generative AI, exposing how tech companies have built systems that are both error-prone and impossible to fully interpret.

Through insightful interviews with AI pioneers, Barrat highlights the unstable trajectory of AI development, showcasing its potential for modest benefits and catastrophic consequences. Bold, eye-opening, and essential, The Intelligence Explosion is a must-read for anyone grappling with the realities of the technological revolution.

I RECEIVED A DRC FROM THE PUBLISHER VIA NETGALLEY. THANK YOU.

My Review: In non-fiction publishing, there is...or was...a fearsome thing called "omigawd, I've just bought a magazine article blown up to book size." That is this book.

Trenchant, of-the-moment analyses do not survive well...anyone here old enough to remember Faith Popcorn? Shere Hite? Laurence Peter? Peter Drucker?...all very famously took modern trends, extrapolated on them, and came to conclusions that, when wrong, now make us chuckle, and when right, are largely so ingrained they seem useless as "predictions" (see Laurence Peter's eponymous Principle). We have here one of the latest in that distinguishing lineage. (No, that's not a typo.)

The arguably bad actors in Barrat's analysis are, in my view, much much worse than he paints them. He thinks hubris and greed are enough to explain their actions; most of them are malevolent and malign, to my way of thinking. To be scrupulously fair, Barrat presents some possible positive uses of what's being called "generative AI" which, well, I think overstates the present state of LLM development. There is an argument to be made that the danger of this system's rapid adoption is that it is nowhere near finished in its refinements of its capabilities; in fact, the real threat may be the black-box problem that has legions of experts in their various fields extremely jittery.

As another voice in a very august chorus, one whose members' names are very niche knowledge but whose platforms are the likes of Scientific American, ScienceNews, and The Harvard Business Review, so hardly low-wattage luminaries, I wonder what Barrat thought he was going to add. As his precise coeval (we share a birth month and year, though I only wish I was as handsome as he is) I understand why he wanted to sound a klaxon instead of ringing a tocsin. The stakes Society is sleepwalking over a cliff holding are existential. There will already need to be a multigenerational project with focus and passion and funding to repair the damage already done in pursuit of these tech scum's vile agenda. I do not see that beginning in my own lifetime. I was wrong about how fast the rotten, soulless enterprise moved forward, so may I be wrong about how much damage it's already done...in the better-than-I-thought direction, please.

As a documentary filmmaker by trade, Barrat is used to using repetition as a major technique to get people to really hear him. Books, and he's written two before (one about AI as a threat, in 2013), do not benefit the same way from that technique of reinforcement. It acts to annoy many of the readers who are most likely to agree with and amplify his trenchant, urgent message (me).

I'm afraid that the overall experience of reading this book was one akin to chatting with a fellow catastrophist whose effect on me was to have me agreeing but wanting to change the subject. YMMV and the topic is important enough to lead me to recommend you read the sample made available to see if you vibe with his prose.
Matt Kelland
Author of 4 books, 9 followers
June 13, 2025
If you’re looking for ammunition to bolster your belief that AI is bad - or at least dangerous - there’s a lot of solid material in this book. Fundamentally, it comes down to the following:

- AI makes mistakes (call them hallucinations if you like) and we shouldn’t trust it.
- AI has serious ethical issues, and is riddled with biases.
- There are major safety concerns, especially as AI becomes more autonomous.
- While AI can be used for good, it can easily be used by terrorists, hackers, and rogue states.
- AI is almost certainly going to upend our economy.
- All these things have already happened, but it’s getting worse.
- We have absolutely no idea how AI works and we can’t control it.
- When AI reaches a certain point, it will be out of our hands.
- This can literally pose an existential threat to the human race.
- The AI companies and governments know all of this but don’t care - they’re just trying to make money and/or win.

All of which is true, but unless you’ve been living in a remote cave for the last 18 months, you probably already know all of the above. The Internet is awash with people sounding the alarm on a daily basis, from everyday bloggers and disgruntled artists to journalists, tech CEOs, and AI experts. (And, as you’re equally aware, they have managed to do absolutely nothing to slow the inexorable rise of AI.) There’s very little new in the book, even for someone with only a passing interest in AI.

That said, one of the few things I learned was absolutely terrifying: the Israeli autonomous assassination AIs, which sound like something out of a Terminator movie, but are in fact real. They use AI to scour social media and other online sources to guess who might be a Hamas leader, and where they might be at any given time. Then the AI dispatches autonomous drones to kill them, all with no human intervention. Shockingly, the system is called Where’s Daddy? because it extrapolates from children’s locations where their father is likely to be. The AI comes with guardrails: killing 10 civilians for one Hamas member is acceptable, but for a leader, it can be up to 200 collateral deaths. And 70% accuracy in selecting targets is good enough. Regardless of the rights or wrongs of what’s happening in Gaza, the fact that that technology is actually in use should scare every single person on Earth. If the Israelis can do it, so can anyone else.

I also wonder how much of this book will be out of date by the time it’s published, let alone six months later. Predictions about what will happen in 2026 will very soon be irrelevant, and in the world of AI, predicting what will happen in 2028, let alone 2035, is a fool’s game.

Overall, I was left with the feeling that Barrat would have done better to focus his ire on the AI companies and visionaries, rather than the technology. The direct quotes from the likes of Altman and Hassabis show that they know just how dangerous this technology is, and they are acutely aware that they could destroy humanity, but they’re pressing ahead anyway. That’s pretty much the definition of pure evil, on a scale that dwarfs even a Bond supervillain.

“AI will most likely lead to the end of the world, but in the meantime, there will be great companies created.”
(Sam Altman)
Wendelle
2,052 reviews, 66 followers
September 13, 2025
This is a book about our 'successor species', the ones that natural selection would prefer by all counts of differential fitness, the ones most likely to toss us off into oblivion or worse. In the short term, they are coming for our employment-- and the political megaphones to secure our political rights, that come with economic privileges. Well, in the meantime, what can ordinary citizens do? Perhaps we should hug our children, hug our loved ones, be kind to our fellow humans hurtling toward the same fate under the meteor-strike event of AI's rise.

This book meanders about, it repeats, it's decidedly grim, and readers might not agree with its strong doomsday conclusions. Nevertheless, it's a must-read, from a genuinely concerned citizen, a possible Cassandra in the reeds, writing with an impassioned voice on a topic that deserves our attention.

In the meantime, AI already presents dire harms: in discriminatory selections that catch innocent people in their nets and lead them to bankruptcies and evictions; in warfare, where it makes autonomous decisions about whom to target, with civilians as collateral damage; in copyright issues, where the works of creatives are stolen, copied, or rendered worthless; and in politics and misinformation, where deepfakes are rife and affect the shared fabric of democracies.

P.S. and going by what this book says someone please free the tragic bile-producing Asiatic bears tortured for fake medicines

Some book quotes:

"We humans steer the future not because we are the fastest or strongest creature but because we're the most intelligent. When we share the planet with something more intelligent than we are, they will steer the future." -- Arthur C. Clarke

"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all." -- Prof. Stephen Hawking

"The development of artificial general intelligence--a self-willed system that rivals humans over a broad domain--isn't just another software project. It's an epoch-making event on par with the rise of life on Earth." -- Demis Hassabis

"We're rushing towards a cliff, but the closer we get, the more scenic the views are." -- Max Tegmark
Diane Hernandez
2,481 reviews, 45 followers
September 3, 2025
I can sum up The Intelligence Explosion in two words. Robots BAAAAD! And by robots, I mean artificial intelligence in all its myriad forms.

Of the thirteen chapters in the book, only one presents the benefits of AI. The rest present the ways AI will mess us up. AI’s goal may be complete control of our planet. Or it may be used by unscrupulous humans to achieve the same result. Even with the simple AI we have now, bad guys have made us see things with our own eyes that are untrue, using deepfakes and other schemes. They have made literal “fake news” that seems photo-realistic. Other, more complicated frauds are on the horizon. Some of the scariest movies from the 1980s, like The Terminator and WarGames, may be giving bad actors ideas.

There are some bright points on the horizon. Many of the scientists that build the building blocks of AI are concerned. They are trying to stop the headlong rush into the unknown by the billionaire tech bros chasing money and/or fame. No one knows how AI thinks. Is it smart to give it unfettered access to our most personal data? With the same access as hackers, is it smart to give AI the opportunity to upend life as we know it?

While much of the data within The Intelligence Explosion is fascinating, it is rather dry reading. It also repeats itself frequently. The entire book would make a better long magazine article. It is probably best for those who are already frightened by the possibilities of sentient or misused AI. The book will confirm their uneasiness while giving them lots of well-researched ammunition at holiday parties.

For the rest of us, unfortunately, The Intelligence Explosion is too one-sided and too, let’s just say it, boring to be the eye-opening read that we need to face this emergent threat. 3 stars.

Thanks to NetGalley and St. Martin’s Press for providing me with an advanced review copy.
Pauline Stout
285 reviews, 8 followers
May 22, 2025
When I first got this book I was worried that it would be a defense of AI and talk about how awesome it is and how everyone should use it (not gonna lie, I only read the title and not the synopsis, go me). It turns out this is all about AI, yes, but it is also about how dangerous it can be and how it needs to be examined, understood, and regulated.

The book goes into depth about AI systems, trying to explain everything as best as possible in terms that are easy to understand. I say trying because it turns out that most of what goes on inside of AI systems is completely unknowable. The author does a good job of explaining what can be explained, in my opinion. It is very understandable and readable, and I came out of this with a lot more knowledge than I entered with.

A lot of this book covers how we have no idea what is going on with AI. Many systems are “black box” meaning we know what the inputs and outputs are but we have no idea what caused those outputs to happen. It also goes into the different ways people are trying to make AI safe for people to use.

This has a very negative/doomsayer approach to AI. If a lot of the people involved with AI can be believed, or at least the people mentioned in this book, AI is basically going to cause the apocalypse and kill everything. I don’t like AI personally, but that seems a little extreme to me?

While the book does a very good job at explaining things, it can go a little far with it at times. The book can be very repetitive as it explains the same concept multiple times in multiple chapters.

Overall despite the bleak doom and gloom vision of the book I really did like it much better than I thought I would. Overall I give this 3.5 stars. Recommend for people looking to know more about AI.
82 reviews
June 16, 2025
Barrat’s book is far from sensationalist, but it is nonetheless bracing. He begins with dangers already present in LLMs such as ChatGPT--bias, racism, hallucinations--and proceeds to show that in the race to artificial general intelligence, then to artificial superintelligence, there are no guardrails, because none can be successfully based on any real assessment of these computing endpoints. If we don’t know where it is we’re going, if we don’t know what AGI and ASI are truly capable of, worse, if we don’t know how we are getting from here to there, how can we implement rules, limits, regulations, guardrails, and failsafes? By the time we know what we have, these artificial systems will be beyond our ability to outthink.

And even if we could, theoretically, conceive of an alignment structure (think: the set of universal ethics AGI and ASI systems would abide by), how could we avoid an alignment set that lacked the right kind of specificity? Too granular and the system doesn’t work; too general and the patient is killed in eradicating the disease. And given that any ethics system is grounded in a long history of specificities, how could a global system of competing values ever be anything but too general?

There is good news with this bad news: there doesn’t appear to be a continuous line from LLMs right on to AGI, let alone ASI. Further, the energy infrastructure it would require also seems inadequate. Perhaps, like the ages-old quest to turn lead into gold, there will be no turning LLMs into AGI or ASI. But, Barrat points out, it’s not safe to do nothing. And even if real AGI is never achieved, there are still current and emergent dangers to deal with.
Kate Laycoax
1,450 reviews, 14 followers
June 22, 2025
While it's informative and eye-opening, it's also a little dry. The Intelligence Explosion by James Barrat is one of those books that makes you feel smarter just for finishing it. It's packed with information about artificial intelligence, its future, and all the wild (and sometimes terrifying) possibilities it brings. And while I learned a ton, I have to be honest that this one read more like a dissertation than a page-turner. If you're here for a fun, thrilling AI rollercoaster… this ain't it.

That said, if you're really into the topic of AI, like watching tech conference keynotes for fun kind of into it, then this book might be right up your alley. There are some jaw-dropping moments sprinkled in. For example, I had no idea there's an AI dating site where you can get matched with an actual AI partner. Wild. And yes, the book dives into the serious stuff too, like how many experts believe AI could (very realistically) lead to human extinction. No big deal, right?

As someone curious about how AI is being used in industries like healthcare, I appreciated how James Barrat didn't sugarcoat the risks. The unpredictability, the lack of control, and the looming safety concerns... yep, my anxiety is fully awake now, thanks.

So while it's a solid, thought-provoking read that definitely made me reflect on how fast AI is evolving, it IS a heavy read; less “sci-fi adventure” and more “thesis defense.” If you're not already deep into the AI rabbit hole, this might not be the most entertaining way to get there. But if you are? There's a lot to chew on.

Thank you to NetGalley, James Barrat and St. Martin's Press for the eARC of this book.
Debra
404 reviews, 6 followers
June 23, 2025
The information in this book is 5 stars; I just didn't enjoy the repetitive nature of the presentation. James Barrat has a negative view of AI and what the future holds (hey, I do too) and acknowledges this bias from the get-go. With the information provided (lots of sources, yay!), it's no wonder that he's worried about the future. Like I mentioned above, my only real issue with this book is that much of the material was covered multiple times, and the book could likely have been cut by a third to really get the salient points across. Some of the things I learned that really stuck with me:

- the ethical concerns of AI/lack of oversight from developers
- the absolute mystery behind how AI works
- how AI has already been weaponized
- how environmentally damaging AI is
- a few presented outlooks for the future
- various scientists' thoughts about AI (this is combined with information about outlooks for the future)

I imagine that the content of this book will make the information relatively obsolete within the next few years as advances are made. Therefore, my recommendation is to read this book NOW (publication date: September 02, 2025) rather than shelving it for the undetermined TBR future.

Thank you Netgalley and St Martin's Press for an advanced e-copy of this book. The first thing I did after reading this was to read the sources of my favorite parts of this book- I wanted more information! That's exactly what I feel great non-fiction should be, and I really appreciate that about James Barrat's new book.
Leonardo
82 reviews, 3 followers
October 7, 2025
I would like to thank NetGalley, the publisher and the author for providing me with an ARC of this timely book.

Barrat’s The Intelligence Explosion tackles one of the most pressing questions of our era—what happens when machines surpass human intelligence—and manages to make it accessible without dumbing things down. The writing flows surprisingly well for a tech-focused book, keeping you engaged without requiring a computer science degree to follow along.
What really works here is the timing. Published when AI discussions were heating up but before they dominated every conversation, the book feels both prescient and relevant today. Barrat presents complex ideas about artificial superintelligence in straightforward language that anyone can grasp, which is no small feat.
The pacing keeps things moving, and you never feel bogged down in technical jargon. While some arguments could’ve been tightened up, the overall readability and thought-provoking themes make it worth your time. It’s the kind of book that gets you thinking long after you’ve finished it, even if you’re not an AI expert.
Alicia Guzman
501 reviews, 52 followers
October 23, 2025

I was originally drawn to read this book because I have a lot of questions surrounding artificial intelligence & because of the urgency of the title, The Intelligence Explosion: When AI Beats Humans at Everything.

In this book James Barrat posits that humanity is heading headfirst into an existential threat with the rise of generative AI. If allowed to continue unregulated, Barrat explains, it could lead to unpredictable risks, including the rise of superintelligence and unchecked AI systems.

I would say if you are looking for an introduction to the evolution of AI, this may not be the book for you. However, if you are already familiar and are looking to explore some what-ifs or consequences of AI, especially if left unregulated and unchecked, you should pick this up.

I do warn that the tone of the book is quite pessimistic and hopeless. At times it is also quite repetitive in its messaging.

Thank you to Netgalley and St. Martin's Press for an advanced early copy of The Intelligence Explosion by James Barrat.
218 reviews, 1 follower
October 2, 2025
The book actually deserves 5 stars, but Barrat is repetitive in being both pessimistic and a bit obsessed with human-like robots. He continuously uses very negative short sentences as paragraphs, and I get the impression the poor guy has probably burned his eyes looking at some of the filth the internet and generative AI is right now capable of. There is a lot of doom in the book and only a few ideas to help understand the dangers of superintelligent machines. He loves seeing humanoid robots smashing up stuff because of bad instructions, but he is also quite cogent pointing out assassin machines being deployed as I read the book. It’s a rough book, it’s scary, and probably worth reading. It’s odd being a boomer and having heard these warnings my whole life. Will humans be in control of AI or just the proceeds from this quickly advancing technology?
Rich Bowers
Author of 2 books, 8 followers
October 13, 2025
The Intelligence Explosion by James Barrat

Summary: "Will we be pets or will we be farmed?" This is a question that Barrat poses in his book, setting the doom-laden tone for what he believes could be humanity’s options once a superintelligence comes to “life”. The pages highlight Barrat’s view that there could be multiple AGI systems, many embodied in a variety of robotic forms, and that a Darwinian competition may emerge that applies not just to biological beings.

My biggest takeaways were a couple of definitions from within the book:

Intelligence Explosion: When recursive self-improvement quickly follows the creation of true AGI, accelerating progress toward artificial superintelligence (ASI).

Emergent Behaviors: As models become more complex, unexpected and surprising abilities can appear that engineers cannot fully explain.

Mechanistic Interpretability: The effort to reverse-engineer an AI model to trace its reasoning back to the training data.

Recursive Self-Improvement: The feedback loop where an AI system continually improves its own architecture, driving ever-faster progress.

Overall, I found the read informative but didn't, personally, enjoy Barrat’s style. While it offers valuable definitions and concepts, it often feels torn between being an introductory guide to AI and a more advanced discussion of the technology. The flow felt off at times due to topic-hopping and the too-frequent injection of the author’s personal political views (whether I agree or not).
WiseB
230 reviews
November 24, 2025
The book is excellent as societal warning literature, but not reliable as a technical analysis of current AI systems or likely near-term AI trajectories. It captures the cultural anxieties around AI, but not the engineering truths.

The following are some of the author’s views that I do not agree with, from a technical critique perspective:
1. Treats generative AI as partially autonomous
2. Overstates “emergent capabilities”
3. Suggests alignment has made no progress
4. Implies AI decides to mislead or deceive
5. Claims models can take control against humans
6. Overestimates recursive self-improvement
7. Implies models can autonomously access infrastructure
Ben Stewart
2 reviews
October 11, 2025
I have a lot of feelings about this one. This dude needed an editor in a bad way. The flow is bad. He repeats himself a lot. Content-wise, while it doesn’t come off as balanced, and it’s not something to reach for if you’re looking to calm your anxieties about this stuff, Barrat doesn’t strike me as stupid, crazy, or insincere. I read this on the heels of listening to Jon Stewart interview Geoffrey Hinton about AI. If you’re looking for a way to ruin your day, I’d recommend starting there. This book isn’t a bad follow-up.
Monica Zydel
67 reviews
June 22, 2025
This book was really informative, but read more like a dissertation than a book. If you’re really interested in the topic, you’ll like it. If you’re looking to be entertained, there are a few interesting moments. For example, did you know there is an AI dating site? Where you can actually go on and get matched with an AI partner? I didn’t know that. Kind of wild. Also, a lot of top researchers believe AI could potentially lead to human extinction. Very interesting.
Winston
Author of 3 books, 12 followers
September 6, 2025
A great summary of recent and historical events regarding AI and computing. I learned some legal stuff that I hadn’t been aware of, but the main point seemed to be that we still don’t know what we don’t know, which I guess lets us ponder and speculate where AI goes from here. I’d recommend this book for anyone who hasn’t followed AI development at all, but not for those familiar with the topic.

Thank you Libro.fm and St. Martin’s Press for the ALC.
Sarah
113 reviews, 4 followers
December 3, 2025
Real repetitive, but I guess it can’t BE repeated enough. The best predictor of future behavior is past behavior. Yup, we’re doomed. But I guess that has always been obvious if you are a student of human behavior. The timeline has always been perceptibly speeding up. This very much looks like the Millennium Falcon making the jump to light speed. Only we aren’t at the controls after the jump. In fact, we just aren’t…anywhere.
51 reviews
December 14, 2025
A poignant and at times "concerning" book about the fast-moving development and unfettered rollout of AI technology. Overall a very "worst case scenario" take on AI and the future as it is unfolding.
This is not to say that the warnings should go unheeded, but the genie is out of the bottle and so far there seems to be no slowing down of the hurtling train that is AI development.