Fears about AI tell us more about capitalism today than technology in the future.
Will AI come and take all our jobs? Will it dominate humanity, hack the foundations of our civilization, or even wipe humans off the face of the planet? All kinds of people seem to think so. From academics to billionaires, artists to fraudsters, journalists to the pope, AI nightmares have gripped the popular imagination. Why We Fear AI boldly asserts that these fears are actually about capitalism, reimagined as a kind of autonomous intelligent agent.
Science and tech industry insiders Hagen Blix and Ingeborg Glimmer dive into the dark, twisted, and arcane world of AI nightmares in order to demystify what people say about it. They combine expertise in cognitive science and machine learning with political and economic analyses to cut through the hype and technobabble, and show how fears about AI reflect very different economic positions, from venture capitalists to AI engineers, from artists to warehouse workers at Amazon. If we want to understand the fears and potential impacts of AI, we must think about capitalism, the economy, and class power in real terms we can confront and wage our struggles on.
Blix and Glimmer argue that AI nightmares reveal the terrifying underbelly of our current society, of the violence and alienation at the root of capitalism and its way of organizing our world in its image. If we simply let capitalism and tech billionaires run wild, we can expect automated bureaucracies that protect the powerful and punish the poor; an ever-expanding surveillance apparatus; the cheapening of skills, downward pressures on wages, the expansion of gig-work, and crushing inequality. But that outcome is not inevitable, however much capitalists may dream of it. Why We Fear AI points the way to a different and brighter future, one where our labor, knowledge, and technologies serve us, rather than us serving capital and its owners.
The somewhat hyperbolically titled new book "Why We Fear AI: On the Interpretation of Nightmares" (by Hagen Blix and Ingeborg Glimmer, 2025) examines the discourse about AI systems and Large Language Models (LLMs), focusing on the narratives of fear and existential annihilation that are espoused all along the spectrum, even by those who are actively developing the technologies themselves.
Blix and Glimmer offer a critical examination and interpretation of the current state of AI hype that works to position AI/LLM technology historically, politically, and economically within the neoliberal tradition to "remake all relations between people and things into market relations, and relations of private property."
Fundamentally, the authors posit that LLMs exist as a way to "industrialize the production of language" and "enable new value to be extracted from the raw resource that is the information of our daily lives."
"Why We Fear AI" contains some insightful critical analysis of the current hype-filled discourses around AI technologies, but suffers from some questionable analogies, digressions, and a lack of focus and organization in the presentation of its critique. To be fair to the authors, though, there is a lot to be concerned about in this space, and the core of their historical and economic arguments is interesting and worthy of engagement and reflection. These technologies are not neutral "long text summarizing machines" – they are a manifestation of a set of hyper-growth-fixated ideologies that are contributing to a fundamentally unsustainable economic model.
In Part I, the authors describe five ways in which discourses about AI follow familiar playbooks from previous iterations of industrialization, automation, and resource extraction: the incomprehensibility of the complex models; the invisibilization of labor (or its devaluing); the privatization or enclosure of the public space (referring to AI's co-option of information into proprietary language models); the creation of a sort of AI bureaucracy (AI as a soulless, amoral, heartless entity that is both insulated from responsibility and insulates its masters from responsibility); and the inevitability of AI – the claim that "there is no alternative," which is also at the core of discourses about capitalism in general. It's important to note that this analysis works best when applied to how AI is being discursively positioned within our culture, not necessarily how AI actually operates. This is a distinction that the authors themselves tend to collapse at times throughout this section.
In Part II, the book moves more directly onto the "fear" topic as the authors trace the uses and deployments of AI systems in warfare, policing, border control, surveillance, and workplace and industrial oversight. While there are eye-opening insights and cautionary tales throughout this section, I found it disorganized and prone to digressions about automation and technology more broadly. I understand that the authors' goal is to draw lines from older forms of industrialization to today's AI tools, but stylistically I felt it was difficult to follow the jumps in the timeline and the massive variety of examples ranging from the Spinning Jenny to the steam engine, to Amazon's warehouses, back to CNC machines, and over to a discussion of Taylorism, and then back to that one unfortunate "Crush" ad for the iPad that everyone hated on last year. All of this context and these examples are real and relevant; they are just presented in a somewhat chaotic and overwhelming manner.
The book concludes with perhaps its strongest section, on AI proponents' fundamentally flawed and dangerous conception of "intelligence" – a conception rooted in eugenics and social Darwinism, filtered through the imperatives of profit and growth. Both the "fear" of AI and the "daydreams" about AI as savior are functions of a discourse that devalues human intelligence and presents problems caused by our economic system as "unsolvable" without some kind of intervention by superintelligence. "We don't need 'superintelligence' to know what's causing climate change," they write. "We already know. We are failing to stop climate change because capitalism stands in the way, not because we lack intelligence."
While not a particularly accessible book, and not without its issues of coherence, "Why We Fear AI" contributes a much-needed perspective to the cacophony of voices about AI technology and how it's supposed to change the world. Fundamentally, it reminds us to question narratives about the inevitability of AI and to be suspicious of the supposed neutrality or objectivity of the technology.
This is fantastic and a must-read for those alive in 2025. Spoiler: the issue is AI in the context of neoliberal capitalism. Maybe we could imagine a utopian vision of AI for the common good, freeing humans from drudgery and toil, but that's obviously not the world we're going to get. Even if it comes anywhere close to living up to the hype, AI is just a continuation of technology owned by capital to accelerate inequality and obfuscate the oppression of everyone else.
pretty solid, made my way here via the ep of this machine kills where blix discusses this topic. definitely refreshing in a sea of ai "critiques" that miss the point at best and are often comically out of touch (respectfully!!!!). i read this on the tail of matteo pasquinelli's "the eye of the master," which was a nice companion. published two years apart (eye of the master in 2023, this book in 2025), it's interesting to see the throughlines that connect these books in a fairly fast-moving sphere of technology/politics/etc