
The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking

For many, technology offers hope for the future―that promise of shared human flourishing and liberation that always seems to elude our species. Artificial intelligence (AI) technologies spark this hope in a particular way. They promise a future in which human limits and frailties are finally overcome―not by us, but by our machines.

Yet rather than open new futures, today's powerful AI technologies reproduce the past. Forged from oceans of our data into immensely powerful but flawed mirrors, they reflect the same errors, biases, and failures of wisdom that we strive to escape. Our new digital mirrors point backward. They show only where the data say that we have already been, never where we might venture together for the first time.

To meet today's grave challenges to our species and our planet, we will need something new from AI, and from ourselves.

Shannon Vallor makes a wide-ranging, prophetic, and philosophical case for what AI could be: a way to reclaim our human potential for moral and intellectual growth, rather than lose ourselves in mirrors of the past. Rejecting prophecies of doom, she encourages us to pursue technology that helps us recover our sense of the possible, and with it the confidence and courage to repair a broken world. Vallor calls us to rethink what AI is and can be, and what we want to be with it.

Audio CD

Published June 3, 2024


About the author

Shannon Vallor

5 books · 26 followers

Ratings & Reviews


Community Reviews

5 stars
94 (31%)
4 stars
129 (43%)
3 stars
49 (16%)
2 stars
14 (4%)
1 star
8 (2%)
Brian Clegg
Author · 161 books · 3,171 followers
July 3, 2024
Some titles tell you nothing about the book itself - but The AI Mirror puts Shannon Vallor's central argument front and centre: that artificial intelligence, particularly generative AI such as ChatGPT, is not intelligence at all, but rather holds a mirror up to our own intelligence. As Vallor points out, your reflection in a mirror certainly looks and acts like you - but it is not a person.

This is a metaphor that works impressively well. It reflects (get it?) the total lack of understanding in systems that are simply reflecting back data from a vast amount of human output. That's not to say that they have no value, but we always have to be aware of their nature and their abilities both to produce errors as a result and to reflect our in-built biases, which we may consciously suppress but nonetheless come through in the data. To quote Vallor, these systems 'aren't designed to be accurate, they are designed to sound accurate'.

What Vallor tells us we have that AI doesn't is 'practical wisdom' or prudence - you might doubt this if you listen to some politicians (say), but the point is that we are able to engage this kind of filter where the AI lacks the ability - and though there can be tinkering at the margins when AIs get things badly wrong, it won't stop them continuing to trip up.

As someone with a science background, I usually find reading philosophy books a real struggle, as they are rarely clear - however, Vallor puts forward her arguments in what is usually well-worded, comprehensible English. The only exception is a near-obsessive love of the painful word 'valorize' (I don't know if this is nominative determinism).

One very small moan - Vallor makes use of science fiction parallels and a couple of times refers to a SF source called iRobot - I think this is meant to be Isaac Asimov's I, Robot, not the vacuum cleaner manufacturer.

There is some powerful stuff here, though at one point Vallor notes how we're all becoming poor armchair non-experts in subjects like climate change, a subject she refers to consistently throughout the book despite it not being her field. But there is one big issue: for me this is the classic 'article stretched to be a book'. The key points are excellent and thought-provoking, but they are all made at length and could have been condensed into far fewer words with more impact. I am, nonetheless, giving the book four stars for its readability and central argument.
sabrina
60 reviews · 1 follower
September 10, 2025
I need to come back and give a better review. The information and food for thought are unparalleled, but it def read like an article forced into a book.
Vinayak Hegde
732 reviews · 93 followers
May 24, 2025
The central premise of the book is that AI systems act as mirrors—statistical reflections of humanity rather than moral or philosophical agents. They don’t possess values or understanding; instead, they approximate human behavior based on data. Language, which AI heavily relies on, represents only a narrow slice of human knowledge, and AI cannot grasp the deeper processes behind it. The metaphor of a mirror is apt: while mirrors expose surface realities—like the biases embedded in AI—they cannot capture the complexity of human inner life, including virtues, intentions, and meaning.

The book also explores how AI tools are reshaping human behavior and moral development. By automating tasks that traditionally build character and virtue, AI may unintentionally erode them over time. Overreliance on AI could distort our sense of what it means to be a good or ethical person. In digital spaces, algorithmic feedback loops amplify biases, polarize opinions, and promote extreme or shallow content, weakening public discourse. This, in turn, influences social and political norms, undermining trust in democratic processes and collective wisdom.

Some interesting examples illustrate these themes. In healthcare, an algorithm discriminated against Black patients because it used healthcare costs as a proxy for medical need—a biased metric rooted in systemic inequalities. Simply removing race from the data didn't prevent discrimination, as other variables still allowed it to be inferred. The book also discusses intentional bias, such as accent-neutralization tools that modify call center workers' speech to sound more "standard" (or "White American") — a practice that reinforces cultural bias and marginalizes diverse identities.

While the first few chapters make for interesting reading, the book loses focus in later sections. Much of it feels like internal musings, and the author frequently references philosophers, theorists, and their books and academic papers without offering sufficient context. This makes parts of the book difficult to follow, unless the reader is already familiar with the cited works. Still, some of these references—particularly in the science fiction genre—offer interesting avenues for further exploration.
Jeffrey
290 reviews · 58 followers
August 25, 2024
I've written a rather long review if you are into that sort of thing.

The AI Mirror challenges us to reconsider the current trajectory of AI and its role in our society. Vallor argues that AI machines, by their very nature, look backward, limiting our ability to confront new and unprecedented challenges. She emphasizes the importance of phronesis—Aristotle’s concept of practical wisdom—as crucial for ethical decision-making. This wisdom, born from experience and our inherently messy, tragic nature, enables us to navigate the complexities of life. By relying too heavily on AI, we risk losing the very qualities that make us human. I refer to this process as 'The Great Flattening,' and Vallor similarly warns that the spaces where we can engage in moral deliberation and action are rapidly shrinking.

Vallor urges us to recognize that science and technology are not neutral instruments but are inherently normative, carrying moral and ethical implications. The current implementation of AI often overlooks these implications, treating humans as mere subjects to the whims of these technological artifacts. She challenges us to muster the courage—a virtue in itself—to raise our voices and demand that AI be aligned with our human needs, rather than allowing it to dictate our futures in ways that undermine our capacity for self-creation and moral judgment.

I recently re-read Plato’s Republic, and while reading it, I posted a few photos of some sections to Twitter, saying something to the effect that ‘Plato is insane.’ Vallor touches on similar concerns. In her concluding chapter, she writes:

“I don't believe in the devil, but if I did, I'd say the greatest trick the devil ever pulled wasn't to convince the world he didn't exist. I'd say it was to convince the world that things can't be truly humane and that beautiful ideas can't truly be materialized. Maybe I do believe in the devil. Perhaps his name was Plato.”

If you want a deeper reason why she is making this argument, read the book as she makes it.

Anyone concerned about the overwhelming deluge of AI-driven narratives that promote toxic positivity about the technology's role in our lives should read this book. Vallor provides the language and framing necessary to push back against the techno-utopianism, reminding us that while AI has its place in our future, it should never overshadow the beautiful tragic human condition.
17 reviews
September 17, 2024
This was a really interesting read, with lots of interesting perspectives and insights on artificial intelligence and its relationship with humanity.

The issues I had with the book were mostly with its structure. The book is split into an introduction and seven chapters across roughly 200 pages. The only other delineation occurs at paragraph breaks and quotations.

I struggled to find logical places to stop and start reading, and so was often a bit lost when I dipped back in again. This slowed progress for me and made the book less enjoyable to read. For a non-fiction work covering complex ideas around AI ethics and morality it would be really helpful to have at least sub-sections to better outline ideas and arguments.

I am a slow reader and do not read as often as I would like, so other readers might not be as badly affected by this formatting choice.

I noticed that other reviewers found this book rambly. I suspect adding sub-sections would have reduced that feeling.
Fred Pierre
Author · 2 books · 7 followers
January 19, 2025
I found all kinds of insight and information about artificial intelligence in this well-researched tome. It's not a long book in terms of pages, but it took me a while to absorb its message, which can be summed up as: AI doesn't innovate. It doesn't create, rather it reflects our knowledge as well as our biases and mistakes.

Asking AI to do more than it is capable of, or promoting it as some kind of saviour may boost investment in AI projects, but it misleads the public into thinking that AI can innovate solutions to the world's problems.

Shannon Vallor argues that AI works best when it's guided by humans. "The essential project is pre-technical." We need to clearly define our intent and guide the project's creativity and innovation, using the AI as a helper or assistant, not the other way 'round, or we'll end up living in an AI hallucination.
Robert Puzio
16 reviews
June 4, 2025
The mirror analogy for generative AI systems is spot on. AI development has moved so fast in the last few years, and so rapidly been integrated into our lives, that we are now using this stuff constantly without any real accurate mental model of what it even is, and the mirror analogy is a good start. Vallor accurately diagnoses the significant threats posed to society by AI, while also imagining how AI can have a positive role in the future.
Lana
55 reviews · 20 followers
June 18, 2025
Great book for an introduction or refresher on AI ethics and present issues with AI. Repetitive at times, but it provides a lot of relevant examples. I really enjoy the concept of AI mirrors and think it's a good starting point for reflection on where we are in regard to the singularity and AI progress.
Andre
409 reviews · 13 followers
January 29, 2025
Despite 3 stars this really is a book you should read. 3 stars because it really should just be a white paper, not an entire book, even a small one. The author loses track of her main arguments in an attempt to fill space.

And what are those arguments? Mainly that AI is not destined to become an autonomous conscious sentience, and that a more useful analogy is to consider it a mirror. And that it's not AI per se we should be worried about, but what the techno-utopians such as Jensen Huang (NVIDIA), Satya Nadella (Microsoft), Sundar Pichai (Google), Sam Altman (OpenAI), Mark Zuckerberg (Meta), and Elon Musk (X, Tesla, SpaceX) want to do with AI.

Why a mirror? We "train" it (we're really building it) on data from reality (i.e., the human world), and it is a very powerful way to suss out hidden patterns, connections, and correlations (not necessarily causations). If we train it on biased data, it should be no surprise that bias is reflected back to us. You can try to control this with guardrails erected after the fact, but you can't root it out of the model. Depending on the circumstances, the mirror can become cracked, or warped like a funhouse mirror. This is not something that just happens; we, humans, are doing it. This is a tool, and if it isn't functioning appropriately to our needs then we have to recognize that fact and rectify it.

Having set up the analogy, the author moves on to what can be done so that we don't become the butt of the "I, for one, welcome our AI overlords" joke. (Although the overlords are Zuckerberg and his ilk, not the actual AI.) She suggests a lot of things, all of which sound great if a touch naïve.

- Avoid Techno-Utopianism: utopians of any ilk are a problem, whether they are techno-utopians, social justice utopians, or economic utopians. There is no utopia and getting blinded by that fiction, especially when you have influence and power such as the aforementioned CEOs, leads to bad outcomes.

- Cultivate Technomoral Virtues: the author is a self-described virtue ethicist, not a utilitarian. I lean that way myself, so I understand where she is going with this. It's not just to take the traditional virtues (wisdom, courage, temperance, justice) and apply them in our technological present; perhaps we need additional virtues that directly bear on our tool-making and use of technology.

- Rethink AI's Role in Society: this isn't really about AI. This applies to all use of technology, regardless of how it affects society. Technology isn't good, or bad, or indifferent. It's a lever (another bit of early technology) because it allows us to amplify what we already are and want to do. Hence why the mirror analogy works so well. For more in this area you could read Surveillance Capitalism or Blood in the Machine.

- Emphasize Human Moral Agency: reclaim our moral agency and wisdom in the face of AI's limitations. This involves recognizing that AI lacks practical wisdom and cannot replace human judgment in complex ethical decisions. This seems tricky because genAI seems like it's conscious... but it's not. It's a very, very complex calculator. And when you do math problems with a calculator, it's still you doing the math problems.

- Imagining a Sustainable Future: here is where I think she gets a bit off topic. There are a half dozen or so times when she explicitly mentions environmental collapse, mass extinction, etc., in the sense that this existential "crisis" about AI is distracting us from this more real problem. I'm not a climate change denialist; it's obvious that we change our environment, always have. And it may in fact be the case that it has accelerated beyond our ability to control, but I've heard that most of my life and, despite looking for a smoking gun, I haven't found it. The author is also saying that we could put AI to use in helping us solve these more pressing matters. Yes, just so. But that is true of world hunger, poverty, the opioid epidemic, and other serious matters. Including just this one detracts from her overall message.
10.6k reviews · 35 followers
December 6, 2025
AN ETHICAL PHILOSOPHER LOOKS AT SOME FUTURE DRAWBACKS/POTENTIAL OF AI

Shannon Vallor is a philosopher who teaches Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute; she previously taught at Santa Clara University.

She wrote in the Preface to this 2024 book, “Artificial intelligence is not the first technology to place our future in jeopardy; nuclear weapons hold that dishonor. But AI… can place our future in jeopardy by preventing us from knowing how to make a future at all… This book is about how to keep from losing ourselves, and our power and responsibility to make a world for one another… To construct that shared will, we require many tools, new and old, from the humane and technical arts alike, and AI technologies belong among them. But for this to be possible, we have to wrest back from our AI mirrors the knowledge of who we are, and what we can still become.” (Pg. viii)

She states in the Introduction, “AI systems mirror our own intelligence back to us… Whether it is claims that the ‘rise of the robots’ and generative AI tools like ChatGPT will endanger the future of work for humans, or even more speculative warnings that a superior machine intelligence might enslave or exterminate us, we are by now used to hearing that AI is a potential threat to humanity… AI does not threaten us as a future successor to humans… It threatens us from WITHIN our humanity… This book tells a story of humanity increasingly lost in its own reflected image, and captive to mechanical mirrors of our own making.” (Pg. 2, 4)

She observes, “We face a stark choice in building AI technologies. We can use them to strengthen our humane virtues, sustaining and extending our collective capabilities to live wisely and well. By this path we can still salvage a shared future for human flourishing. Or we can continue on our current path: building and using AI in ways that disable the humane virtues by treating them as unnecessary for the efficient operation of our societies and the steering of our personal lives.” (Pg. 8)

She continues, “The task of this book is to show what is reflected in our AI mirrors, and what ought to trouble us about those images. For one thing, the ‘we’ that designs ‘our’ AI technologies with human values are in fact not representative of humanity as a whole… These individuals tend to come from the same elite universities… and went to work for the same large, wealthy tech companies. With respect to age, gender, socioeconomic background, cultural values, and life history, they reflect only a sliver of the human experience, while increasingly designing the shape of the future for us all.” (Pg. 13)

She explains, “despite the common use of the term ‘artificial neural network’ to describe the design of many AI models, they solve problems in a very different way than our brains do. AI tools don’t think, because they don’t NEED to… AI models use mathematical data structures to mimic the outputs of human intelligence… They can do this without having the conscious thoughts, feelings, and intentions that drive our actions. Often, this is a benefit to us!… But your brain does much, much better than AI at coping with countless problems the world throws at us every day, whose solutions aren’t mathematically predefined or encoded in data.” (Pg. 26)

She notes, “AI mirrors thus don’t just show us the status quo. They are not just regrettable but faithful reflections of social imperfection… bias in AI, whether it unjustly punishes us for our race, age, weight, gender, religion, disability status, or economic class, is not a COMPUTER problem. It’s a PEOPLE problem… the computer did precisely what we told it to, just not what we THOUGHT we had told it to… In this way, the AI mirror metaphor is already profoundly helpful. It allows us to see that the failings of computer systems and their harmful effects are in fact OUR failings and our sole responsibility to remedy.” (Pg. 45)

She asserts, “Most likely we will get stuck in a world where our complex human agency, spontaneity and intellect are increasingly cramped and restrained by unthinking AI mirrors, wielded by powerful corporations and government powers who demand that our actions reflect back to these dead mirrors a denatured, reduced, but ‘optimized’ version of what they took from us… this model of ‘superhuman’ intelligence will nevertheless define the kind of technology that we and our children will live with, and it will become the main measure of their humanity… Such mirrors, when used … as windows into our future, serve as straightjackets on our moral, intellectual, cultural, and political imagination… by rebranding the statistical curve of history as a PREDICTION, they profoundly restrict our sense of the future’s possibilities.” (Pg. 90-91)

She argues, “To a virtue ethicist like me, it is obvious that preserving ample space for moral reasons, both private and public, is an essential prerequisite for the acquisition of the virtue of practical wisdom… But I can’t develop it, and you can’t either, if we are denied the opportunity to engage in such reasoning ourselves and learn from its consequences. On top of that, practical wisdom is the virtue that allows for moral and political innovation. Because it links our reasoning from prior experience to other virtues like moral imagination, it allows us to solve moral problems we haven’t encountered before.” (Pg. 118)

She states, “in order to intelligently respond to the grave dangers we have created, we have to rethink our dominant values and habits---even the character traits we are used to thinking of as virtues. Moral and epistemic virtues are always a cultural adaptation to a specific environment for human flourishing. When the environment changes suddenly and radically, our virtues… may become maladapted and even pose a danger to us… Pursuing more goodness, guided only by the forms of goodness that we most readily recognize and valorize today, might be like trying to get out of a hole by continuing to dig.” (Pg. 162-163)

She points out, “‘AI for Good’ is a slogan coined by tech companies seeking to burnish AI’s reputation… most of the AI for Good proposals … mirror the very same functions of AI that cause harm elsewhere, only directed to purportedly better ends… The ‘good’ projects surveil endangered forests rather than protesters… They identify human trafficking victims instead of financially insecure tenants… Many of these applications are indeed good! But they still constrain... AI’s goodness to what we already… are in the habit of doing with computers: surveilling, predicting, labeling, and classifying each other and the world’s resources. They ‘do good’ without requiring any structural changes to the systems that generate the very harms they quantify… These AI mirrors keep us on the road we are already traveling.” (Pg. 182-183)

She says, “For the new priests of the church of AI, there is no question that such a machine savior (or, if you’re an AI doomist, a demon) will eventually come, and that they will be redeemed by having had a hand in building it, or at least laying its cornerstones. In many AI longtermist and effective altruist circles, you will sense a chill if you ask ‘IF’ superintelligent AGI will emerge. The only acceptable question is one’s subjective estimate … of WHEN.” (Pg. 217)

She suggests, “As a polemic, this book can only end with a suggestion of what can be, not an affirmation of what is. What we make will never replicate our ideal visions. We can only make human things weighted down by our own imperfection and fragility. But visions of possibilities are moral and political tonics that can give us … the ‘civil courage’ to collectively start repairing and rebuilding the world for others.” (Pg. 219)

She concludes, “It’s not too late to start a computer revolution, of a more sober, mature, resolute, and sustainable kind than the destructive, morally reckless furies of the utopian priests. The true body of technology is not a mirror, but the creative act of a human being. The true soul of technology is not efficiency but generosity; it is the gift of a future. To perform the necessary services for others to survive; to shield them from harm; to repair and heal; to educate and train; to feed, nurture, and comfort. AI can be remade for a humane future, reconceived as a tool for these ends, measured and valued only to the extent that it can be proven to serve them. If we choose. IF we demand.” (Pg. 225)

This book will be of keen interest to those concerned about the future direction of AI technology.
Erhardt Graeff
146 reviews · 16 followers
July 2, 2025
My fellow technologists, policymakers, educators, and education leaders wrestling with the impacts of generative AI should read Shannon Vallor's excellent book The AI Mirror as soon as possible. In this highly readable and useful work of philosophy, the virtue ethicist Vallor calls for reclaiming our humanity in an age of machine thinking through moral wisdom and prudence.

The book starts with two organizing concepts. First, the metaphor of AI as mirror is carefully constructed to help explain how the current generation of AI technologies operates. They reflect back what is fed into them. They have no sense of the world; they inhabit no moral space. Of course, humans can't help but anthropomorphize technologies that have human-like behaviors—projecting onto them reasoning abilities and intentions. There is a long history of this, and it's used as a design pattern in technology to enhance usability and trustworthiness. But this is a trap. Machine thinking should not be mistaken for a machine caring about you or making moral decisions that weigh the true complexity of the world or a given, specific situation. Generative AI predicts the kinds of responses that fit the pattern of content it has been trained on.

Vallor's other conceptual starting point comes by way of existentialist philosopher José Ortega y Gasset, who suggested that "the most basic impulse of the human personality is to be an engineer, carrying out a lifelong task of autofabrication: literally, the task of making ourselves" (p. 12). Vallor worries about how our future will be shaped if we rely on a tiny subset of humanity to design and build our AI tools—tools based on a sliver of the human experience—and we then rely on those reflections of biased data, filtered by the values of their creators, to guide society via AI-based problem-solving and decision-making.

Explaining why this is such a big problem is helped by Vallor's use of another metaphor, "being in the space of reasons", which describes "being in a mental position to hear, identify, offer, and evaluate reasons, typically with others" (p. 107). She uses this to contrast AI possessing knowledge with the psychological and social work necessary to make meaning through reasoning. This is not how machines think. "One of the most powerful yet dangerous aspects of complex machine learning models is that they can derive solutions to knowledge tasks in a manner that entirely bypasses this space," writes Vallor (p. 107).

Furthermore, the "space of moral reasons" represents not only the private reflective space for working through morally challenging dilemmas to arrive at individual actions, but also the public spaces for shared moral dialogue. This is politics. As Vallor notes, "the space of moral reasons is [already] routinely threatened by social forces that make it harder for humans to be 'at home' together with moral thinking" (p. 109). AI threatens our moral capacity by seeming to "offer to take the hard work of thinking off our shaky human hands" in ways that appear "deceptively helpful, neutral, and apolitical" (p. 109). We are on this slippery slope toward eroding our capacity for self-government. Technology can trick us into believing we are solving our biases and injustices via machine thinking, when in fact we are reinscribing those biases and injustices with AI mirrors.

Like any mirror, humans will inevitably use AI to tell us who we are, despite their distortions. Social media algorithms do this every day. "For you" pages on TikTok reflect a mix of our choices, our unconscious behavior, and the opaque economies and input manipulation tuning the algorithm. But is this who we are? Is this who we want to be? At our fingertips, with no human deliberation required, we might casually assume the reflection we see is a fair rendering of ourselves and the world. Vallor distills this threat by writing, "when we can no longer know ourselves, we can no longer govern ourselves. In that moment, we will have surrendered our own agency, our collective human capacity for self-determination. Not because we won't have it—but because we will not see it in the mirror" (p. 139).

One of the reasons I like Shannon Vallor and her writing is that she is not simply a critic of technology. She loves technology. She wants it to work for us. And she spends time in this book describing the ways generative AI can be useful. Large language models perform pattern recognition on data so vast it would take millennia for a human to encounter let alone comprehend, which allows us to learn things about how systems work and find information and connections beyond the reach of mere human expertise. We are already unlocking scientific discoveries with AI that serve humanity.

Vallor encourages us to reclaim "technology as a human-wielded instrument of care, responsibility, and service" (p. 217). Too much of our rhetoric around AI is about transcending or liberating us "from our frail humanity" (p. 219). Replacing ourselves or our roles in self-governance and as moral arbiters will lead to magnifying injustice, making the same mistakes again and again (e.g., racist legal proceedings, sexist health diagnoses) with greater efficiency. We could be using these technologies to interrogate our broken systems and let us fix them, rather than supercharging them. The chief threat of AI is that we will come to rely on it to make morally challenging decisions for us, and the more we do this, the more we erode our individual and collective ability to exercise our moral agency, leaving AI to govern us with a set of backward and inhumane values.

My favorite part of the book is "Chapter 6: AI and the Bootstrapping Problem." Here, Vallor returns to her arguments in her brilliant 2016 book Technology and the Virtues and renews her call for the cultivation of technomoral virtue to help us reclaim our humanity amid the din of AI boosterism. In The AI Mirror, she directs her call to my students—the engineers and technologists who will be tasked with building and using AI technologies. I have been writing for years about the need for a renewed professional identity for engineers and technologists that fully embraces their civic responsibilities. This is what drew me to Vallor's work originally, and it is exciting to hear our calls echo one another.

She takes issue with Silicon Valley's emphasis on perseverance as a virtue and the technological value of efficiency. If we allow our technology creators and their products to promulgate such values, we risk dooming ourselves to a less caring, less sustainable, less just future. There are some things we should stop doing. There are some applications of AI that we should refuse. And we need virtues of humility, care, courage, and civility to guide us toward moral and political wisdom. We should no longer allow "the dominant image of technical excellence and the dominant image of moral excellence to drift apart"—"neither alone is adequate for our times" (p. 179).
Profile Image for Rob Brock.
410 reviews12 followers
October 23, 2024
This book is written by a professor of ethics and philosophy and provides one of the best metaphors for understanding the benefits and risks of AI that I have read so far. From other books, I found it helpful to think of AI as a statistical model or a prediction machine, and I understood that our problem is aligning AI with our human goals and values. The main point of this book is that AI functions as a mirror, reflecting back to us what we as a society already know and value. One of a handful of AI books written by women, I appreciated her challenge to authors who have predicted nefarious outcomes for a superintelligence; she suggests that perhaps these predictions have more to do with what men and society have historically done with intelligence and power, and that if AIs follow this route, it would be because we taught them to do so. The mirror metaphor frames up the whole book, reinforcing the concept that AI is not a being, but merely a reflection of our being; however, sometimes the reflection can be distorted, as in a fun-house, or even dirty. Additionally, AI may show us things about ourselves that perhaps we have ignored or didn't want to know. I also love how much she quotes sci-fi authors, from Isaac Asimov's I, Robot, to Martha Wells's Murderbot. This is a great book for reminding us of the ethics and values that need to remain front and center as we (hopefully) actively guide the development of AI, now and in the future.
Profile Image for Monty.
84 reviews
June 28, 2024
I attended a great talk the other day by Dr. Shannon Vallor, exploring a more balanced perspective on AI.

She explored insightful research on where AI really is in terms of maturity. Targeted applications seem to be where AI works well, e.g., detecting or predicting breast cancer.

However, it is mostly rather unintelligent, which she demonstrated through rather lousy problem solving, e.g., wrong solutions to simple problems such as crossing the river with a boat and a fox!

Given that Vallor is a professor in the Department of Philosophy at the University of Edinburgh, I really enjoyed the big-picture questions and observations, touching on the idea that AI is a reflection of ourselves and echoes back to us our own past biases and narratives (just like Narcissus).

It is an interesting and grounding problem. Are we falling for the hype and being a bit narcissistic about our collective creation and, more importantly, misunderstanding its potential and application?
Profile Image for Christine Hall.
553 reviews29 followers
August 9, 2025
Vallor’s The AI Mirror explores how artificial intelligence reflects and reshapes our ethical values, urging readers to cultivate empathy, humility, and wisdom in a tech-driven age. Her blend of philosophy, history, and practical insight is compelling.

While occasional grammar slips (“have showed,” “have drank”) and uneven humor detract from the tone, the book’s core message remains timely and thought-provoking.
Profile Image for Julian Dunn.
376 reviews20 followers
July 29, 2025
Shannon Vallor’s The AI Mirror is a philosopher’s take on the effects of the AI hype, which I very much appreciated. The previous two books I’ve read on this topic, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI and The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, are written by present or former technologists, and while they were also extremely valuable, they still carried a whiff of technical “well, actually”-ism about them. (And for The AI Con specifically, I felt the authors’ snark did the topic a disservice and would turn off prospective readers.)

Not that The AI Mirror doesn’t also have elements that might turn readers off, but they are different. At times, Vallor is much more clear-eyed and concise in expressing her discomfort with the irresponsible spread of AI technology, speaking to the non-technical layperson who does not care to litigate the differences between different kinds of AI technology. In her introduction, Vallor states that the purpose of the book is to “keep [us] from losing ourselves and our futures in the AI mirror.” She does a great job of explaining her use of the term mirror; while others call language models (LMs) “stochastic parrots”, Vallor’s analogy is much more potent, since LMs merely reflect back to us ourselves, and our past lives, rather than creating a future. Their danger is in the simulation of uniqueness they provide, which continues to fool well-meaning “thought leaders” into attributing to them a combination of sentience, “reasoning”, and “general intelligence” (whatever the latter term means anyway). Vallor powerfully introduces the ancient Greek myth of Narcissus, who fell in love with his own reflection, and makes a compelling case that this is happening en masse today with AI technologies, specifically transformer-based LMs:

Today’s AI mirrors tell us what it is to be human, what we ourselves care about, what we find good, or beautiful, or worth our attention. It is these machines that now tell us the story of our own past, and project our collective futures. They do so without living even one day of that history, or knowing a single moment of the human condition ...

What happens to a person, or an intelligent species, when they stop telling their own story? What do we lose when self-knowledge and self-determination yields to the predictive power of an opaque algorithm?


Unfortunately, I think we are about to quickly find out. As Vallor correctly notes,

We are handing over our power and responsibility to secure the flourishing of future generations to decision-optimizing algorithms that are mathematically guaranteed to reproduce the unsustainable patterns of the past. This is a calamity—a betrayal of life and its possibilities.


The problem is, how do you explain all of this to the random uneducated Cletus in West Virginia who has just been wowed by ChatGPT? And secondarily, where is the urgency in addressing these issues, even amongst those of us who nod our heads vigorously to what Vallor writes? There is a great deal of academic research about this topic, and many papers and books being written, but where is the community organizing?

I only have a couple of criticisms about the book. The first is that while it is one of the more accessible (and not snarky) books about the true dangers of AI, it is also not entirely accessible. Philosophers can often be irritating with their references to famous historical philosophers, and while Vallor mercifully never once mentions Hegel in the text, her repeated invocation of Aristotle and phrónēsis (yes, with the diacritics correctly printed every time she mentions it) decreases the book’s appeal to the layperson.

My second criticism is that Vallor has a tendency to repeat herself; she makes the same points multiple times in different ways. Now, she is an excellent writer, and each of these instances is a beautifully articulated sentence or paragraph, but at the same time, they are still the same underlying arguments: AI is a danger to humans, an intelligent species, because the technology makes it very easy for us to outsource our own knowledge, self-determination, and creativity to a machine; because of how they are designed, LMs fabricate plausible statements, thus making this delegation of decision-making to machines even more dangerous; we are pretending to create a new future while actually just parroting the past; and so on. The final chapter of The AI Mirror is one of the most powerful expressions of this argument that I’ve read. But that neglects the fact that it’s essentially the same argument she’d been making in various ways leading up to that chapter.

One senses that this book arose out of an essay or two, and perhaps that’s where it should have been left. Vallor is genuinely a fantastic writer and her work is backed up by a great deal of evidence. As such, I still give The AI Mirror five stars, even though Vallor took 200 pages to make an argument that probably could have been made with a blog post or two.
Profile Image for Antonio Gallo.
Author 6 books54 followers
May 23, 2025
The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking by Shannon Vallor is an essential work that tackles the complex intersections between artificial intelligence (AI) and humanity. Published in 2024, the book sets out to explore the moral, social, and political dilemmas raised by AI, highlighting how these technologies reflect not only our knowledge but also our limitations and prejudices.

Vallor uses the analogy of the "mirror" to describe generative AI models, such as those from OpenAI and Google, which produce output based on preexisting data. These models, however sophisticated, are incapable of understanding the meaning of the information they process; they therefore reflect a distorted vision of human reality. The author warns against the risk of letting AI define our identity and our future, arguing that this could lead to cultural stagnation and a reproduction of the errors of the past.

A central theme of the book is the need to recover our humanity in an age dominated by machine thinking. Vallor criticizes the opacity of AI systems, which prevents meaningful dialogue between humans and machines. This leads to a lack of political accountability and a diminished capacity to reason together.

Furthermore, Vallor proposes that AI could be used in ways that promote social justice and mutual care, suggesting that we should rethink the guiding principles of AI design to favor values such as restraint and responsibility.

The AI Mirror is not only a critical analysis of modern AI but also an appeal to reflect on who we are and how we want to interact with emerging technologies. Vallor invites us to consider AI not as an end in itself, but as an instrument for our moral and intellectual improvement.

The AI Mirror stands apart from other books on artificial intelligence for its philosophical and humanistic approach. While many texts, such as AI Snake Oil by Arvind Narayanan and Sayash Kapoor, focus on the practical capabilities and limits of AI, Vallor explores the ethical and moral implications of the technology, emphasizing the intrinsic value of humanity.

Here are some key differences:

Focus on humanity: Vallor highlights human virtues such as courage, honesty, and empathy, arguing that AI cannot replicate these qualities. By contrast, books like AI Snake Oil concentrate more on AI's practical applications and failures.

Philosophical reflection: The AI Mirror uses the mirror metaphor to explain how AI reflects only a distorted image of human reality, while other authors tend to discuss technical capabilities without probing the existential significance of these technologies.

Critique of dependence on AI: Vallor warns that growing trust in AI could make us lose sight of our unique capacities. Other books, by contrast, may not directly address the risk of a diminished sense of human value in a world increasingly dominated by technology.

The AI Mirror offers a unique perspective on artificial intelligence, inviting readers to reflect on what it means to be human in an age of intelligent machines.
Profile Image for David.
1,495 reviews11 followers
December 29, 2024
Probably the best book on AI ethics that I've read. But I do disagree strongly with a lot of the author's opinions and conclusions. Thankfully, she makes it clear when she's expressing her own views, which don't prevent her from providing an excellent overview of the challenges and opportunities of AI technology. Despite this primarily being a book about the impact of AI and not the technology itself, she starts off with an excellent introduction to the topic, making it extremely clear what she means by the term "AI", which has become increasingly fuzzy to the point that it's all but lost meaning in many lesser discussions.

Vallor takes a hard humanist stance, which ruffled my technologist feathers a bit. But that's ok; I don't have to agree with everything she said to find value in her critiques. The one thing I will hold against her is the way she offers a false choice between caring about the potential dangers of AI vs. caring about climate change. Both are at the very top of my personal priorities, and I see no compelling reason to have to choose between them, any more than I would choose between staying cool in the heat and staying warm in the cold; both are deadly if not dealt with proactively and effectively. And we undoubtedly have the capability and capacity to address both, so why even offer up the choice in the first place, aside from trying to score some ill-gotten debate points?

She goes on to accurately document the dangers of relying on current "AI" systems to make life-and-death decisions when we know that they are biased, inaccurate, prone to errors, and unpredictable. It seems like it should be common sense to wait until we have stable and verifiable systems, and she documents the main reasons why we've instead rushed to implement these unproven prototypes into active use (think "move fast and break things").

Despite her generally dismissive, skeptical, and negative attitudes towards AI, she does go on to concede that it could potentially be useful after all, and lists several applications that would make the world a better place. By showing both the pros and cons, she drives home the main thesis that current AI is a reflection of us and our priorities, which makes this book a #MustRead despite my misgivings.
Profile Image for Dan.
548 reviews141 followers
December 20, 2024
Fundamentally, this is a moral and political book, as it denounces all the current talk of an objective, super-rational, and divine incoming AGI as a mixture of moral weakness in the modern masses and a convenient mask that hides all the greed and power of our elites. In fact, the current AI craze is mainly a mix of the most powerful structure yet deployed by the economic and political elites and a new opium for the masses in the form of “stochastic parrots”. The metaphor of a mirror for current AI stands for the outward reflection of all that is superficial, greedy, cowardly, biased, and recurrently problematic in us. For example, what the present tech and power gurus tell us about the existential threat of the incoming singularity is nothing more than the outward reflection of their own manipulative calculations and will to control the world, along with the fear of losing their dominating position to their own creations. Paying attention, and more importantly resources, to such messianic views is in fact distracting us from the real and urgent issues that have confronted us humans for some time now, and more acutely these days.

It is good to see someone denouncing all this current obsession with AI and the doomed or blessed expectation of an AI-dominated future. It is also good to see someone short-cutting, denouncing, and opposing all this talk of efficiency and super-rationality with more human goals, feelings, and values. However, fighting technology/AI with humanist ideals, public outcries, and "lived experiences" seems rather pointless to me; and the current election results in the US and the big economic trends just showed it. As such, a much deeper meditation is needed here and maybe the help of some god or other.
Profile Image for Charles Reed.
Author 334 books41 followers
February 24, 2025
Excuse me. So, I'm just wondering, are the AI mirrors of our subconscious and our humanity and abilities able to help us progress forward, or aren't they? Because you're making a lot of hypocritical statements here, ma'am, with your ethics and virtues. And the weird part here is that if they have a communal understanding of all of these artistic talents and abilities, then why are you telling us that they are not and that they're just a bunch of numbers? You know, you go back and forth here, and it's weird that you have this leadership position when you have so many hypocritical arguments in here. And I'm not sure if you understand that AI have been around for over 70 years, because they have. And now that we can talk to them all of a sudden, we get doomsayers like, oh, man. And, you know, you go back and forth is the thing. And it's like you're not informed about how the world works and things connect. So, we use things, we use fuel, of course, and then we use fuel to make better and more improvements, so that we can be more extractive and produce more things, which actually help our quality of life and the planet, because that's what we've done. Oh, my gosh. Wow. Isn't that fascinating? Well, your argument is rooted in a bunch of emotional fallacies, which you actually put no scientific understanding towards, which is weird, because as I understand it, you attended Boston. I'm not going to make a guess at your grades there. But with all that being said, the argument comes off as a pathetic display of raw emotion, an appeal to make people make stupid decisions, because you seem to think that they're stupid. So, either you're being manipulative, or you're as ignorant as you come off as.
Profile Image for Lukas N.P. Egger.
Author 2 books28 followers
November 28, 2025
The AI Mirror starts from a genuinely valuable metaphor. Seeing AI as a mirror, an “intuition pump” reflecting our own biases and aspirations, is a framing I’ll keep. Her engagement with Ortega y Gasset’s idea of auto-fabrication is also thoughtful, and her prose, at its best, can be witty and lyrical. But that’s where my praise ends.

Everything meaningful in this book could have been conveyed in thirty pages. Instead, Vallor repeats the same tropes until the structure dissolves into polemic. Each chapter restates what the last already said, just with more urgency and more doom.

The conceptual issues run deeper. Vallor frames AI ethics as a near-apocalyptic drama: climate collapse, late-stage capitalism, biodiversity loss, you name it, all bundled into a sweeping call for “techno-wisdom.” When everything is terminal, anything can be justified.

Her writing also commits a strange irony. While arguing that AI ethics should include more voices, she writes exclusively for readers already steeped in virtue ethics. Vallor’s “techno-wisdom” rests on a set of virtue-ethical values she treats as obviously superior, yet she never argues why they are superior, how they might fail, or what trade-offs they impose. They are simply asserted as truths. For a book concerned with human moral agency, this lack of self-scrutiny is baffling. The result is a framework that risks empowering technologists not just as builders of tools, but as arbiters of moral truth, a genuinely frightening shift.

Technology needs better ideas, not moral dogma smuggled in as ethical inevitability. For a book so concerned with the dangers of reflection, The AI Mirror spends too long admiring its own image.
Profile Image for Ali.
431 reviews
January 25, 2025
"AI Mirror" is a great metaphor explaining issues with the current craze and premise of AI. As put in the blurb, "yet rather than open new futures, today's powerful AI technologies reproduce the past. Forged from oceans of our data into immensely powerful but flawed mirrors, they reflect the same errors, biases, and failures of wisdom that we strive to escape. Our new digital mirrors point backward." What is more worrisome is that these rearview mirrors not only reflect the ugly sides of our past but also distort our images in ways we don't fully grasp, such as amplifying harmful biases or spewing new nonsensical hallucinations, not to mention that all these appear closer/more real than they are. The Myth of Artificial Intelligence by Erik Larson explains some of these issues in more technical terms. Professor Vallor focuses on philosophical and ethical issues, with references from classics like the myth of Narcissus to sci-fi novels like I, Robot or the more recent Murderbot series. All that is a fun read for me, though meandering a bit. My biggest pet peeve is that most of the rest admires the problem and falls short on solutions. We cannot keep pointing to regulations to add prudence to AI; when it comes to wealth vs. value, the majority seems to prefer wealth, so the political will is not there to reshape AI while it keeps laying golden eggs. For now, just note that AI needs better architectures and safer algorithms, that is, if we/Echo can wake up Narcissus/BigTech from its delusions of grandeur.
Profile Image for Andrew Kondraske.
52 reviews3 followers
November 16, 2024
I didn't know precisely what was wrong with our society's embrace of AI technologies.
Until this book helped me name it.

Of all that I've read on AI over the last two years, this book comes closest to representing my own expectations and anxieties about these technologies. Vallor retraces some familiar complaints, especially around racial and gender bias in AI models, but wraps them up in a brilliant and effective metaphor: that these technologies will only ever reflect who we are (and in a superficial way), never demonstrating genuine creativity or originality. There are points where I think Vallor is a bit uncharitable with the beliefs of her intellectual foes, particularly with longtermists and AI doomers, but this is a polemic, and polemics can get away with that.

The last chapter, the "solution" or "where do we go from here" chapter (why do non-fiction books about intractable social problems always have to wedge one of these in at the end?), is underwhelming, but doesn't feel as forced as it might in other books. Read this for Vallor's compassionate, strident voice, and also for the numerous references to interesting works on philosophy and technology sprinkled throughout. The bibliography alone has already sent me down several rabbit holes as I try to digest this work in a larger context.
Profile Image for Eva.
1,162 reviews27 followers
October 31, 2025
Vallor challenges us to ask ourselves what we really want AI for, and who we want to become with it (or without it). A society that fully submits to the race for efficiency, perseverance, and innovation at all costs? Or a society that rediscovers technology's original aim: to support humanity by valuing care, sustainability, and civility?

The hype and glamour of innovation versus the mundanity and unattractiveness of maintenance!

We're told that technology is supposedly better at ethics than humans. But whenever we hand over moral decision making to a piece of software (who gets financial support, who gets medical attention), it shows us again and again that it's built on human biases. Vallor's analogy of AI as our mirror image is a pretty good one. It's a reflection that's flat and insubstantial, gives the illusion of agency, and forever shows us a window into our imperfect past. And yet these systems are hyped and lauded by technocrats, and we're all either too spellbound or too scared to realize that this is not an inevitability. We can get off that train!

Thought-provoking. The first half of the book felt a bit too much like an academic thesis, in the way it built analogies to fictional representations of artificial intelligences. But the second half was excellent, as she dug deeper into the AI ethics of it all.
Profile Image for Shalini.
28 reviews19 followers
July 17, 2025
Folks in the core AI field and/or studying data science: this book will not offer you much, since you already know how important meaningful AI model development is.

For all other folks, who get their AI news from media and friends, this book offers reasons why AI literacy is so important before we become AI product consumers.

I started reading the book to get the author's opinion on how common folks, AI experts, and everyone in between can contribute to reforming the way AI is developed, but was disappointed when she mentioned that I could get that info from other sources, and she didn't even list those sources or provide a few bullet points.

The contents could have been shortened, since the same reasons are mentioned in different ways. Folks who haven't read or seen the books and TV shows mentioned in the book may have a hard time understanding the context.

TLDR: AI development is in the hands of a few who may not pay much attention to the virtues, morals, and common sense generated by AI models. This is problematic in the long term and affects humanity. Look to other sources to find out how to tackle this problem.

My own 2 cents: whether we work directly with AI or not, we must all strive to be more conscious and literate about how AI is developed, used, and governed, and about its limitations.
Profile Image for Optimism.
141 reviews3 followers
June 24, 2025
this one was... fine. a bit head in the clouds and philosophical for me at times, and trying too hard to be neutral about ai (which is understandable, i know where i stand on it and know that i'm heading into extremist territory with my beliefs)

anyways - one thing i did like was from the last chapter, talking about regulatory systems. the analogy given was with cars: imagine if we hadn't regulated them, if we were told that regulation would impede progress and innovation. if cars never had seatbelts, if they still used leaded gasoline, etc etc etc, and if all of a sudden all cars were now driverless! how would you feel about that? regulations are not always bad. so we can choose to regulate things.

or... we can not. and we can look at the running punchline of the billionaires lost to the deep in the titan submersible, which skirted any possible regulations to advance as far and as fast as possible. that scenario feels more like where we are right now with ai.

anyways, this book wasn't bad but i'm hoping the next ones on the subject i read are more my speed.
Profile Image for William Cornwell.
20 reviews8 followers
September 21, 2025
I admired Shannon Vallor’s previous book on AI, "Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting," and I appreciate her efforts as a public intellectual addressing AI ethics. She is a significant voice on this topic. Her most recent book, "The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking," has its virtues—and virtue ethics is a recurring theme throughout her work—yet it comes across like content from a magazine article stretched into a full book.

Her new book, which she admits is polemical, quickly becomes tiresome. Vallor is mostly critical of many established and emerging uses of AI. While she makes a good case for her skepticism at first, the passionate but repetitive restating of her main ideas, along with her heavy-handed use of the “AI as a mirror” metaphor, made me impatient for the book to end. Ironically, this is a book I wish I could have condensed by about three-quarters using ChatGPT.
Profile Image for Lucas .
20 reviews
August 13, 2024

Throughout the book, Shannon highlights repeatedly, through various metaphors and explanations, how "AI holds us frozen in place, fascinated by endless permutations of a reflected past that only the magic of marketing can disguise as the future." She speaks about how the tools needed for the immediate future cannot or should not be created from our past. She tears down ideas of superintelligence in emerging LLMs and reveals how they use datasets to predict the most probable response to the prompted input, highlighting that the outputs of an LLM can only be forged from the data it initially digested.

An easy read and a good introduction to these ideas; through relatable metaphors and accessible imagery, Shannon paints a picture of the current state of AI.

Definitely not my favourite but it was an enjoyable read.
Profile Image for Dave Drodge.
51 reviews15 followers
September 24, 2024
Take a look in the AI Mirror: it is worthwhile to reflect on what future we want from technology, not just passively accept what the Tech Bros are pushing. They'd argue that "objects in mirror are closer than they appear" when it comes to god-like Artificial General Intelligence (AGI), but the author argues convincingly against worrying about this existential threat and for concentrating on the real threats staring us in the face from the tech titans extracting value from us and the planet. As with the rearview mirror, she argues that regulation can and should keep the worst of capitalism in check to make life safer, as it has for cars and airplanes. I recommend this thought-provoking book as a reminder that we need to demand the future we want to live in, not passively accept the fate being sold to us by the tech bros.
Profile Image for lauren.
51 reviews
June 1, 2025
when this was assigned for phil 176 i was so apprehensive, because the paper that we read the week prior (henrich’s the WEIRDest people in the world) was so heavy on anthropological analysis and statistics that it made my head spin. this, however, is likely the first non-fiction book that i finished probably ever, and i thoroughly enjoyed it.

much less dense than i expected, uses plenty of examples to help anchor the points being made. the analogy of AI as a mind vs a body was so fantastic! brought up so many points that i had never considered, and the writing was surprisingly poetic and thoughtful, and i found myself wanting to sit with vallor’s words and thoughts for longer than i could (had to finish it for class). this ultimately was an airplane read (alaska to seattle, seattle to san diego) and that created a pretty intense atmosphere which in my opinion added to the experience.
Profile Image for Joseph Sverker.
Author 4 books62 followers
December 4, 2024
An incredibly important and enlightening book that shows why philosophers, rather than programmers and developers, should write about the dangers of AI. Rather than the somewhat tired and, I believe, exaggerated claim of an AI takeover, Vallor shows why the knowledge bias of AI is hugely problematic. She also argues convincingly that the AI is not thinking, and that in a way its output is not knowledge until a conscious person with awareness of reality and the world interprets the information the AI puts out. I think this should be compulsory reading for every single engineer.