
Taming Silicon Valley: How We Can Ensure That AI Works for Us

How Big Tech is taking advantage of us, how AI is making it worse, and how we can create a thriving, AI-positive world.

On balance, will AI help humanity or harm it? AI could revolutionize science, medicine, and technology, and deliver us a world of abundance and better health. Or it could be a disaster, leading to the downfall of democracy, or even our extinction. In Taming Silicon Valley, Gary Marcus, one of the most trusted voices in AI, explains that we still have a choice, and that the decisions we make now about AI will shape our next century. In this short but powerful manifesto, Marcus explains how Big Tech is taking advantage of us, how AI could make things much worse, and, most importantly, what we can do to safeguard our democracy, our society, and our future.

Marcus explains the potential, and the potential risks, of AI in the clearest possible terms, and shows how Big Tech has effectively captured policymakers. He begins by laying out what is lacking in current AI, what the greatest risks of AI are, and how Big Tech has been playing both the public and the government, before digging into why the US government has thus far been ineffective at reining in Big Tech. He then offers real tools for readers, including eight suggestions for what a coherent AI policy should look like, from data rights to layered AI oversight to meaningful tax reform, and closes with how ordinary citizens can push for what is so desperately needed.

Taming Silicon Valley is both a primer on how AI has gotten to its problematic present state and a book of activism in the tradition of Abbie Hoffman’s Steal This Book and Thomas Paine’s Common Sense. It is a deeply important book for our perilous historical moment that every concerned citizen must read.

240 pages, Paperback

Published September 17, 2024

63 people are currently reading
2,147 people want to read

About the author

Gary F. Marcus

15 books · 208 followers
Gary Marcus is an award-winning Professor of Psychology at New York University and director of the NYU Center for Child Language. He has written three books about the origins and nature of the human mind, including Kluge (Houghton Mifflin/Faber, 2008) and The Birth of the Mind (Basic Books, 2004, translated into six languages). He is also the editor of The Norton Psychology Reader and the author of numerous science publications in leading journals, such as Science, Nature, Cognition, and Psychological Science. He frequently writes for the general public in forums such as Wired, Discover, The Wall Street Journal, and The New York Times.

Ratings & Reviews



Community Reviews

5 stars: 58 (32%)
4 stars: 69 (38%)
3 stars: 38 (21%)
2 stars: 11 (6%)
1 star: 2 (1%)
Displaying 1 - 30 of 30 reviews
Ali
438 reviews
November 1, 2025
An easy read for the general audience. However, if you've been tracking the opposing views of the AI boomers vs. doomers, there is nothing new here, neither in terms of problems nor proposed solutions. Regulating emerging technologies has always been a challenge, and appropriate guardrails arrive far too late, after much collateral damage; even then, many measures have unintended consequences.
Matt Berkowitz
92 reviews · 63 followers
October 8, 2024
Taming Silicon Valley is an excellent book that splits the difference between AI doomers (those who are concerned about existential risk from AI) and techno-utopians who seem to shrug off any risks from AI. Though the book is not perfect, Marcus is a very clear, articulate writer: even when you don’t agree, he supplies very good arguments and makes his positions crystal clear.

He starts part I by setting the stage in chapter 1: if you’re familiar with AI/LLMs, this will just be a fun review. Marcus defines some key terms: artificial intelligence, intelligence itself, LLMs, and generative AI. He talks about the black-box nature of generative AI, and how difficult it is to know why these systems give the answers they do, especially when they hallucinate (produce inaccurate responses) on queries we think they really ought to get right.

With the stage set, Marcus then goes deeper into generative AI. He refers to generative AI/LLMs as “authoritative bullshit” machines, since they so often give confident but wrong answers. “ChatGPT is a bullshitter” (p. 37) in the Harry Frankfurt sense of bullshit: “The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care if what they say is true or false.” Marcus then gives an amusing batch of example hallucinations, the kind anyone who uses LLMs regularly will routinely encounter, and notes that they are statistical in nature. Since LLMs use massive amounts of data to predict the next most probable word or string, they will routinely predict things incorrectly, yet not accompany their predictions with any metric of uncertainty (which especially annoys me as a statistician). Depending on the query, hallucination rates can be astronomical.

Marcus thus argues that these are clearly not the AIs we should ideally want. Despite the tremendous recent progress in generative AI, Marcus points out that the approach of “scaling”—larger datasets and more computing power—is likely reaching its limits. For the progress to continue, for hallucinations to be reduced to any significant degree, some other approach will be necessary (he’s a big proponent of symbolic AI).

Chapter 3 is all about AI risks. Marcus discusses 12 of them: disinformation, market manipulation, accidental misinformation, defamation, nonconsensual deepfakes, accelerating crime, cybersecurity and bioweapons, bias and discrimination, privacy and data leaks, intellectual property taken without consent, over-reliance on unreliable systems, and environmental costs. These all struck me as real challenges that ought to be addressed through various forms of regulation. Marcus thinks smart regulation needs to be cultivated as soon as possible to get ahead of the proliferation of many of these issues that are already showing themselves, such as intellectual property theft and defamation.

Of note, existential risk posed by AI is not on Marcus’ list. It’s clear he doesn’t buy into AI doomerism, which posits there’s a nonignorable risk posed by superintelligent AI—if not in the near future, then more in the distant future—in terms of AI wiping us all out due to AI goal misalignment. Marcus doesn’t think this is a realistic risk (and I agree), but, to the extent that doomerism leads to regulation of more realistic risks, the net effect of doomers’ activities might be positive. Or they might distract from the very tangible, near-term/present risks that AIs pose.

Part II is all about the problems posed by an untamed Silicon Valley: how big tech manipulates public opinion and government policy. Much of the commentary is not particularly novel, surprising, or even controversial: how big tech prioritizes profits by maximizing user engagement on social media, often through deliberately manipulative design features. Marcus uses our failure to regulate social media companies to argue that we shouldn’t repeat similar mistakes by allowing big tech AI companies to promulgate their systems without adequate safety precautions (he returns to this issue in part III when he discusses his policy recommendations). He then discusses the unrealistic overpromising by OpenAI and others regarding current AI’s capabilities and the imminence of artificial general intelligence (AGI). Moreover, Marcus criticizes the tech industry for minimizing the dangers enumerated in chapter 3, comparing its evasion of responsibility to the deflection tactics Big Tobacco used more than half a century earlier. Likewise, Marcus notes the tech sector’s aggressive AI lobbying efforts and political contributions intended to water down or delay meaningful regulation.

The most interesting part of this chapter for me was Marcus’ quoting of Marc Andreessen’s insincere and factually incorrect argument that regulating AI companies will stifle innovation: arguing that things like “trust and safety” and “AI ethics” were part of a “mass demoralization” campaign aimed at “deceleration,” and that anyone who worried about AI risk (and hence thought about regulation) was “suffering from . . . a witches’ brew of resentment, bitterness, and rage that is causing them to hold mistaken values, values that are damaging to both themselves and the people they care about.”

Marcus counter-argues by quoting several examples from “Regulating Digital Industries” (by Mark MacCarthy) showing how regulation was crucial for getting radio companies off the ground in the 1920s and the aviation industry in the 1930s (p. 98); imagine these industries without intelligent regulation of broadcasting rights on certain airwaves and of airspace coordination. As Marcus summarizes, “Regulatory capture—rules written by the very companies we need to regulate in order to consolidate their power—and inertia, in which nothing gets passed at all, are the twin enemies. And as it stands, we are losing the battle” (p. 101).

Part III comprises a bunch of shorter chapters on data, rights, privacy, transparency, and liability, followed by Marcus’ policy prescriptions to regulate the industry.

For example, re: transparency, Marcus says we, as consumers, should demand the following: data transparency, algorithmic transparency, source transparency, environmental and labor transparency, and corporate transparency. These things either don’t exist currently, or these companies’ activities/decisions in these regards are very opaque.

Far from wanting to halt or slow down AI development, Marcus simply advocates for making it work better for humanity. Expecting private companies to regulate themselves—at least without concerted external pressure—is unrealistic, at least with respect to the many risks outlined throughout the book.

This is an excellent, thoughtful book that discusses the many shortcomings of current, generative AI (which are due to their inherent architecture), enumerates many near-term and already present risks/problems, and urges thoughtful ways to tackle them. While many parts are at least somewhat arguable, the book is always an enlightening and engaging read. Highly recommended.
Kai
13 reviews
October 14, 2024
Good general-audience read; I learned a ton about the hidden incentives for government decision-makers to reduce regulation of big tech. A solid amount of quotes that really drove home the hypocrisy of many individuals at the AI regulation table.

The roadmap for what we should do as individuals is overall less concrete and more of the same to me, which I felt made the conclusion weak. The highlight among these was the proposal of establishing an agency to churn out AI regulation rules that match the speed of the field, which I hope we will have soon.
17 reviews · 1 follower
October 24, 2025
Concise, engaging read about the current and (likely) future perils of AI. Not doomsday, but the tech bros are in need of some transparency, accountability and regulation.
LeastTorque
954 reviews · 18 followers
November 23, 2025
“It is no exaggeration to say that choices in the next few years will shape the next century.”

The author wrote this in 2024 before the election. So I guess the next century is getting shaped into a techno dystopia unless something major happens to unshape it.

I checked out his Substack, and he’s still fighting the good fight; generative AI is getting its ass kicked a good bit this month (November 2025).

Well, we’ll see what happens next. Meanwhile, this book gives thorough coverage to the “negative externalities” of LLMs and some ideas for fighting them.

I wish I had more hope that any of them will ever happen. But we all have to try. Write your letters.

Jung
1,937 reviews · 44 followers
February 28, 2025
In a world increasingly dominated by technology, the subtle influence of artificial intelligence has permeated nearly every facet of our lives, often without us realizing the extent of its reach. When you unlock your phone, check your emails, or scroll through social media, invisible algorithms continuously tailor your experience by suggesting content and modifying what you see. Gary F. Marcus’s book, "Taming Silicon Valley: How We Can Ensure That AI Works for Us", invites readers to explore the real capacities of today’s AI systems and to understand the inner workings behind their smooth interfaces and persuasive marketing. The work underscores the importance of becoming vigilant about the technology that shapes our perceptions, urging us to develop critical awareness, learn to spot AI-generated misinformation, safeguard our personal data, and choose our digital tools with care. It asserts that a combination of knowledge and proactive measures is our best defense against the risks of manipulation by powerful tech giants.

At its core, the book argues that the promise of artificial intelligence is marred by significant limitations that can lead to dangerous consequences. The narrative details how AI systems, such as popular chatbots and creative image generators, rely primarily on statistical predictions rather than true comprehension or reasoning. This means that while these systems can generate impressively fluent text and striking images, they often commit glaring errors, sometimes fabricating information with unwarranted certainty. An example is provided through a well-known chatbot that, despite its polished dialogue, produces misleading or outright false statements, such as creating fictitious scandals or making basic miscalculations about weight comparisons. Visual AI systems, too, are shown to falter when tasked with producing coherent images that align with human expectations. One incident mentioned involves an AI misinterpreting the delicate concept of a compassionate embrace between a wise elder and a mythical creature, resulting instead in an image where the unicorn’s horn inflicts harm. These instances are not just technical blips; they highlight fundamental shortcomings in current AI—shortcomings that can have real-world repercussions.

The discussion then turns to the broader consequences of deploying such unreliable technology. The book makes it clear that when flawed AI systems are integrated into critical sectors like law and medicine, the stakes become alarmingly high. Legal AI tools have, on occasion, cited non-existent court cases, forcing legal professionals into embarrassing corrections and apologies. Similarly, medical chatbots, tasked with offering health advice, have been found to deliver accurate information less than half the time. These examples illustrate that the shortcomings of AI are not limited to theoretical discussions but have tangible effects that can undermine trust and even endanger lives. Economic pressures in the tech industry often push companies to prioritize quick deployment over the necessary, careful development of robust AI. The drive for rapid scalability frequently overlooks essential improvements in machine reasoning and comprehension, paving the way for a host of social risks—from manipulated elections to privacy breaches and more.

The risks extend far beyond isolated technical errors. As AI systems become more pervasive, the speed and sophistication with which they can generate misleading content pose significant threats to public trust. The book recounts incidents during elections where fabricated audio recordings were circulated, falsely alleging electoral fraud. Such deceptions have not only the potential to disrupt democratic processes but also to redefine the nature of information warfare on a global scale. On the personal level, criminals have exploited advanced voice replication and video synthesis technologies to conduct elaborate scams, deceiving even seasoned professionals with fake calls that mimic trusted voices. This capacity to convincingly imitate reality has far-reaching implications, from ruining reputations to causing significant financial losses. Additionally, the proliferation of deepfake content has made it disturbingly easy to undermine personal dignity and manipulate public figures, creating a climate of distrust that extends into schools, workplaces, and even among family members.

Another major concern is the way modern AI operates as a vast network for gathering personal data. Every interaction with an AI system becomes a potential source of data that can be exploited without the user’s knowledge or consent. Car companies, for instance, can now track detailed aspects of an individual’s daily life—ranging from their routes to personal messages and other sensitive information—often without clear rules or proper authorization. Instances have even emerged where conversational AI has inadvertently shared private exchanges with unrelated users, blurring the boundaries between public and private information. Such breaches of trust underscore a deeper issue: the technical flaws seen in AI are not just isolated errors but are symptomatic of a broader pattern of corporate practices that prioritize profit over ethical responsibility.

Central to the book’s thesis is the manipulation exerted by Silicon Valley’s tech giants. These companies, which once touted idealistic principles such as “Don’t be evil” or the pursuit of public good, have gradually ceded ground to profit-driven motives. Major players in the tech industry have shifted from initial promises of innovation for the benefit of all to strategies that favor market expansion and shareholder value. The narrative reveals how complex corporate deals, strategic public relations campaigns, and even the manipulation of governmental policies serve to mask the inherent risks of AI while pushing potentially dangerous systems to market. This corporate maneuvering is not confined to a single company; it is a systemic issue where the manipulation of public opinion, academic research, and political lobbying converges to create an environment where the true implications of AI are obscured. The revolving door between governmental positions and tech leadership further complicates the issue, as conflicts of interest compromise efforts to impose meaningful oversight.

Against this backdrop of corporate influence, the book outlines three essential protections that are urgently needed: data rights, privacy protection, and transparency. These safeguards are presented as the only viable means to ensure that AI serves public interests rather than merely boosting corporate profits. For instance, data rights are likened to a system of royalties in the music industry—small, recurring payments that acknowledge and compensate the use of personal or creative work in AI training. In parallel, robust privacy measures are needed to prevent the unauthorized collection and misuse of personal information, while transparency requirements would force companies to disclose details about their data sources, testing protocols, and any shortcomings in their systems. This trio of protections would create a framework in which companies are held accountable for the impacts of their technology, potentially slowing down the pace of development but resulting in systems that are safer and more respectful of human rights.

Moreover, the book stresses that meaningful change is not solely the responsibility of governments or corporations; it also requires active participation from the public. The narrative highlights inspiring instances where citizen movements and grassroots organizations have successfully challenged corporate overreach. One striking example involves a controversial urban development project, where persistent local activism led to the cancellation of a large-scale initiative driven by a tech behemoth. Other cases illustrate how coordinated consumer choices and professional boycotts have compelled companies to reexamine their practices and implement safer, more ethical technologies. These stories reinforce the message that collective action, whether through protests, public debates, or informed consumer behavior, is a powerful tool in countering the negative influences of AI.

In examining the current landscape, the book also acknowledges that while critics argue that stringent regulations might stifle innovation, history shows that well-designed safety measures can actually foster better outcomes. Just as safety regulations in the pharmaceutical industry have led to the development of safer medicines, robust oversight in the tech industry can guide AI development in a way that upholds fundamental human rights and minimizes risks. The potential for AI to improve lives is immense, but only if its development is accompanied by clear, enforceable rules that protect individual freedoms and ensure accountability.

In conclusion, "Taming Silicon Valley" by Gary F. Marcus presents a critical examination of the promises and perils of contemporary artificial intelligence. The book makes a compelling case that while AI has the potential to revolutionize how we live, it is currently undermined by significant technical flaws and a corporate culture that prioritizes profit over public welfare. Through a combination of detailed analysis and real-world examples, the work reveals how AI systems often fail to understand context or reason logically, leading to errors that have far-reaching consequences. At the same time, it exposes the manipulative tactics employed by Silicon Valley to maintain control over both the technology and public perception. The author argues that the solution lies in a tripartite approach—upholding data rights, enforcing privacy protections, and ensuring transparency in AI development. Equally important is the role of informed and organized citizenry in holding tech giants accountable. The future of AI is not predetermined; it will be shaped by the choices we make today. By demanding rigorous safeguards and taking collective action, society can steer AI toward outcomes that benefit everyone rather than serving narrow corporate interests. The book leaves us with a clear message: to harness the true potential of artificial intelligence, we must act now, armed with knowledge and a commitment to ethical principles, so that technology ultimately works for us, not against us.
Richard F.
141 reviews · 2 followers
December 6, 2024
An important book, but perhaps a bit too academic; it will be mostly interesting to see how the suggestions play out over the next few years.

Gary Marcus is a respected figure in the field of AI, and I listened to a couple of podcasts featuring him alongside this book. Overall he speaks a lot of truths in my opinion and my views are aligned with his.

AI is and will be a valuable tool for humans, but there is great risk that comes with a tool this powerful. The underlying theme is that the risk mirrors the way social media crept up on society: the steps taken by government to ensure it was provided responsibly were too little, too late. Society, in short, wasn’t prepared.

AI represents a similar 'threat' in that its roll-out is driven by Silicon Valley companies and they in turn are driven by investors and shareholder profits. It's an unfortunate fact that making millions with new technology has become a 'normal' goal, and usurps benefitting society in general.

Marcus's book is a deep dive into this and is well presented, even though it is really in the form of an expanded essay. The final part details steps that Marcus thinks need to be taken to maximise the benefits to society over the cash ending up in investors' pockets, but this is where the book reads a bit dry. In order to get the required buy-in, I think he could have done a better job of writing this section.

However this is a prescient and important book at the current time. We will see how it holds up in the next few years.
Jessica Imhauser
89 reviews
January 5, 2025
Marcus’ book is split into three sections: the first describes the nature of large language models and the tech industry, as well as the risks associated with today’s AI models; the second describes how AI and the associated tech giants have manipulated the public and government; and the third is a call to action, proposing several measures to reform the evolution of AI to promote the best interests of all of society.

I liked that Marcus used real world data or events for nearly every point he brought up, to show that his concerns for Generative AI have already manifested themselves and prove the chaos that is going on behind the scenes of the glamorized AI models. Marcus also brings up the words and promises of tech celebrities like Zuckerberg, Altman, and Musk, and follows through on the reality, proving that they cannot always be trusted.

I was generally aligned with Marcus’ POVs throughout the story, especially the need for consent and compensation of works used to train AI models, and of needing to reshape the way AI systems function so that they can harbor true intelligence for long-term success rather than the exploitation of privacy and data for monetary gain. There were some views I was skeptical of, but Marcus always provided enough evidence to back up any arguments I may not have agreed with. Overall, it was a worthwhile read, and is a great overview of the issues with today’s AI systems.
Anthony
278 reviews · 16 followers
February 22, 2025
Very readable, incomplete, and sometimes wrong. Taming Silicon Valley was published in 2024 (i.e., eons ago), but the pace of GenAI development means that month-old content has already gone stale. So there are some big developments missing from the book, which was a rush job of Marcus' blog posts and other public writing, kinda slapped into an opportunistically orchestrated manuscript. He's just seizing the moment, but at least he's not a hype-man. Quite the opposite - the book is replete with examples of tech overselling and under-delivering, of the social and environmental costs of AI, and of the 'marvel' of LLM output all being based on IP theft. Essentially, AI companies are selling you a transmogrification of all the books, movies, audio, online content, Facebook uploads, etc., with not a single dime given to all the creators, who in any other case of property theft would be suing for millions.

What Marcus gets wrong: assuming more of a threat from open-source than proprietary models. With the latest DeepSeek development, it's clear that training time, energy demand, and model capacity can all be improved by refining existing infrastructure instead of building from scratch because every company has moated off their models.

Marcus has an active online presence, so if these topics interest you then you'd be well served by subscribing to his Substack.
37 reviews
November 22, 2024
An excellent book covering both the technical and societal challenges that come with big tech and its current focus on LLMs (large language models). LLMs have issues with reliability, which aren't being addressed properly in the public debate, despite the majority of companies looking to include them in their business processes.
The first part of the book covers these technical shortcomings and the difficulties with fixing them given the current approach to AI. The second part discusses how AI should be regulated to ensure more reliable and fair AI.

The author discusses how deep learning (and by extension LLMs) has overshadowed other, more logical and predictable types of AI, with the majority of research funding now going to deep learning. That's a shame, since both types of AI are likely necessary to achieve safe and reliable AI. As the author points out, it's jarring how billions of dollars have gone into building LLMs and yet the problem of hallucinations has yet to be solved, which undermines the trust we can put in them.

I found the second (political) part to be a bit too verbose, but I really admire the book overall (and the fact that the author was able to write such a comprehensive book in such a short time).
Antonia
8 reviews
December 8, 2025
For all its urgency, this book feels more like a siren than a solution.

Marcus, a cognitive scientist known for his skepticism of AI hype, paints a dystopian picture of Silicon Valley’s unchecked power. He argues that tech leaders prioritize disruption over responsibility, and that their engineering-first mindset has led to a host of societal problems: from misinformation to surveillance to the overpromising of artificial intelligence.

While these concerns are valid and increasingly relevant, the book leans heavily into alarmism, and it almost reads to me like sensationalist journalism. Marcus’s tone can feel exaggerated, with sweeping generalizations about the intentions and competence of tech companies. The nuance often gets lost in the rush to warn readers of impending doom.

The proposed reforms, including more regulation, ethical oversight, and interdisciplinary collaboration, are sensible but not particularly novel (that may also be because I'm reading this in December 2025). Without a clear roadmap for implementation, they come off as idealistic rather than actionable.

Stylistically, the book is readable but not especially engaging. It lacks the narrative drive or fresh insight that might elevate it above other critiques in the genre.
Author · 1 book · 12 followers
January 18, 2025
Taming Silicon Valley brought me both joy and anger. The book is incredibly well written, and Gary Marcus’s voice is dynamic and to the point. Usually that matters more for fiction, but it helps this non-fiction book because it reads as a thrilling exposé of all the shady stuff Silicon Valley has been up to.

I have talked about these issues since the late 2010s, and even quit my job to write a sci-fi novel on exactly these topics, but I felt crazy because no one seemed to get why I thought unbridled big tech is the road to an actual dystopia.

I felt incredibly happy to get my non-expert opinions validated by an actual expert, Gary Marcus, who proves his points with numerous quotes, news reports, and personal experience.

But the book also angered me towards Silicon Valley, Governments, and us, the consumers. With recent geopolitical developments, I agree with the author that Big Tech must be regulated ASAP through independent institutions because the technological potential to do harm is already manifesting.

In short, this relatively short book proves its points stylistically, factually, and emotionally.
9 reviews
February 25, 2025
(4.5 stars)

Author makes an incredibly compelling case for further regulation of AI companies. Normally I don’t agree with the idea that if we don’t act on something immediately, we are altering the course of many decades to come, but it seems pretty clear here. Every time he makes an argument, it is clearly communicated throughout the brief chapters, and his focus never wanders. The entry point to the book is low: if this is your first time learning about how generative AI works, he does a great job of introducing it and then describing it in detail.

My only minor critique would be that some of the examples used ended up being blog posts and (very long) twitter threads. Obviously the book needed to be written quickly considering how the landscape can change in a few months, but I wish there was more substantial source material at certain points.
Synthia Salomon
1,225 reviews · 21 followers
March 1, 2025
Big changes

“artificial intelligence, while promising revolutionary advances, currently operates as a deeply flawed technology that threatens our privacy, security, and ability to trust what we see and hear.

The problems run deep – from AI systems that make basic logical errors and spread misinformation, to tech giants which prioritize profits over safety while blocking real oversight. Yet there’s hope through collective action, as shown by successful citizen movements that have forced changes in AI development. Through consumer choices, organized activism, and public participation in policy discussions, we can still shape how this technology evolves – but only if we act now while we still have the chance to ensure AI serves the public good rather than just corporate interests.”
Diego Dotta
252 reviews · 9 followers
December 18, 2024
This book dives into how fast AI is changing and what that means, especially in the constantly shifting scene here in Silicon Valley. It brings up a really interesting question: how do you even capture the vibe of a tech that changes so fast? It’s kind of funny to think about publishing a book on AI in 2024, since the stuff inside can become outdated super quickly, especially when talking about the challenges AI throws at us.

Still, the book has some insights that make you think about our part in this tech revolution. It gets you to ponder whether you’re part of the problem or part of the solution, 🦄🤓
Désirée
67 reviews · 5 followers
August 20, 2025
An OK book if you're looking for an introduction to the current state of the AI industry, the threats AI poses to our society, and how to solve them. Very simplified, maybe too much. There's a whole lot of repetition, and I often felt like the book was just clarifying things that most of us have already heard, or that could be intuitively understood just by using AI. Most of the new and more specific information was about current legislation on AI, which I personally didn't care about that much.
Christoph Kappel
490 reviews · 12 followers
March 10, 2025
This whole book sometimes reads like an article by Cory Doctorow: it is really well written, each point is easy to grasp, there are lots of real-world examples, and it is highly dystopian and sometimes sarcastic - loving it!

The funny thing is that the author is normally an evangelist for AI; here he strongly discourages the current course of genAI - which basically matches my own mindset about the whole bubble right now.
Mikhail Filatov
392 reviews · 19 followers
November 14, 2025
More like a manifesto, written in a hurry (as the author himself said) after his involvement in Congressional debates about AI.
It is also not clear whether the author believes in Gen AI or not. From one perspective he points out a lot of limitations, but at the same time he sees huge risks for jobs - why is that?
I had heard of him as a great "AI skeptic", but he didn't really present his points convincingly.
Kathy Buchko
150 reviews · 3 followers
November 10, 2025
An important, informative book about the risks we face as a nation if we do not carefully regulate and monitor the actions of Big Tech companies, specifically in regard to AI. The author, Gary Marcus, makes a strong case for the need for an independent watchdog "AI Agency" to protect the privacy and safety of U.S. citizens.
Christian
177 reviews · 37 followers
November 29, 2025
A good manifesto that should be approachable to those outside of tech. I'm skeptical that there are many people genuinely curious enough about the downsides of AI to read this. And those in the industry likely already know all of what the author discusses.
Jarkko Laine
760 reviews · 26 followers
November 26, 2024
A very useful read in a day when AI seems to be everywhere. Marcus makes a compelling case for asking for more and demanding safe, reliable, and fair AI.
Dani Ollé
206 reviews · 8 followers
December 1, 2024
A detailed, activist but reasoned, highly readable call for AI regulation
Petra Vizjak
10 reviews · 1 follower
February 9, 2025
The author has absolutely convinced me of the need for compliance and regulation in the field of AI. Highly recommended!
James G.
462 reviews · 4 followers
May 26, 2025
A truth-telling, good book, but tough to digest at the present moment. Just as writing about the tech can't keep pace with it, it seems neither can we keep pace with the nefarious forces of power and capital.
Angel Grimalt
129 reviews · 9 followers
January 15, 2025
A key reading for the context of AI governance

Gary Marcus delivers a raw and critical examination of the current state of AI and the companies that develop and promote it. He underscores that most of these companies are driven by incentives aligned with profitability and growth at the expense of societal well-being. Instead of benefiting society, they amplify, among many other problems, surveillance capitalism and the attention economy, leading to increased misinformation, polarization, and negative impacts on users' mental health and cybersecurity. He highlights how the rapid pace of AI deployment outstrips safety measures, driven by growth and profits.

Backed by extensive documentation—including academic articles, tweets, news reports, and other books—Marcus reinforces each point he raises with solid evidence. He presents the main risks that this globally scaled technology brings, affecting society at every level.

Offering his perspective on the moral and ethical decline of Silicon Valley, Marcus reflects on how it was originally founded with the purpose of positively impacting the world.

His criticism is harsh toward Big Tech and the venture capitalists who push the frontier without a responsible vision and without true containment measures. Marcus argues that there are more promises than realities in the AI industry, with hype often exceeding concrete results.

Finally, he provides a guide - a set of demands we must make of relevant actors such as the government and legislators - to regulate this industry and hold it responsible, avoiding impunity for the direct negative effects and externalities it causes.

We are at a critical moment where each and every one of us must take action - from activism and communication to voting for politicians who understand the worrying inflection point at which we find ourselves as a society.
