
AI Needs You: How We Can Change AI's Future and Save Our Own

A humanist manifesto for the age of AI

Artificial intelligence may be the most transformative technology of our time. As AI’s power grows, so does the need to figure out what—and who—this technology is really for. AI Needs You argues that it is critical for society to take the lead in answering this urgent question and ensuring that AI fulfills its promise.

Verity Harding draws inspiring lessons from the histories of three twentieth-century tech revolutions—the space race, in vitro fertilization, and the internet—to empower each of us to join the conversation about AI and its possible futures. Sharing her perspective as a leading insider in technology and politics, she rejects the dominant narrative, which often likens AI’s advent to that of the atomic bomb. History points the way to an achievable future in which democratically determined values guide AI to be peaceful in its intent; to embrace limitations; to serve purpose, not profit; and to be firmly rooted in societal trust.

AI Needs You gives us hope that we, the people, can imbue AI with a deep intentionality that reflects our best values, ideals, and interests, and that serves the public good. AI will permeate our lives in unforeseeable ways, but it is clear that the shape of AI’s future—and of our own—cannot be left only to those building it. It is up to us to guide this technology away from our worst fears and toward a future that we can trust and believe in.

288 pages, Hardcover

Published March 12, 2024


About the author

Verity Harding

1 book · 9 followers

Ratings & Reviews


Community Reviews

5 stars: 42 (17%)
4 stars: 88 (36%)
3 stars: 83 (34%)
2 stars: 23 (9%)
1 star: 6 (2%)
Jason Furman
1,408 reviews · 1,654 followers
March 5, 2024
There are lots and lots of opinions on policy towards and governance of AI. A lot of those opinions are based on recycling the same sets of arguments or facts. Some of those opinions are that others should not have opinions on these matters. Now enter Verity Harding, who has worked in government, industry, and at universities, with a book that is truly additive, bringing new ideas and insights to bear on what is already starting to feel like an old debate. It is also a really fun and stimulating read.

The bulk of Harding's book is a history of the governance, mostly by government, of three postwar technologies: space exploration, IVF and human embryonic research, and the internet. Each of these is interesting in its own right, filled with lively characters, big stakes, and something that is much harder to see ex post—a sense of the many different, and worse, possibilities and paths that were not taken because of the choices that were made.

What emerges is a subtle interplay of contingency, individual government actions, the importance of ethics as a North Star and motivation, diplomacy (in some cases), and the participation, and in some cases centrality, of businesses. The result was a treaty declaring that space should remain demilitarized, a broad societal consensus in the UK on embryonic research, and the extraordinary rise of the internet as a global system that is not controlled by any one country or corporation (in part because of wise choices made in the United States).

Harding links each of these histories to its relevance, but also its limitations, for thinking about AI. The individual histories are bracketed by a discussion of the rise of digital platforms in the Bay Area, Harding's thrill and disappointment with them, and then a discussion of what lessons we should take from all of it.

Harding’s commitment is not to a specific policy but instead to a process that respects the importance of government but also the essential role of business, the need for ethics on the part of both players, and a passionate belief that “you” have a role to play as well.
Kate
471 reviews · 148 followers
April 15, 2024
3.5 stars.

AI feels like an afterthought in this book, and using it in the title feels a little click-bait-y. The AI components are really only in the intro and conclusion sections, with a few random sentences sprinkled throughout. And the AI-focused content is pretty common knowledge at this point rather than offering new insights.

That said, it is a very interesting book nonetheless. I learned a lot about the history of technological innovations and the role of governmental oversight and policy, particularly about things that were before my time (like the space race).

But, I do think the title of this book is a little misleading. If you're looking for a book about AI, this isn't it.

If this book had a different title, it would be a 4-5 star book. But it feels like the author wrote this book and then decided to inject a little AI to tie it to her thesis and make it more marketable.

Thanks to Libro.fm and Princeton Audio for the ALC.
Yasemin
79 reviews · 7 followers
October 17, 2025
A quite thought-provoking set of opinions on the governance of AI and the politics behind it, as well as informative content on how governance evolved, and is still evolving, for some other technological developments.
Susan
220 reviews
August 26, 2024
A very interesting read about how democratic countries arrived at policy decisions for past novel technologies (the space race, IVF, and the internet) and, through that history, an exploration of the path forward for AI policymaking.

MLK warned against the dangers of equating scientific and technological progress with human advancement, because we also have to develop morally and spiritually to keep up. The examples in the book draw on the lessons learned from the past, and the author urges everyone to participate and provide input for AI policies.

The author is quite biased against the tech industry's ability to do what's best for broader society, and tries to counter the stereotypical image of government as slow, bureaucratic, and unable to understand the deep technical aspects of new developments. I actually would like to see more examples where the government botched a past policy and what we can learn from that, instead of only positive examples.
Rafael Angelo
25 reviews · 1 follower
March 21, 2024
I enjoyed reading it, but in my opinion the book does not accomplish what it sets out to do.
The book details three technological breakthroughs (IVF, the internet/ICANN, and the space race), which are interesting topics nevertheless.
However, the book is very light on what's supposed to be the centerpiece of the narrative - AI.
I think the author could have taken a stronger view on what the possible solutions for managing AI's technological advancement are.
In the absence of that, the book becomes a rear-view-mirror description of other technology trends, which are not necessarily even a good proxy for what's happening with AI.
Mike Scialom
Author · 3 books · 5 followers
June 16, 2024
'AI Needs You' is a fascinating read, though not necessarily for the reasons you might expect – what the title suggests isn’t quite what you actually get.

That isn’t to say this is a bad book – quite the contrary, it’s a detailed and informative guide to the history of how the internet was set up, from the earliest days of computing in the 1950s to today’s conundrums.

The historical theme-setting compares the arrival of AI to other transformative innovations such as IVF, the space race – the author applauds ‘the Magna Carta of space’, which disallowed territorial claims beyond the Earth’s atmosphere – and the nuclear age. The scene-setting is so comprehensive that it takes up most of the book before we arrive in the present. If it had been called ‘AI: How did we get to where we are today?’ then it would get nine out of 10. But it isn’t; it’s called AI Needs You.

Author Verity Harding, director of the AI and geopolitics project at the Bennett Institute for Public Policy at the University of Cambridge, is no ordinary academic. Back in 2013, she was a SPAD (special advisor) to Nick Clegg, the deputy prime minister at the time. Then she left to join Google, where Larry Page was her new boss. She spent a decade at Alphabet, latterly as DeepMind’s first global head of public policy, where in 2017 she co-founded the company’s research and ethics unit, as well as the independent multi-stakeholder organisation, Partnership on AI.

Indeed, her book offers many insights into the mindset of a Silicon Valley-enabled writer, since she is closer to AI’s unstoppable momentum than one might at first presume...

The mores of Silicon Valley are adhered to in 'AI Needs You' – a determination to be positive, with little or no mention of topics that would upset the world’s most sensitive cadre of entrepreneurs. Ergo there’s nothing on why big tech should be paying more tax, or why they should invest in a massive mental health programme to help the millions of workers set to lose their livelihoods to AI – and no serious suggestions as to how governments could establish proper oversight of the AI era.

Lawmakers must find the will to confront Big Tech, because AI screams ‘mortal danger!’ to the very governments that are green-lighting its use (and the democracy they depend on). However, the author does describe the rules which allow the Valley companies to do whatever they like, and acknowledges that, if that seemed OK when their schtick began 20 or 30 years ago, today we know enough to realise their business models are not to be trusted.

“No doubt Silicon Valley has a culture problem,” the author writes. “Trust is waning. Greed is winning.” She describes how Amazon treats its workers – the workers are not being replaced by AI-enabled robots, rather the workforce is being rewired in a different way. “You’re sort of like a robot but in human form,” says one quote from an Amazon manager in the excellent introduction. AI needs you indeed!

We’re at a crossroads, says Verity. “Do we want to set any boundaries at all before we descend into automated, unscrutinised, unaccountable monitoring?” she asks of the current state of play – but the answer is equivocal.

We might want some safeguards for our children and for our jobs, but we’ve no idea how to create them. Verity’s suggestion is to write to elected representatives – MP, council – plus speak to school officials, and join a union to use your voice. All that is perfectly valid but the actualité of AI is that the genie is already out of the bottle, so it’s a rearguard action we’re fighting.

“But,” she writes, “we can begin to set an example for the world by establishing conditions about how AI will not be used. Quick progress could be made, for example, by cracking down on AI-enabled surveillance.” But later we read that “AI-enabled surveillance is perfectly legal in most places, and if there are governments and private clients demanding it, then more of it will get built”. So in fact there’s zero prospect of a swift ‘crackdown’ on excessive use of AI: the suggestion is perhaps a mere sop for the feeble-minded?

“Without any regulation governing their use,” concludes the author, “educational institutions, private companies and [retail] stores are free to use AI-enabled biometric data programs that can monitor voice patterns, conduct gait analysis, and analyse facial expressions.”

So in what sense can an individual make a difference? At the end of this book, it’s apparent that AI needs you as a source of data – to control, monetise and/or manipulate you. But does the AI industry need your opinions, your insights or your preferences? No, it doesn’t. Are politicians or corporations listening to your concerns? Not really.

The takeaway from 'AI Needs You' (Princeton University Press, £20) is that the challenges of AI won’t be addressed until we understand the scale of its interference in our daily lives already – and by then it could be too late.
Sarah Cupitt
848 reviews · 46 followers
December 23, 2025
Good read for BT - 3.5? Rounded to 4 stars

AI is likely to evolve in ways we can't fully predict, requiring ongoing ethical reflection and adaptation. In order for AI to reflect the best, instead of the worst, of humanity, the conversation must include everyone. How might AI challenge your current understanding of intelligence, creativity, or decision-making? What ethical considerations should guide its development? How can we ensure that AI benefits are distributed fairly across society?

Artificial Intelligence is rapidly transforming our world, offering immense potential for progress in fields like healthcare and climate science, while also posing risks like privacy invasion and bias amplification. Just like previous technological revolutions, public engagement is crucial in shaping AI's development to align with societal values and ethical considerations. Everyone, regardless of technical expertise, has valuable insights to contribute to the AI conversation based on their everyday experiences with the technology. By staying informed, participating in public discussions, and making conscious choices about data usage, individuals can play a significant role in ensuring AI benefits society as a whole rather than exacerbating existing inequalities.

Notes:
- everyone has a role to play in shaping its future
- From the algorithms that curate our social media feeds to the chatbots that answer our customer service queries, AI is already woven into the fabric of our daily lives. And this is just the beginning.
- As AI systems become more sophisticated, they hold the potential to revolutionize healthcare, accelerate scientific discoveries, and address urgent global challenges like climate change. Yet with great power comes great responsibility — the decisions we make today about AI development will shape the very future of humanity.
- the enormous potential for this revolutionary technology to amplify existing inequalities and societal faults.
- In other words, artificial intelligence can reflect both our human aspirations and our flaws.

Consider how AI algorithms already shape your online experience. They curate your social media feed, recommend products, and even influence the news you see. While this personalization can be convenient, it also creates echo chambers, reinforcing existing beliefs and deepening societal divides.

In the job market, AI-powered recruitment tools promise efficiency. However, if they’re trained on biased historical data, these systems might perpetuate discrimination, favoring candidates who fit traditional molds and overlooking diverse talent.

Healthcare AI shows similar duality. It can analyze medical images with incredible accuracy, potentially saving lives. Yet, if the training data lacks diversity, these systems might be less effective for underrepresented groups, widening health disparities.

The author argues that recognizing this shadow side is crucial. By acknowledging AI's potential to mirror and magnify human flaws, we can work to counteract these tendencies. This means diversifying the teams developing AI, carefully scrutinizing training data for biases, and implementing robust ethical guidelines.

As AI becomes more integrated into your daily life, you have a role to play too. By staying informed, questioning the AI systems you interact with, and advocating for responsible development, you can help shape a future where AI amplifies our best qualities, not our worst.

In agriculture, AI is helping farmers optimize crop yields and reduce water usage. Imagine drones flying over fields, using computer vision to identify pest infestations or areas of crop stress. This technology allows for precise interventions, increasing food production while minimizing environmental impact.

Climate science is another area where AI is making waves. By analyzing vast amounts of data from satellites, weather stations, and ocean sensors, AI models can predict weather patterns and climate trends with unprecedented accuracy. This information is crucial for developing strategies to mitigate and adapt to climate change.

Even in creative fields, AI is pushing boundaries. You might have seen AI-generated art or heard music composed with the help of algorithms. While these tools won't replace human creativity, they're opening up new avenues for expression and collaboration between humans and machines.

However, the author cautions against viewing AI through rose-colored glasses. For every exciting advancement, there are potential pitfalls to consider. AI systems can perpetuate biases, invade privacy, and be used for surveillance and manipulation. She argues that a balanced perspective is essential as we navigate this technological revolution.

As AI becomes more prevalent in your life, your perspective and experiences are valuable in shaping its development.

The power of AI-driven surveillance lies in its ability to process vast amounts of data and identify patterns that humans might miss. This can have positive applications, like detecting financial fraud or predicting health risks. But it also poses unprecedented challenges to privacy and individual autonomy.

Robust public discourse, and informed regulation, are necessary to address these challenges. Just as society grappled with the ethical implications of other transformative technologies, we must now confront the complexities of AI surveillance.
Morgan
27 reviews
July 9, 2024
Too politically biased for me to take seriously. The writer draws examples from history for us to learn from moving forward with AI. However, there’s just not much in there that is actually about AI.
Trina
1,321 reviews · 3 followers
October 2, 2024
I quite enjoyed this, but...not much AI! I like the historical/revolution perspective, but expected a deeper opinion on what to do about AI based on that research.
Marielle
295 reviews · 1 follower
April 17, 2024
1. No need to worry about AI, the awesome country USA will take care of it!
2. How? Let's take inspiration from the atom bomb's story; that's a fair comparison, right?
3. Never mind that, let's talk about space, IVF, the internet, and American politics.
4. OK, so the last 25% of the book remains; let's dig into AI and talk about what the book is actually about.
5. I think we should tread carefully and speak up. Good luck everyone!
429 reviews
March 13, 2024
This book read like a college thesis about the history of ethical decisions and politics in the United States, and I found that it had little connection to the title.
Jung
1,959 reviews · 45 followers
August 21, 2024
Artificial Intelligence (AI) is no longer just a concept from science fiction. It's a reality that is deeply embedded in our daily lives, influencing everything from the content we see on social media to the medical care we receive. In Verity Harding's book "AI Needs You: How We Can Change AI's Future and Save Our Own," the author explores the dual-edged nature of AI—its potential to transform society positively and the dangers it poses if not carefully managed.

The book begins by illustrating AI's omnipresence, highlighting its role in our digital interactions and the broader implications for industries like healthcare, agriculture, and climate science. AI has the potential to accelerate medical research, optimize food production, and provide more accurate climate predictions, leading to groundbreaking advancements. However, Harding cautions that the same technology that brings these benefits can also perpetuate biases, invade privacy, and even become a tool for surveillance and control.

A particularly striking metaphor used in the book is the city of San Francisco, a hub of technological innovation, which also faces significant social issues like homelessness and addiction. This juxtaposition mirrors AI's ability to both elevate and exacerbate societal challenges. Just as San Francisco’s gleaming skyscrapers exist alongside stark social problems, AI can simultaneously offer revolutionary solutions and deepen existing inequalities if not handled responsibly.

AI’s potential to reflect and amplify human flaws is a recurring theme in the book. Harding discusses how AI systems, if trained on biased data, can reinforce existing prejudices, such as in job recruitment or healthcare, where AI tools might inadvertently favor certain groups over others. The author argues that recognizing this shadow side is essential for mitigating the risks associated with AI. This involves diversifying the teams developing AI, scrutinizing data for biases, and implementing strong ethical guidelines. However, Harding stresses that the responsibility doesn't solely rest with tech companies—every individual has a role to play in shaping AI’s future.

The book also delves into historical parallels, such as the development of in vitro fertilization (IVF), to provide context for the ethical debates surrounding AI. Just as IVF was met with both excitement and ethical concerns, AI is pushing the boundaries of what it means to be intelligent and creative, raising profound questions about the nature of life and consciousness. Harding draws on the lessons learned from IVF to emphasize the importance of public engagement in technological advancement. As AI evolves, ongoing ethical reflection and adaptation will be necessary to ensure it develops in a way that aligns with societal values.

A significant portion of the book is dedicated to the dangers of AI-driven surveillance. Harding paints a vivid picture of a world where AI-powered cameras track movements, facial recognition software identifies individuals, and algorithms predict behavior. This scenario, which is already a reality in some parts of the world, poses serious threats to privacy and individual autonomy. The author urges readers to remain vigilant about how their data is collected and used, advocating for stronger data protection laws and privacy-focused technologies.

The book concludes with a call to action, urging readers to get involved in shaping the future of AI. Harding emphasizes that you don't need to be a tech expert to contribute—everyone's perspective is valuable. The book encourages individuals to stay informed, participate in public discussions, and make conscious choices about the AI systems they interact with. By doing so, the public can help ensure that AI is developed in a way that benefits all of society, not just a select few.

In summary, "AI Needs You" is a thought-provoking exploration of AI’s transformative potential and the ethical challenges it presents. Harding argues that public engagement is crucial in shaping AI’s development, ensuring it aligns with societal values and addresses the risks of bias, privacy invasion, and surveillance. The book serves as both a warning and a guide, urging readers to take an active role in steering the AI revolution in a direction that benefits humanity as a whole.
Andre
409 reviews · 14 followers
February 4, 2025
Another non-technical book about AI. AI is definitely part of the zeitgeist, so you should be aware of it.

The main takeaway from this book is: you don't have to be an AI expert to have an informed opinion about AI. (But please try to get informed; going off half-baked about AI is no help.)

The author emphasizes that everyone, regardless of technical expertise, has a role to play in shaping AI's future. She encourages individuals to stay informed, participate in discussions, and make conscious choices about usage to ensure AI benefits society as a whole, rather than just the techno-utopians who run the five largest AI concerns.

To provide inspiration, and to draw attention to the fact that this can be done, the author draws lessons from past technological revolutions, such as the space race, in-vitro fertilization, and the internet, to illustrate how society can guide AI's development towards a future that reflects democratic values and serves the public good.

What she doesn't draw attention to are the times when society has failed to do this, either by not engaging with the issue or by being caught off guard. One example I would offer is the first two industrial revolutions (which led to climate impact, poor labour conditions, etc.). But I think her approach of pointing out positive, but very challenging, examples is in line with the tone of the book. We have enough AI doomerism; one more take on it is not helpful.

However, I want to raise an issue related to the first two industrial revolutions and their resulting negative effects. It took a while for us to recognize them and at least partially attempt to correct for them. My fear with AI is not that we'll all become pets to the AIs, but that we'll all become serfs to techno-utopian CEOs like Musk, Pichai, Nadella, Zuckerberg, etc. It reminds me of a quote from Jim Rohn: "If you don't design your own life plan, chances are you'll fall into someone else's plan. And guess what they have planned for you? Not much." Well, what they have planned for us is more Surveillance Capitalism.

If AI is indeed a mirror (see my previous review) let's have it be a mirror we want to look into, not a mirror that the technocratic elite would like to hold up.
lou
Author · 5 books · 7 followers
June 21, 2025
An amazing read that covers critical thinking about the need to be involved in the responsible design and implementation of advancements in AI.

Timely. Important. Informative. Inspirational. Harding puts ‘this moment’ in AI in context with similar historic inflection points, including the revolutionary peaceful policy-making collaboration between the US and the USSR in the early era of space exploration; the UK's incredible advancements with IVF; and the advent of the internet, along with the establishment of ICANN to keep ownership of this radical set of networked information technologies internationally open and free.

With Harding's perspective from 10+ years working on AI across a wide variety of commercial organizations, and her career evolution into AI governance and policy, we're getting an essential expert voice at the most timely moment, perhaps one of the most decisive inflection points, where we as a society and a civilization all need to help navigate this set of powerful tools to best serve our inclusive humanity by participating directly in the design decision, the design problem, really: how can we best use AI for positive futuring before we simply allow the technologists and builders to decide for us?

AI Needs You. And by You I presume Harding means ‘Us’.

I highly recommend ‘AI Needs You’. Read it asap. Read it twice and spread the word.
ElenadeLucas
22 reviews
July 2, 2024
“AI Needs You” articulates a positive, exciting and realistic vision of the future of AI, rooted in the values of democracy and human flourishing. The author illustrates a path to a future AI that is peaceful in its intent, serves the public good, and is rooted in societal trust.

From her extensive experience working at the intersection of politics, policy, and technology, the author points to key historical moments of technological revolution and draws analogies to understand how we can move forward to a positive future of AI: she covers the birth of the internet, the Moon landing, and the development of in vitro fertilization (I skipped this chapter since I did not agree this analogy was accurate, IVF being innately unethical in a way that AI is not). Sometimes this book feels like a history lesson, and the author is quite biased in some of her viewpoints.

Overall I'd recommend this book! It is a call to engage deeply with our future - the success of AI will only "be possible by a deep intention of those building it, principled leadership by those tasked with regulating it, and active participation from those of us participating in it."
Anna
29 reviews
June 12, 2024
I Want to Give Head to Al Gore

conflicted about this one. Essentially it’s a historical overview of critical points in (American and British) technological history that explains how science/policy/government interact with and affect the scope and development of said technologies. And then tries to use those examples to predict / argue how WE (extremely broadly defined) might do something RIGHT NOW (policy) to determine the direction that (extremely broadly defined) “AI” tech will go in over the next few years (hopefully the good one).

I learned a lot about policy and history (I don’t think I ever took a history class so this checks out). I do find it a bit too optimistic / simplistic at times but what do I know (I interpreted the ending of I Saw the TV Glow wrong :/). She makes the right argument, she’s qualified to do so, and the writing is good for this kind of thing. I would much rather everyone read this and shout at me over surveillance than do the “but terminator?!?!?” bit at my face over and over and over again every time someone brings this shit up.
David
1,550 reviews · 12 followers
July 13, 2024
It's not bad, but despite the title most of the book does not actually have anything to do with AI. Rather, the author examines how other controversial technologies have been dealt with by society. The idea is that we learn how to handle messy situations with far-reaching implications. Not a bad idea, but it didn't really work.

The section on the manned moon mission didn't seem to have relevance, other than the obvious observation that if we throw a shit-ton of money and political will at a problem, we can quickly make a great deal of progress.
The section on IVF gets so deep into the weeds of internal British politics during the Thatcher administration that it's almost jarring when, three days later, she suddenly remembers what the book is supposed to be about and hastily mentions AI again.
Then she does the same thing with Al Gore and Internet governance. At least here she's able to draw some parallels between ICANN and the possibility for a similar organisation to regulate AI.
Jackson Richling
20 reviews · 3 followers
December 12, 2024
An insightful, somewhat entertaining, and thought-provoking exploration of the evolving relationship between humanity and our creations. The author avoids alarmism, focusing on the opportunities to create something ethically and equitably for the future. With effective anecdotes and well-researched insights, she outlines the symbiotic relationship between humanity and technology through the historical examples of the rockets that brought the US to the Moon, IVF, and the early internet. This is a reminder and a reinforcement that the greatest innovations are born not in isolation but in partnership with each other and the systems we create (especially considering the future and AI).
That said, it seemed while reading that more questions were asked than answered.
AI was not a major topic of conversation. Although the way AI was used in the narrative worked, I wanted a deeper exploration of the technology's implications.
Synthia Salomon
1,229 reviews · 19 followers
August 20, 2024
Dual nature
Incredible potential

“Artificial Intelligence is rapidly transforming our world, offering immense potential for progress in fields like healthcare and climate science, while also posing risks like privacy invasion and bias amplification. Just like previous technological revolutions, public engagement is crucial in shaping AI's development to align with societal values and ethical considerations. Everyone, regardless of technical expertise, has valuable insights to contribute to the AI conversation based on their everyday experiences with the technology. By staying informed, participating in public discussions, and making conscious choices about data usage, individuals can play a significant role in ensuring AI benefits society as a whole rather than exacerbating existing inequalities.”
5 reviews
October 7, 2025
It's an interesting book on an interesting subject. It focuses on the history of policy development around some controversial and big topics, which is something I'm very interested in in general. But that's not the reason I picked up this book. There are lots of good arguments, but I don't think they're put forward in a cohesive way, and in my opinion each chapter is way longer than it needs to be, except for the last part, which is actually about AI. There is also a heavy emphasis on citizen participation in AI policy, a subject that is very close to my heart; however, I feel the book fails to paint a clear picture of how to actually do that. It would have been a great book with some more editing and a less misleading title. I do think Verity Harding is a great thinker though, and am curious to see more of her work.
Mir Shahzad
Author · 1 book · 8 followers
August 20, 2024
Summary:

Artificial Intelligence is rapidly transforming our world, offering immense potential for progress in fields like healthcare and climate science, while also posing risks like privacy invasion and bias amplification. Just like previous technological revolutions, public engagement is crucial in shaping AI's development to align with societal values and ethical considerations. Everyone, regardless of technical expertise, has valuable insights to contribute to the AI conversation based on their everyday experiences with the technology. By staying informed, participating in public discussions, and making conscious choices about data usage, individuals can play a significant role in ensuring AI benefits society as a whole rather than exacerbating existing inequalities.
66 reviews
May 4, 2024
Be very aware going in that this is predominantly a history of advances in science and technology that were shaped by socio-political forces. The lessons drawn from fields such as IVF and the internet are interesting tales of the compromises and decisions that shaped the world we live in today. These stories of individual stakeholders and institutions illustrate the need to bring more voices into AI development, which in some cases is not being done. Verity does reference governments and companies that are taking these lessons on board, but for now she can only provide so much detail, as these actions are still playing out.
24 reviews
May 5, 2024
The book goes over three main topics: space exploration, IVF, and the internet. Each is examined by looking at the political and societal circumstances, the motivations of the government and other interested groups in setting the course, the resulting regulatory framework, and the pros and cons that can be deduced with the benefit of hindsight.

Although not all information is directly relevant to AI, it does serve to paint a more complete picture needed to understand how the AI debate is similar/different from the three examples and provides some more tangible proposals for how to move forward.
Karen
327 reviews · 10 followers
July 24, 2024
I struggled through this book, skimming parts that became repetitive. I think the title is misleading... unless you're a government official or policy maker who wants to learn from the international space race, the implementation of IVF, the rise of the internet, and the effects of global terrorism on decision making for the "good of society". So, the history lessons were interesting, but the author didn't give any real suggestions of what we as citizens can do to influence the use of AI by big corporations and governments, other than "tell your representative how you feel". I remain cynical and skeptical.
Heather
201 reviews
August 1, 2024
This was a really great book that tackled the concerns about regulating AI through the lens of other scientific advancements: the space race, IVF, and the internet both pre- and post-9/11. There actually wasn't a ton about AI specifically, but I learned a lot about the politics surrounding these scientific advancements and how, despite deep political division, each of these advancements was regulated in ways that let everyone involved use them and remain fairly safe. It was a great reminder that science exists in society and cannot remain objective and neutral.
Mohamed
137 reviews · 5 followers
February 8, 2025
The book focuses on the responsible use of AI and on preventing its weaponization. This will require a clear vision and the collaborative efforts of policymakers to ensure transparency and decentralization, and to put morals and ethics at the forefront.

The author used examples of technological advances like IVF, the internet, and space exploration. She built an analogy from what has happened in each.

“When scientific power outruns moral power, we end up with guided missiles and misguided men.” (US civil rights leader Dr. Martin Luther King Jr.)
22 reviews
February 20, 2025
AI Needs You is an insightful read that explores the management of disruptive technology, particularly AI, through historical and political analogies. While it doesn’t teach the technical aspects of AI, it provides valuable perspectives on how its development can be guided responsibly. The book’s comparisons and case studies from past technological shifts make the concepts relatable and thought-provoking. It’s an enlightening read for those interested in the governance and ethical direction of AI, rather than the technology itself.
Tandava Graham
Author · 1 book · 64 followers
May 8, 2025
It’s an interesting premise for a book, to examine other momentous technological periods in recent history and pull out lessons for AI. But it felt much more weighted towards those other examples, with relatively little actually about AI itself, which is what I came for. I also felt that the examples were mostly about how governments and agencies and organizations were involved, and so, in spite of the title, I didn’t particularly feel that AI “needs me” personally very much. There’s a bit at the end, but it feels rather tacked on.
63 reviews · 1 follower
August 20, 2024
Read on Blinkist.
If you could look at AI in a closed environment, I would agree with the statements and comparisons here. But Pandora's Box has already been opened. Perhaps it would be better to compare it with the development of the atomic bomb.
Of course, in democratic systems we can restrict the democratically acting participants in this area. But what happens when companies, individuals, and states do not follow democratic procedures, just as already happens when data is collected?
ellen
62 reviews
September 23, 2024
Not what I wanted Harding to write; she is positioned so precisely in this industry, with a detailed and high-level understanding of this technology plus corporate insight into what executives think it could be used for, and yet the book remains at the most basic level of understanding, not using her expertise and career history at all. Suggest this to someone who's never thought about AI before, even casually, if you must suggest it to anyone.