
Building a God: The Ethics of Artificial Intelligence and the Race to Control It

Renowned ethicist provides essential guide to successfully navigating the future AI landscape

In Building a God, Christopher DiCarlo explores the profound implications of artificial intelligence surpassing human intelligence—a destiny that seems not just possible, but inevitable. At this critical crossroads in our evolutionary history, DiCarlo, a renowned ethicist in AI, delves into the ethical mazes and technological quandaries of our future interactions with superior AI entities.

From healthcare enhancements to the risks of digital manipulation, this book scrutinizes AI’s dual potential to elevate or devastate humanity. DiCarlo advocates for robust global governance of AI, proposing visionary policies to safeguard our society.

AI will positively impact our lives in myriad ways. From healthcare to education, manufacturing to sustainability, AI-powered tools will improve productivity and add ease both to the most massive global industries and to our own personal daily routines. But we have already witnessed only the tip of the iceberg when it comes to the risks of this new technology. AI algorithms can manipulate human behavior, spread disinformation, shape public opinion, and impact democratic processes. Sophisticated technologies such as GPT-4, DALL-E 2, and video deepfakes allow users to create, distort, and alter information. Perhaps more troubling is the foundational lack of transparency in both the design and use of AI models.

What ethical precepts should be determined for AI, and by whom? And what will happen if rogue abusers decide not to comply with such ethical guidelines? How should we enforce these precepts? Should the UN develop a Charter or Accord which all member states agree to and sign off on? Should governments develop a form of international regulative body similar to the International Atomic Energy Agency (IAEA) which regulates not only the use of nuclear energy, but nuclear weaponry as well?

In this incisive and cogent meditation on the future of AI, DiCarlo argues for the ethical governance of AI by identifying the key components, obstacles, and points of progress gained so far by the global community, and by putting forth thoughtful and measured policies to regulate this dangerous technology.

376 pages, Hardcover

Published January 21, 2025


About the author

Christopher DiCarlo

3 books · 1 follower

Ratings & Reviews



Community Reviews

5 stars: 4 (23%)
4 stars: 9 (52%)
3 stars: 3 (17%)
2 stars: 1 (5%)
1 star: 0 (0%)
Displaying 1 - 5 of 5 reviews
Brandi
388 reviews · 20 followers
December 15, 2024
I have a beginner's knowledge of AI, and I found this book informative, going more in depth about what AI is, how we can use it, and the ethical problems we could face.

Thank you NetGalley & Prometheus Books for an advance copy of this book.
Jamie Speka
47 reviews
January 31, 2025
This book helped me put into perspective how much of a threat AI is becoming. I stopped using it entirely about a month ago, fearful that relying on it would make my critical thinking and reasoning skills obsolete. The book describes not only the harm AI can do on an individual scale but, more pressingly, the issues it poses on a global scale.

DiCarlo likens AI's looming threat to that of a modern-day Manhattan Project. He posits that the threat of nukes that loomed over the 20th century has now taken a new form. In a sense, we need a Manhattan Project on AI to accurately and thoroughly examine the ethics involved in building AGI and SMG.

It's easy to take DiCarlo's claims as science fiction. He jumps into issues that echo Peter Singer's philosophy: should we grant AGI rights if it can think and feel on par with (or beyond) humans? But for this conversation to ignite, we must first consider whether AGI is actually possible. DiCarlo's evidence seems compelling: it isn't a matter of if, but when. As such, an AI Risk Waiver could be approached, adjusting policy for AI regulation before AGI snowballs out of control.

DiCarlo now has me pondering the possibility of a post-biological world in which our culture will be able to evolve independently of human biology. Yet we should consider, as many naysayers point out, whether evolution is possible without embodied cognition.

As it stands, the misuse of AI technologies has the potential to be more dangerous than nukes, since those are under the control of the IAEA. AI tech currently has no such oversight, bar a few US executive orders and UN guidelines. Additionally, AI tech is not contained within any particular geographical place, unlike nukes, making it far more dangerous, as AI can quickly spread globally.

If we reach AGI, the exponential growth characteristic of these technologies is bound to make them adapt quickly. Meaning, if (or when) AGI does become available, regulation MUST already be in place to manage what may happen.

Here are some concerning threats that could come from AGI tech mixed with human interference:

- Bioengineering could enable bioterrorism, allowing malicious actors to design and synthesize more dangerous pathogens
- SMG could fuel an arms race between nation-states, as AI-controlled weapons systems could lead to flash wars
- SMG could wipe out huge sections of power grids, plunging major urban areas into desperate states without electricity, cell phones, internet, or anything requiring power
- Authoritarian rulers could use SMG to control populations, police with brutal force, or launch unprecedented attacks and assaults on their enemies

Here are potential outcomes brought on by a misaligned AGI (one not aligned with human-centered ethics):
- Evading shutdown
- Hacking computer systems
- Running many AI copies
- Acquiring computation
- Attracting earnings and investment
- Hiring or manipulating human assistants
- Conducting AI research and programming
- Persuasion and lobbying
- Hiding unwanted behavior
- Strategically appearing aligned
- Escaping containment
- R&D
- Manufacturing and robotics
- Autonomous weaponry

My takeaway is that the most important thing we can do is educate the public about the risks of AGI. Global engagement with these tremendous risks could push policymakers to pay more attention, and it would also allow for more regulation of the tech companies that are storming ahead without precaution, interested only in reaching AGI first, regardless of risk.

I was reading this book at the same time the Trump Administration announced a $5 billion investment in AI Infrastructure, which made many of the arguments seem even more urgent.

Looking online, there is a lack of discussion or knowledge of AGI surrounding this book. Many have dismissed it as science fiction, so much so that I wondered if I was reading too much into science-fiction theories. I have moved beyond this: even if there is an element of science fiction to the future of AI, that by no means makes it impossible. Especially as we're living in unprecedented times, unprecedented technologies can take unimaginable forms.

-1 star because DiCarlo could have left out some of the critical thinking theory, which distracts from the argument on AI ethics that should run consistently throughout. Of course, he is defining critical-thinking terms that can help policymakers and citizens argue for more regulation, but the material is oddly placed in a way that blunts the urgency of the argument. Sure, critical-thinking vocabulary matters when arguing for more regulation, but does it belong in a book meant to describe the ethics of AI?

Along the same lines, I think the book could be better organized. Many arguments seemed to be repeated, which is fine, but I found myself going back and forth in my notes, adding points from a later chapter to a previous one because they matched a point he had already made.

Here are my messy notes: https://docs.google.com/document/d/1c...
Vladislava Karelina
1 review
September 26, 2025
I saw this book on McKinsey's list of recommendations and, to be honest, I expected something more. I wanted more technical information; there was some at the beginning, but then it became more general.
I agree with other readers that the book makes you think. I consider it worth reading for everyone :)
537 reviews · 1 follower
April 18, 2025
I don't think a better book could be written on the threats and ethics of artificial intelligence. It starts out as an entertaining look at the history and current state of AI but quickly becomes quite scholarly when covering its threats and ethics. Recommended.
Wendy
645 reviews2 followers
May 7, 2025
An articulate and well-explained overview of the history, development, and concerns around artificial intelligence. The final chapter includes practical things that regular humans can do if they have concerns about AI.
