Generative content creators. Self-driving vehicles. Predictive analytics. In the right hands, they’re beneficial to all. In the wrong hands, they amplify human bias, enable dangerous frauds, and harm vulnerable people.
Artificial intelligence is a mirror to humanity: it is forcing us to confront our worst and become our best. As decision makers and technology users, we need to use it to gain real control — instead of the illusion of control that machine learning often delivers.
Juliette Powell and Art Kleiner offer seven principles for ensuring that AI systems support human flourishing. They draw on Powell’s research at Columbia University and use a wealth of real-world examples. Four principles relate to AI systems themselves. Human risk must be rigorously determined and consciously included in any design process. AI systems must be understandable and transparent to any observer, not just the engineers working on them. People must be allowed to protect and manage their personal data. The biases embedded in AI must be confronted and reduced.
The final three principles pertain to the organizations that create AI systems. There must be procedures in place to hold them accountable for negative consequences. Organizations need to be loosely structured so that problems in one area can be isolated and resolved before they spread and sabotage the whole system. Finally, there must be psychological safety and creative friction, so that anyone involved in software development can bring problems to light without fear of reprisal.
Powell and Kleiner explore how to implement each principle, citing current best practices, promising new developments, and sobering cautionary tales. Incorporating the perspectives of engineers, businesspeople, government officials, and social activists, this book will help us realize the unprecedented benefits and opportunities AI systems can provide.
Juliette Powell is an entrepreneur, technologist, and strategist, who works at the intersection of culture, data science, and ethics. Art Kleiner is a writer, editor, and entrepreneur, with a focus on management, scenario thinking, and strategy.
Juliette Powell is an American-Canadian media expert, tech ethicist, business advisor, author, and beauty pageant titleholder who was crowned Miss Canada 1989, the contest's first Black Canadian winner.
It was a compelling book about AI and the sense of control in our lives. Companies imply that we are in control, but this book shows the downsides and the illusion of control. In most cases, we are not in control of our data. Data gathered via apps and phones is published and used in many areas without consent. Examples can be seen throughout the book.
Accountability and data literacy need to be increased globally. As the authors say, Triple-A systems need to be accountable and provide references and explanations. We need to be more aware of what is being shared about us and how. This book explains it perfectly in seven principles.
I read this book via Audible, with narration by the author. It was a nuanced and compelling audiobook that I really enjoyed.
I wish the authors had leaned into more areas beyond control and data literacy. There are many areas affected by AI and data. AI is taking over much of our lives, so we need to be prepared.
Art Kleiner is one of the clearest thinkers on the relationship between technology, business, and culture. In his latest book, co-authored with Juliette Powell, “The AI Dilemma: 7 Principles for Responsible Technology”, Powell and Kleiner develop a unique, clear picture of responsibility and its absence in today’s world of machine learning (ML). They use the phrase “algorithmic, autonomous, and automated systems” (dubbed Triple-A systems), which elevates the discussion from ML to the higher plane of disciplined societal decisions. These decisions are almost always blurred into only one of the four logics. But I get ahead of myself here. Powell’s rich background and prior works bring a framing that reflects her deep knowledge and the power of collaboration.
Although I’ve been working in the machine learning field for decades, I was moved by several major dimensions that the authors surfaced and elaborated. To be brief, three such dimensions were (1) the Four Logics of Power, (2) the desire for control versus the illusion of control, and (3) the importance of loose coupling. All three are essential frameworks that are often overlooked in AI discussions dominated by techno-optimism about the promise of tech, most recently large language models (LLMs) and the GPT models that use them. With respect to loose coupling in healthcare AI, as a former regulator during the ARRA/HITECH era, I saw many unintended consequences of tight coupling introduced through well-intended legislation, such as a failure to adequately incorporate usability, safety, and existing user-centered design principles. These risks recur with the application of AI to healthcare delivery.
The book contains eight essential chapters:
Introduction: Machines That Make Life-or-Death Choices
1. Four Logics of Power
2. Be Intentional about Risk to Humans
3. Open the Closed Box
4. Reclaim Data Rights for People
5. Confront and Question Bias
6. Hold Stakeholders Accountable
7. Favor Loosely Coupled Systems
8. Embrace Creative Friction
Conclusion
In closing, the authors call out how rapidly the world of Triple-A systems is evolving, noting that “we may need to write a sequel.” There are aspects of human intelligence, employment, and human group theory that I would like to see expanded. A starting point would clearly be extensions from two earlier and seminal books by Kleiner, “Who Really Matters” and “The Age of Heretics”. The current book lays out a great foundation on those topics. Society does not run purely on intelligence, whether AI or human. There are dimensions of this in every prior Kleiner book and in Powell’s earlier work, and they have only become more relevant with the introduction of AI/ML.
For example, another pair of authors, Daron Acemoglu and Simon Johnson, contemporaneously released “Power and Progress…”, which lays out the conditions under which new technology like AI helps labor, as well as the opposite, from an economists’ perspective. This theme is developed in the Powell and Kleiner book, in part under the Four Logics of Power (“corporate logic”: ownership, markets, and growth). There’s clearly an opportunity, perhaps in the sequel, to build further on the beneficent opportunities that are realistic to pursue, as a societal and wise responsibility.
"The AI Dilemma," an engrossing read, brilliantly encapsulates the double-edged sword that is artificial intelligence (AI). The book emerges as a compelling discussion on the promises and pitfalls of AI, skillfully navigating the fine line between admiration for this game-changing technology and caution against its potential risks.
The author's central thesis echoes the ancient saying of fire being a good servant but a bad master. AI, as portrayed in the book, could be the most transformative tool at humanity's disposal, or it could be its undoing, depending on how we manage it. The book explores this delicate balance, making it one of the most relevant reads in this era of rapid technological advancement.
The author’s exploration of AI’s potential is truly fascinating. The book provides a balanced view on the topic, acknowledging the myriad ways that AI can improve our lives, from streamlining mundane tasks to making breakthroughs in fields like healthcare and climate science. It paints an enticing picture of a future where AI, judiciously applied, could lead to unprecedented prosperity and progress.
Yet, the author doesn’t shy away from the darker side of this technological titan. The book thoughtfully outlines the significant risks associated with AI, specifically its potential for misuse, the ethical quandaries it presents, and the potential for it to learn and amplify our worst traits if not correctly managed. This stark portrayal serves as a wake-up call to readers about the potential consequences of unchecked AI development and use.
One of the most impactful aspects of "The AI Dilemma" is its emphasis on our reciprocal relationship with AI. The author emphasizes that as much as AI learns from us, we too stand to learn from it, thus underlining the mutual influence we and AI have on each other. This nuanced understanding adds depth to the narrative, offering insights that provoke profound introspection in the reader.
In conclusion, "The AI Dilemma" is a must-read for anyone interested in understanding the complex dynamics of our relationship with AI. The author's balanced approach, the careful consideration of both the advantages and dangers of AI, and the exploration of our reciprocal relationship with it, make this book a significant contribution to the literature on AI. It serves as a reminder that while we should embrace the potential of this powerful tool, we must also tread carefully, ensuring that we harness its power for the greater good rather than let it master us.
I received an advance review copy for free, and I am leaving this review voluntarily.
Thanks to the publisher and Netgalley for this eARC.
In the ever-evolving landscape of artificial intelligence, “The AI Dilemma” is a beacon of clarity, guiding us through the murky waters of ethical AI development. Authors Juliette Powell and Art Kleiner embark on a mission to demystify the complexities surrounding AI, offering a compass in the form of seven principles to steer technology towards a responsible future.
The book is a meticulous tapestry woven with threads of philosophy, technology, and morality, presenting a narrative that is as compelling as it is enlightening.
The authors’ expertise shines through each page, as they dissect the “Triple-A” systems—algorithmic, autonomous, and automated—revealing the human and social elements that are as integral to these systems as the technology itself.
Powell and Kleiner’s exploration is not just a technical audit of AI systems; it is a philosophical journey that questions the very essence of human control in an age where machines are increasingly liberated from our superintendence. The seven principles—risk, transparency, protection of personal data rights, accountability, structural integrity, psychological safety, and creative friction—are not mere guidelines but are presented as the pillars upon which the future of AI should be built.
The book’s strength lies in its ability to translate complex technical jargon into accessible concepts without sacrificing depth. It is a rare find that manages to cater to both the uninitiated and experts, providing a comprehensive overview without overwhelming the reader.
One of the most striking aspects of “The AI Dilemma” is its balanced approach. The authors acknowledge the potential of AI to amplify human bias, enable fraud, and harm vulnerable populations, yet they remain optimistic about its capacity to reflect our best selves and support human flourishing.
“The AI Dilemma” is a thought-provoking read that challenges us to elevate our understanding and management of AI. It is a call to action for engineers, corporate leaders, social activists, and government officials alike to collaborate in harnessing the power of AI responsibly. For anyone looking to grasp the ethical considerations of AI, this book is an indispensable resource that offers not just food for thought but a feast.
I requested an ARC of this book through NetGalley because I feel that this is a pressing issue. AI has been around for decades, but the recent emergence of free AI art apps and ChatGPT has made it much more relevant to the average person. Whereas before, AI was controlled by technicians and software developers, now we all have access to AI and can use it creatively to further our personal goals.
This, of course, comes with some danger, and not of the "robots are going to take over the world" variety. AI vastly decreases the amount of time and resources it takes to create something, which can lead to all kinds of misinformation being spread even faster than before. Some concerns I have that this book did NOT mention specifically: book publishers being inundated with AI-written stories, students cheating in class with AI-written essays, and AI chatbots "citing sources" that are completely made up.
What this book focuses on is more of the big picture issues: who is training the AI? What data sets are we using? Who is regulating the uses of AI? What are the driving factors behind AI optimization (is the goal "profits" or the common good?)? How can we ensure that AI is not discriminatory, if it's working with a biased data set?
I think all of these are great questions, and it's important for us to think about what we've let loose with this Pandora's box. We should all be aware of how our data is being used, how targeted ads and internet searches are shaping our lives and thinking, and how we should be holding these companies responsible for the effects of their software.
The book itself is a quick read, hitting all the major points without getting too technical and dragging. I think it's accessible to the average reader, and anyone interested in the effects of AI could get something out of it.
Well-written, insightful book about ways to mitigate the negative impact and enhance the positive impact of AI. Although the 7 Principles weren't interwoven as core pillars of the book, they tied neatly into the AI Dilemma that people and businesses are faced with, providing a useful "how-to" summary at the end.
The tough part had nothing to do with the authors or the content. Rather, it had to do with the obvious truth that CEOs and Boards of Directors are unlikely to follow these clear-headed and sensible rules for the very reasons highlighted in the book.
Organizations are structured for yesterday, not today. According to this book, research shows that those at the top listen less and less to "diverse" opinions as they get closer and closer to total command - a recipe for clunky innovation, disengaged employees (especially the smart ones), and ultimately slower growth.
Boards of directors aren't really equipped to "direct" in the age of AI, preferring to keep their seat and salary, rather than rock the boat by introducing "creative friction" into their decision making. Most boards are not organized for the rapidly changing world of today. They focus on short-term results and don't really care about the long-term (based on the decisions they seem to support). And, everyone is afraid to say anything about it because they might lose their job or their influence or they are part of the system that supports this out-of-date model (this includes the Wall Street Journal).
That's why this book is so important. It shows exactly what to do to reduce the likelihood of the (inevitable) really bad events in this explosive new world. It shows how companies can use AI much more effectively and profitably if they make necessary (but uncomfortable) organizational and process changes.
The AI Dilemma is an excellent introduction to AI for busy managers who want to understand the concept. They can then decide to study further in more depth to become fully aware of the impact of artificial intelligence on society. It is quite short, considering the subject and the details that must be covered to gain a reasonable understanding. The book is split into seven chapters that touch upon the basics of AI. It begins with the four centers of power: institutional vs. individual and public vs. private. It goes on to make a case for opening the black box, sharing and openly explaining the logic behind the programs; this is increasingly necessary for AI systems to gain wide acceptance. Next, the book discusses rights over the information that is shared, as well as how it can be used, and how AI programs must be made free from bias so that bias does not influence the results. The final three chapters focus on the organisational responsibilities that must be put in place to hold organisations accountable for the results, especially negative outcomes. This is an easy read that gives a good introduction to the concept of AI, as well as the pitfalls and the precautions that must be taken to ensure that it emerges as a useful tool for humanity.
I had the privilege to meet Juliette Powell when she was the keynote speaker at the event for which I was the exec producer. I was CAPTIVATED by her seemingly boundless knowledge and just loved being in the same room with her.
I struggle to finish non-fiction at the rate I read fiction and, for that reason, find myself avoiding it. But after hearing Juliette on stage, I knew I had to pick this one up.
I’m so glad I *bought* it instead of checking it out from my library because I immediately started marking it up—highlighting, underlining, writing enthusiastic “THIS!!”s all over the pages.
What she and Art observe and propose is articulate and easy to digest, and includes the little spark we need to actively demand a change in the development and governance of Triple-A systems (as they coin them).
The section “Reclaim Data Rights for People” in the conclusion has given me the rebellious framework I didn’t know I needed for evaluating the way I feel about AI’s ongoing proliferation.
No wonder I felt like I just finished a course. This book is an actual course at New York University. I would like to extend a special thank you to the authors for actually publishing this for everyone to access.
What did I enjoy the most?
Not just the seven principles, but a clearer understanding of the four logics of power, and of the different risk tiers of AI: limited-risk, minimal-risk, and unacceptable-risk.
What motivated me to read this?
AI is growing like crazy: jobs, skills, and the recent headlines about Microsoft and OpenAI. Part of my professional experience is in EdTech, technical training, and Tier 1 support. Time flies by much faster because of technology. I want to understand the world we live in and what's coming.
Would I recommend it?
Yes, I do, because this world needs knowledge, not power or control. The more you know, the better. Yes, I am ending with a cliché.
I would like to thank Book Sirens for an advanced copy of this book.
I think it is fair to say that there is a lot more that I don't know about AI than what I do know. It seems to be everywhere these days. Even last week, Congress was taking a short bus to a meeting to learn what it is about... scary.
That said, this book delves into a different area of AI, focused on methods to adapt to the new reality. Being a retired guy, topics such as creating friction among teams are from days gone by. I did find many of the topics interesting overall, and the background notes and bibliography are impressive. This book was well researched and would be most topical for someone working in today's market.
Before retiring, I was smack dab in the middle of AI, Automation, and emerging technology. Having started out with reporting and moving on to Business Intelligence, Predictive capabilities and more, it was an exciting space to be in. However my concern has always been how AI could totally go wrong if in the wrong hands and/or used in the wrong way. This book does an excellent job at addressing the use of AI and these concerns. I highly recommend this book to anyone who is interested in AI and keep in mind that AI should never have a goal of replacing workers, but rather to bring large amounts of data and knowledge together and get results more rapidly…and deploying humans to other areas where they can further grow the business and enterprise.
It was well written and neatly organized; you don't need deep technical knowledge to understand it. Each chapter focuses on a different aspect of using AI, both the good and the bad.
Some points highlighted using AI to support and expand capabilities through suggestion and evaluation (for example, hiring via a bootcamp that evaluates whether a candidate's capabilities match the job). It is also a valuable resource for learning through experience.
Chapter 7 is the most important for understanding this aspect of systems (loose versus tight coupling). It helps explain the complexity of the black box (an AI product or solution whose code and programming are only partially understood), and the bias introduced through the training data that AI systems learn from.
Good book to read and understand the AI revolution.
This book takes a hard look at AI as a new technology shaping the industries of the future. Of all the guidelines and principles presented, one caught my interest: the principle of loose design. The authors advocate for companies designed around the principle of isolation of components: a fault in one part of the organisation should be contained and not take the whole organisation down. This design blueprint is more relevant than ever in an era of AI co-existing with humans in daily operations.
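As a concrete illustration of that isolation principle (my own sketch, not an example from the book; the stage names and the `run_pipeline` helper are hypothetical), a loosely coupled pipeline can wrap each independent component so that a failure in one stage is contained and reported, while the other stages keep running:

```python
# Sketch of loose coupling: each stage runs in isolation, so a fault in
# one component is contained instead of taking the whole system down.

def run_pipeline(stages, data):
    """Run each independent stage on the same input, isolating failures."""
    results, failures = {}, {}
    for name, stage in stages.items():
        try:
            results[name] = stage(data)
        except Exception as exc:
            # Contain the fault locally: record it and keep going.
            failures[name] = str(exc)
    return results, failures

# Illustrative components (names invented for this sketch):
stages = {
    "scoring": lambda d: sum(d) / len(d),        # works fine
    "flagging": lambda d: [x for x in d if x > 10],  # works fine
    "broken": lambda d: 1 / 0,                   # fails, but is contained
}

results, failures = run_pipeline(stages, [4, 12, 8])
print(results)   # the healthy components still produce output
print(failures)  # only the faulty component is reported
```

In a tightly coupled design, the equivalent code would call each stage directly and the single division-by-zero would abort everything; here the blast radius of the fault is one dictionary entry.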
Use cases of the opportunities and challenges for AI are in the news every day, so this was an intriguing read and its publication is timely to help make sense of it all. Overall, I really enjoyed the book. The examples were clear and relevant, the composition of the chapters and the building up of the concepts from start to finish were easy to digest and most importantly it made the urgency of action relevant and pressing.
An informed and balanced review of the current state of AI and the junction we are at. The advice is fairly limited in applicability to those in specific spaces, but even those not making the decisions have a responsibility to insist that the correct processes are followed. I learnt some things and will hopefully have a chance to use them in my tech career, but mostly I need to trust that the managers, business owners, and lawmakers will follow these recommendations.
This book is an engaging and incredibly timely exploration of artificial intelligence, one I would recommend to any business consultant, organizational leader, or concerned citizen.
Kleiner has long been an intellectual hero of mine for his ability to analyze complex, important issues and present them with clarity and unique personal insight. His previous works, Age of Heretics and Who Really Matters?, are ones I think about and use professionally all the time. Powell’s work is new to me, but given my long familiarity with Kleiner’s writings, I found the blending of two voices in this new piece refreshing. Joint authorship gives the book a more open-ended, conversational feel that was enjoyable for its own sake and also perfectly suited to the topic—i.e., the swiftly evolving landscape of AI.
Given the nature of my own work, I particularly valued how this book identifies and untangles key sector-specific and cross-sector themes related to AI. It doesn't just contribute to the ongoing, urgent conversation: it offers highly informed, immediately useful frameworks to guide it.
An important book. This book is a great read about the settings for AI, the examples of when it went wrong and the opportunities for a framework to manage the risks going forward. Enjoyable read too! Thank you to #netgalley and the publisher for an advance copy.
Some interesting content around responsible AI use. Listened to the audiobook while driving, not exactly riveting stuff, but provided some new (to me) perspectives.
*4.34 Stars. Notes: I read this book for informative research purposes, and that was more than enough for me to enjoy the reading process.
There was a lot of scientific content included across the chapters that I really liked seeing written out.
I read this at a slower pace than I typically would, to pay closer attention to the more important details.
I would not have skipped this book for long. I had been planning to find a copy, since I have been researching topics related to computer science and AI. What I could read through in the days I had was informative and helpful.
The chapters were of a length where I never really lost focus. I'm just thankful that I found a copy of this informative book when I could.
Very good synopsis of the true dilemma already affecting our lives: how does man interact with artificial intelligence? This book was concise and not overly technical. I appreciated the learning and the insight involved.
I received an advance review copy for free, and I am leaving this review voluntarily.
Update, September 2023
I decided to reread this book and would rate it even higher if I could. I think we're at a very pivotal point in our (human) relationship with AI, and in what we accept it should and shouldn't do. So many moral aspects must be taken into account; the book does a great job of proposing the what-ifs we don't want to think about but should.