As we program machines to be more like humans, how will they know what we value, if we don't know ourselves?
The notion of robots gaining consciousness is beginning to become a reality, but the future of human happiness is dependent on our ability to teach machines what we value the most today. Featuring pragmatic solutions drawing on economics, emerging technologies, and positive psychology, Heartificial Intelligence provides a road map to help readers embrace the present and better define their future. Using fictional vignettes to help readers relate to larger concepts, this book paints a vivid portrait of how our lives might look in either a dystopia of robot dominance or a utopia where we use technology to enhance our natural abilities and evolve into a long-lived, super-intelligent, and caring species.
I'm a contributing writer for Mashable and The Guardian, as well as the author of two books: Hacking Happiness and Heartificial Intelligence. The focus of my work for the past three years has been on the intersection of emerging technology and wellbeing. My goal is to encourage people to examine their lives in a purposeful way to increase their sense of worth and joy.
I'm also the founder of the non-profit foundation The Happathon Project, where we've created a free survey to help people identify, track, and live by their values. The science of positive psychology says that if you don't live by your values every day, your happiness decreases.
As a regular keynote speaker, I've done talks for TEDx, SXSW, Cisco, Microsoft, and HP, in places around the world like Milan, Munich, Stockholm, and Dubai. I have been quoted on issues of technology and wellbeing by outlets such as USA Today, US News & World Report, Forbes, FOX News, C-SPAN, NPR, Mashable, and The Guardian.
Before my current work, I was an EVP of Social Media for a top-ten PR firm, a VP of business development for a tech startup, and a professional actor for over fifteen years in New York City, with principal roles on Broadway and in TV and film.
I wanted to like this book. I applaud Havens's efforts to avoid the polarizing narratives that dominate the way we cover our technological anxieties; it's what I've been arguing in my research on Constructive Technology Criticism over the last year. We agree on the set of sociotechnical circumstances that threaten our agency: systems that use our data and behaviors exploitatively, without any control over their collection, interpretation, or use. Havens tries to avoid technological fear mongering as much as I do, and promises a constructive path forward in his roadmap for living with new technologies. But where Havens sets out to offer subtlety and nuance, he unfortunately gives us only incoherence.
In the interest of disclosure, Havens and I have spoken about our shared interests and approaches, and he sent me a review copy of the book. After choking through the first chapter I might not have given this book more of my time, except that I have a personal interest in the subject, based on my related work in an Atlantic article Havens cites, "Data Doppelgängers and the Uncanny Valley of Personalization."
Havens's technological subjects in question (artificial intelligence, robots, automation, and targeted advertising) are conflated beyond recognition. He muddies the waters for readers, bundling all our anxieties into one big potential AI threat. Havens's closest attempt to define artificial intelligence focuses on the "artifice" of data collected surreptitiously about us, upon which machines interpret our actions and behaviors and judge us. This play on the "artifice" in "artificial intelligence" might be cute, if too clever by half, but it is so far from an accepted technical definition of the term as to be inaccurate, misleading, and irresponsible to readers. Havens's only nod to the more widely accepted definition of artificial intelligence, with mentions of the Turing test and distinctions between weak and strong AI, doesn't surface until page 33. Havens plays fast and loose with the threats these technologies pose as the lines between them blur, but takes this integration as a technologically determined given. For example, Havens takes for granted that in the future all embodied robots will be smart robots with the capacity for autonomous, dynamic thought and judgement. Are we meant to be afraid of the algorithm and the AI, or of the robot as a metonymic, physical instantiation of the smart system behind them?
Though the book attempts to bring rich interdisciplinary approaches to the social questions posed by technology, Havens's expertise is all over the map, and nowhere at the same time. His definition of normative ethics (p. 183) comes straight from Wikipedia. And his concern with measuring happiness and wellbeing through the proxy of GDP is mentioned repeatedly, without explanation or citation. Who is claiming that GDP = happiness, exactly? He reminds us throughout that he is not a psychologist, ethicist, economist, or even a technologist. So why should we spend time on half-baked ideas that haven't evolved beyond an incoherent grab bag, trendy enough to fill the Mashable articles and TED talk that preceded this book?
Havens employs future scenarios at the start of each chapter, and in theory fiction can be a very useful construct for thinking through possible impacts and our reactions to them. But Havens's scenarios exhibit all the features of the worst kinds of moral panics, betraying the current anxieties of white cis men worried about the effect of technology on the women and children. Right off the bat, when presented with a decision to implant a life-saving brain chip to address his daughter's illness, Havens considers the surveillance and potential hacking threats this technology introduces. Instead of framing it as a decision he has to make for himself and his own body, he writes the scenario from the parents' perspective, worrying over the impacts on his daughter. In another scenario, a humanoid robot comes to date his daughter, and future Havens ponders how he should feel about this without being racist against robots. Structurally, Havens's future scenarios only serve to set a tone for our anticipated anxieties, and fail to set up a moral of the story or to anticipate and motivate the arguments in the chapter that follows. At worst, his scenarios are shamefully self-important, like when he runs into a Google PR representative at SXSW who seeks him out to convince him to stop writing about Google's AI efforts. Havens acknowledges the arrogance in the setup, and yet publishes it anyway.
The one solution to our inevitable AI future that Havens most cogently argues for is that intelligent systems need ethical oversight. I agree with him there. But he goes on to suggest that in many cases this will mean codifying our values into the technology itself. Havens admits he doesn't know how value coding would work in practice, let alone how one would account for the diversity of values beyond Silicon Valley, in the broader population of the US, or the world. The closest he comes to a practical application of the idea is in an entirely problematic future scenario in which values are centrally managed by one corporation, "Moralign," and applied as a service on top of any object in an ever-expanding Internet of Things ecosystem. Centralized corporate moral authority? Sure, why not!
* * *
At best, the writing is cheeky and funny, but in the way that dad jokes are funny. And his humorous tone can't make up for the awkwardly constructed and apparently unedited sentences on every page. Havens relies on sloppy crutch and filler words like "regarding" and "versus" and on passive constructions so often that I hate to imagine what the manuscript looked like before it was cut for publication. For example: "It's also compelling to unpack some insights around Son's comments regarding people and robots." (50) More offensive is his complete and utter lack of argumentative coherence. He jumps from one trendy concern to another, with no connective tissue showing us how these things might converge in the future.
In a recurring joke about his future job title as a "human journalist," Havens joins the cohort of writers, including Nicholas Carr and Jonathan Franzen, who, when faced with the threat of algorithmically generated articles, worry about technology's impacts on society for the sake of their own writing careers. But Havens ought not worry that he'll lose his job to the robots. He should be more worried that his bad writing and incoherent arguments could have stood to be cleaned up with an automated writing-assistance tool like ProWritingAid before this book went to print. With sentences like "As of late when I turn that lens on myself in relation to technology I'm melancholy versus merry." (13), this book is no more articulate or coherent than the output of a Markov chain trained on our current anxieties, stripped from the tech headlines of Wired and the New York Times.
Published in the mind, body, and spiritualism imprint of Tarcher/Penguin, this book is a reminder of 'why we can't have nice things' written about technology in a flailing publishing market. It seems no one with any field expertise reviewed this book, and so Havens gets away with mashing hot tech headlines up against some popular positive psychology in a sloppy attempt to cobble together a guide for living with our imminent robot overlords. In doing so, this clumsy work crowds out the market for more thoughtful, better researched, and better written work on the subject. Havens clearly means well, but this quality of writing and thinking should be left to a personal blog. I share this review with a vested interest in the kind of technology writing we are capable of producing, and that we deserve: the kind that arms us with the clarity and constructive alternatives to shape our future relationship with technology.
I wrote the book so I'm not objective. But I am really proud of it.
The fictional vignettes opening each chapter were a risk for me to write. Typically I write non-fiction, but I strongly felt a reporting style for these subjects wasn't enough to convey how complex and nuanced the issues around Artificial Intelligence really are. My hope, if you read the book, is that those stories help you imagine how you'll deal with the inevitable rise of algorithms and robots that are already so entrenched in our lives.
As you'll see, I'm not against the robots, AI, or algorithms. However, I am for humans - meaning that however many of our skills can be replicated, replaced, or made irrelevant, I believe humans have inherent value. And I believe humans can best live with machines, AI, and robots when a primary factor we measure is not just productivity (how fast/well a skill can be reproduced) but how much these devices increase our wellbeing overall. That means factors beyond the financial realm.
Thanks very much for reading if you do. I put a great deal of work into interviewing multiple thought leaders in AI, ethics, technology, and positive psychology to see how we can best honor humanity while building AI. These tasks do not have to be mutually exclusive.
I will start by saying that there are many important issues involved in the development of AI that this book does not address, including the ethical issues of its use in war and crime, but that is largely because of the author's relentless optimism about humanity and the fact that those issues are not really related to the point of this book. While on the surface this book seems to be about the development of artificial intelligence, it is actually about pushing yourself to understand what gives you purpose in life, in such a way that you are able to take control of your actions and the data they create so that machines cannot replace you or diminish your humanity in any real sense (even when they do eventually take over your job).
Havens does a good job of presenting two very different outlooks on the robots of the future. One is the path we are on now, where the development of AI comes before the moral and ethical considerations of its impact on people. The other is a future in which we, as individuals, begin now to take more control of our data and make it clear to programmers and developers what parts of our lives we are willing to have automated, based on a clear understanding of our values as humans, so that morals and ethics can be programmed into the AI from the very beginning and mitigate the damage that can be done. Havens does this in a way that is both entertaining and incredibly thought provoking by including fictional scenarios of the future at the start of each chapter that help illustrate the very real questions and possible outcomes he is discussing. He also does a great job of summarizing each chapter at the end to help clarify his points so they are easy to review and remember.
My favorite aspect of this book, though, is that Havens provides actions that you can take right now to help influence the future of the development of AI and take control of your data and identity. The book provides no answers, because the issues it examines are both too individual and too globally important to be solved any time soon, but there are still actions you can take today, whether you are technologically inclined or not, to help shape the future of our relationship with technology as a species.
This is an incredibly timely and important book that I am sad I did not come across sooner. Because it asks questions about what it is to be human, absolutely anyone can read it, understand it, and hopefully find important takeaway messages that will help their overall sense of well-being, not just their understanding of AI as it exists now and might exist in the future. I cannot recommend this book enough, because I think it raises vitally important questions that everyone needs to think about and discuss with their family, friends, and community.
I rarely write bad reviews, but this one deserves one. It is apologetic, it is sensationalist, it is soppy, it does not add anything new to the discussion, and it is wannabe science fiction trying to pass as non-fiction and ethics. It is clear that the author lacks the philosophical and sociological background to discuss these topics. I forced myself through half of the book, always expecting to find something that would give me additional information. I could not find it, so I decided to skim the rest of the book. I still could not find anything.
13% in. Love this so far! Great bus reading on my brand-new phone. Or maybe it's my phone doing the reading and I'm just sitting there producing breathing/heartrate/ecg/consumer data.... (I know there is or was a thing to enter notes like this as you read, but I can't find it and every time I put something on my "currently reading" shelf, I don't finish it and it sits there forever.)
Please read this if you want to talk about the future of human society. It won't take terribly long, the ideas are presented clearly, and even if you never want to think about AI again you can still learn something about happiness.
A good guide to an important subject, suitable for anyone. John Havens writes with a light touch, but he is not afraid to tackle the serious issues here -- What will automation and data-mining do to society and humanity, and how might we respond? The book has a nicely balanced structure, with a pessimistic first half (the downsides of AI), followed by an optimistic second half (how society can develop with AI, and some suggestions for personal psychology in the modern data world). It avoids the usual movie fears of a Terminator takeover to focus on only too realistic fears of manipulation by AI advertising, and the erosion of jobs and purpose by automation. The future impacts, especially the scale and side-effects, are not fully explored -- but that may be a plus, as it avoids crystal-ball gazing: the focus is on "locally linear" extrapolation of current trends. The remedial approaches suggested are realistic.
tl;dr: AI will change society. What do we value? Let's work that out (it's probably caring for people) and make it part of what AIs do.
Heartificial Intelligence explores the thorny political, economic, and ethical issues raised by the rapidly approaching reality of Strong AI (or the Singularity, or the last invention, or whatever you want to call it). We already realize that AI may replace us as workers. The scarier thought is whether it will surpass people as parents, friends, and lovers as well. "It's safe to assume machines will at least be able to pretend they're empathetic and emotional, which is more than a lot of us humans can do." (p. 182) The solutions to this problem, for Havens, involve having AI inherit our ethics and values after we carefully examine what they are. The solution for the economic problem is fairly simple - universal basic income. Sooner than we would like, we may have to think about what we would do if we didn't have to work for money - our values should guide this decision.
This was a rather enjoyable book that helped present some of the opportunities, potential downfalls, and dangerous threats that mankind will face as technology and artificial intelligence continue to progress.
The main ideas I take away from this book are that we should value our privacy and be aware of which companies and services have access to our private information. We should also consider the moral and ethical ramifications of involving technology in our daily decisions and lifestyles.
There were quite a few fun thought experiments throughout the book that helped put the state of artificial intelligence into perspective, and to help envision where it could lead us in the future.
As a technologist in the software development industry, I enjoyed this book, and it's an excellent follow-up to John C. Havens' Hacking Happiness book. In particular, I like John's usage of hypothetical scenarios at the beginning of each chapter to set the tone for the point he was making. I felt John's usage of these vignettes brings to life the potential moral and ethical use cases that need to be addressed when developing advanced Artificial Intelligence.
Despite possessing a liberal, progressive bias, this book has novel perspectives and good references and is worth a read/listen. Definitely provides excellent discussion material regarding "human" perspectives being incorporated into AI programming.
I enjoyed this book. It raises awareness about what's already happening: society's (and our own) continued willingness to use tech despite knowing how much data is being collected about our personal lives. Think about this, for example: if you're a Facebook user, have you ever posted, or do you know someone who has posted, an FB comment about the creepy ads that pop up in their FB feed and seem to derive merely from a conversation, a non-technical activity (so they think)? And yet we post such comments on the very platform that is giving us the creeps, as though we're commenting in secret 🤫😂.
But the book is way more than highlighting the dangers of tech, of our ceding control of our data to tech. This is where the 'Heart' part comes in. The author has the reader consider our own happiness, our personal values. Effectively, in my own words, he asks: isn't the satisfaction and happiness we gain on our life journeys worth the investment, the experiences, and the sometimes painful lessons? Are we really willing to trust and cede control of our decision-making and our human life experiences with other humans to algorithms coded by who knows whom, just because it's cool or convenient?
It's a reminder to be human. The book acknowledges the value of AI but asks us to first embrace our humanness, "embracing our humanity to maximize machines' value to our lives."
An introduction to how artificial intelligence will change society, with brief sci-fi stories at the beginning of each chapter to show how people will react emotionally to A.I. The book is split into two halves, starting with a pessimistic first half and ending more optimistically. Surprisingly, the author's proposed solutions are more political and religious than technological. We cannot write software to get out of this mess.
I found the mix of personalized science fiction (e.g., Havens' life after transformative AI), science and research discourse, and opinions on how AI ethics should be developed to be intriguing and engaging. There was just a bit too much of the personalized science fiction for my taste though. I'd have preferred a bit more on what's happening now in AI Ethics that we could capitalize on.
This book was an interesting primer on various ethical issues as we approach the age of AI. Havens does a good job of relating current technology and extrapolating its development to something different in the future.
While he addresses the potential problems if personal data/AI is left unchecked, Havens also provides some thoughts about the way forward, including injecting ethical/moral considerations into the development of the science and technology (something sorely lacking in the past). A thought experiment about a self-driving car swerving (or not) to avoid killing a child but killing its passenger(s) in the process is one example that sticks out.
Havens does a good job of summarizing his main points at the end of each chapter and then also at the end of the book--handy for a takeaway crib sheet. There are also links to various tests and surveys for embarking on self-discovery of one's inherent values (which he argues need to override GDP as a measure of "quality of life").
My main complaint is the vignettes at the beginning of each chapter. They're meant to set the stage for a possible future scenario, but more often than not they seemed to be more of an exercise in creative writing, usually because of their length.
Again, this is a good primer on the issues, with leads to other, more in-depth treatments of AI and the developments of positive psychology.
A couple of years back, when we were pregnant with our daughter, I remember observing that my Google ads were mysteriously about diapers and child-care-related items. Creepy! I never told them the good news! Now, after reading this book, I am seriously concerned about the kind of data marketing companies have about me and my browsing trends. The time I get up, my shopping trends, my music and video preferences, my friends, my family details, the time I sleep: everything is online. While the paranoid me wants to switch off the wifi and all devices around me, the cool me assures me that I am practically a nobody in the world and all that data is probably worth nothing. Big deal!
I borrowed this one from my husband's TBR list. It was an interesting read, with a lot of scenarios depicted to build your intrigue. But in some places it became a little too technical for me. I would go with a three for this one.
The title might seem like a bit of corny wordplay, but I think you'd find it hard to come up with an alternative that better describes the premise of the book. Artificial Intelligence is slowly but surely becoming an inherent part of our lives, and I'd say that our situation is a bit like the 'frog in boiling water' scenario. That's not to say that we will be 'cooked', but our sensitivity to the challenge is not really at the level it should be. Most of the discussions are around two themes - the extermination of our species by malevolent robots, and the increasing automation of jobs and its economic and societal repercussions. Both usually end up with polarising stances. One of the reasons I liked this book is that the author is not on either of the extremes - doomsday or paradise - his approach is very pragmatic. The first six chapters take the reader through the process of understanding the lay of the land - from describing how our happiness is slowly getting defined by tracking algorithms, and the complete lack of transparency and accountability in those who have access to this data, to the economics and purpose of a human life and how it's changing, to the (seeming) limits of artificial intelligence, and finally the need to have an ethics/value system in place as we go faster in our journey of designing increasingly complex AI.

That brings me to the other reason I liked this book. Every chapter begins with a fictional scenario that describes a quandary we could face as AI infiltrates our lives further. It not only adds a lot of nuance to the argument and illustrates it fabulously, but, in the spirit of the book, brings out the human element superbly.

The second half of the book is the author's perspective on how we can attempt to meet these challenges. This section didn't impress me as much as the first. Not because I disagree with the author on the overall direction and philosophy, but largely due to what I'd call an oversimplification of the challenges. For instance, the question of designing values/ethics for AI: I think it's a hugely complex challenge because we're all uniquely different in even our fundamental perspectives on values, and I cannot see a way in which we can codify something we cannot even agree on. Call me cynical, but I also feel that the author overestimates the capacity of systems and mindsets to change. The good part is that he isn't really being prescriptive; in fact, he believes that we should understand our own value systems and use that to develop our unique relationship with AI and the changes it is bound to bring. All things considered, a very good start to understanding a world that is grappling with AI.