I felt that something was missing in this book, but I’m not sure what. Considering that it was written by Eric Schmidt, the former CEO of Google; Henry Kissinger, the former Secretary of State and National Security Advisor; and Daniel Huttenlocher, the dean of MIT’s Schwarzman College of Computing, I was looking for more detail on exactly how AI is now transforming our lives and how this is most likely to play out in the future. Most of the information and insights they provide can be found elsewhere, so I don’t see why we needed these three obvious experts to tell us what is in this book.
The thesis is that AI will disrupt and transform human life, scientific research, education, manufacturing, logistics, transportation, defense, law enforcement, politics, advertising, art, culture and more. What do AI-enabled innovations in health, biology, space, and quantum physics look like? What do AI-enabled “best friends” look like? What does AI-enabled war look like? Does AI perceive aspects of reality humans do not? When AI participates in assessing and shaping human action, how will humans change? “What, then, will it mean to be human?” The book poses great questions but I’m not sure it answers them.
They aren’t even talking about general AI, but rather narrow AI, which is already shaping human behavior by influencing, for example, search engine results. The authors point out that these are generally “black box” algorithms, meaning that the engineers who built the AI can’t explain exactly why Google has ranked one page higher than another. Another example is that Facebook and Twitter use AI for content moderation, but again engineers may not understand exactly why a certain post was deleted or flagged. AI also makes mistakes, due to a lack of data, the bias of its creators, a lack of common sense, or shallow learning. This means there needs to be a way to appeal AI decisions to humans when they are wrong.
AI combined with human intelligence actually makes humans more intelligent. We can reach better decisions together, because the AI can factor in many more inputs, much more quickly, and give us better options than we could come up with alone. On the other hand, AI is perfectly capable of fomenting hate speech and promoting disinformation if given the wrong inputs or algorithms. Should this be regulated, and if so, how? Small changes in anti-disinformation AI have vast consequences for what content gets flagged, which amounts to censorship.
There are further security concerns with the growth of AI, particularly in warfare. Not only could there be technology like AI-piloted drones, but we are already seeing cyberwarfare and disinformation campaigns, and AI is likely to make these even more effective. Introducing AI into warfare means ceding control of, or assistance to, weapons systems to logical processes humans don’t fully understand. AI will weigh variables we might not consider, and thus it is unpredictable. And because we are so interconnected, an attack could spread very quickly to things like critical infrastructure, with the risk of very rapid escalation beyond human intention or control. The book recommends that states talk to each other about the strategic and moral implications of AI-assisted weaponry before it gets too advanced. However, there are problems with this, including how to verify compliance if limits are negotiated. How can states trust that others are not developing capabilities they cannot trace?
We come from an era in which human rationality, though imperfect, has been held up as the highest value; AI will transform what it means to be human. AI can surpass human reasoning in certain spheres, in that it can produce objectively better outcomes without humans necessarily understanding why. Still, we may find that decisions made by humans who can explain their reasoning carry more legitimacy. For example, AI already screens job and credit applicants. Should we allow it to make legal judgments? Children will grow up with AI babysitters, best friends, and tutors, and they may come to prefer them to real humans. What are the implications of that? There will be a divide between people who control and understand AI and those who don’t, and between AI natives and older generations. Some people may violently reject AI, and others may come to more or less worship it.
So much money and so many people are pouring into AI that its development is becoming unstoppable. The authors therefore contend that governments need to establish organizations, with representatives from academia and industry, to decide how fast and how far we want to change and to establish ground rules. At the same time, different countries will probably make different decisions, so there will be a series of social experiments in which everyone needs to learn from the best practices of the others, and the price of falling behind or making wrong decisions may be very high.
I’m not an expert on any of this, and I think the book is good as far as it goes. But it spends time discussing how our perception of knowledge and of what it means to be human has changed since the Middle Ages, and I think that time would have been better spent going into more detail about exactly how AI works. They don’t need to spell out the algorithms, but more detail on the kinds of algorithms used, or more examples of successes and failures, would have been useful. The book often reads like it was written by a committee, which it sort of was. I think the three of them could have come up with something more.