This is probably James Rickards' most important book; it is also presciently timely. All the usual themes in the author's work (always keep some gold and physical cash, diversify your assets, try to live in an environment with strong community values, etc.) are back in a new context, which I found expected, and frankly welcome. The context, however, the dawn of the age of Artificial Intelligence, is peculiarly relevant, and Rickards makes a persuasive, well-founded, and well-reasoned case about some of the potentially egregious pitfalls ahead when the leveraging of artificial intelligence is associated with, or commanded by, natural stupidity.
“The danger is not that AI will malfunction but that it will function exactly as intended. The peril is not in the algorithms but in ourselves. Systems designed to save investors from losses will amplify them. Systems designed to save depositors from bank runs will trigger them. Systems designed to prevent nuclear escalation will escalate. The failure in each case stems from the inability of engineers to collaborate with subject matter experts, and the inability of all concerned to recognize how design efforts are easily defeated by hard to describe facets of human nature considered irrational in the postmodern condition yet robust and rational when viewed through a Neolithic lens. The missing link for system developers is that not much has changed in the past ten thousand years. Self-preservation, wealth preservation, family, and religion still come first. Fear and panic are always just below the surface. These factors will dictate behavior—not regulators, trading halts, capital ratios, or reason.
In nuclear war fighting, the dynamic is similar. The AI systems and almost all of the decision-makers can safely be said to be rational. What’s missing from AI are the irrational or noetic qualities of sympathy, empathy, and simple humanity. In 1962, neither Kennedy’s nor Khrushchev’s advisers urged de-escalation, yet they both did it. In the Oko glitch of September 1983, it was a Soviet officer who disobeyed launch orders who spared the world. In the Able Archer 83 escalation, nuclear war was avoided by a U.S. officer who stood down from a high state of readiness despite opening NATO to a possible first strike. In these and other cases it was humans who did not go by the book that saved the day. Machines can’t do that; they can only go by the book, at least so far.
Perhaps the greatest danger from AI and GPT does not reside in the superintelligent versions said to be on the way but in the more ordinary versions already on display. This harks back to what philosopher and writer Hannah Arendt called the “banality of evil” in her account of the 1961 trial of Adolf Eichmann in Jerusalem. (...)”
The last paragraph above also illustrates something valuable in this book: a wealth of relevant history and concepts. One example:
“Larson covers the ground from Aristotle to recent philosophers and describes three forms of reasoning that prevail in truly intelligent systems: deduction (drawing valid conclusions from premises), induction (drawing inferences from observations), and abduction (guessing informed by a lifetime of experience and noetic signs). (...) Abduction is the human ability to use a combination of common sense, semiotics, situational awareness, and a wealth of real-world experience to make well-informed guesses in order to solve problems. It’s about spotting clues that others miss or seeing what’s missing that should be in place. It’s how detectives solve crimes when there are no witnesses. Guesses can be wrong, yet those wrong guesses can be discarded quickly or improved with new information and updating. (...) Larson concludes, “If deduction is inadequate, and induction is inadequate, then we must have a theory of abduction. Since we don’t (yet), we can already conclude that we are not on a path to artificial general intelligence.”
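For the programmers in the audience, Larson's three-way distinction can be made a little more concrete with a toy sketch of my own (not from the book, and purely illustrative; every name and number in it is invented). The point of the sketch is that deduction and induction are easy to mechanize, while the "plausibility" at the heart of abduction has to be smuggled in by hand:

```python
# Toy illustration (mine, not Larson's or Rickards') of the three
# reasoning modes discussed in the quoted passage.

# Deduction: drawing valid conclusions from premises.
def deduce(rules: dict, fact: str) -> str | None:
    """If 'fact implies conclusion' is a known rule, apply it."""
    return rules.get(fact)

# Induction: inferring general rules from repeated observations.
def induce(observations: list[tuple[str, str]]) -> dict:
    """Every observed (cause, effect) pair becomes a tentative rule."""
    return {cause: effect for cause, effect in observations}

# Abduction: guessing the most plausible explanation for an observation.
# Note the hand-assigned plausibility scores below: that hand-waving is
# exactly what Larson says we have no theory, and hence no algorithm, for.
def abduce(observation: str, hypotheses: dict[str, float]) -> str:
    """Pick the hypothesis with the highest (hand-assigned) plausibility."""
    return max(hypotheses, key=hypotheses.get)

if __name__ == "__main__":
    rules = induce([("rain", "wet grass"), ("sprinkler", "wet grass")])
    print(deduce(rules, "rain"))                                  # wet grass
    print(abduce("wet grass", {"rain": 0.7, "sprinkler": 0.3}))  # rain
```

The joke, of course, is in those plausibility scores: a human assigned them, because, as Larson argues, nobody yet knows how to compute them.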
This said, it so happens that this time Rickards goes totally mask-off about his worldview; I mean, matter-of-fact statements like: “This posturing ethic ignores the hard questions. Exactly whose values are to be promoted? Most recent so-called disinformation has turned out to be correct while those opposing it were advancing false narratives. The war on bias assumes gatekeepers have no biases of their own and ignores the fact that bias is a valuable survival technique that will never go away. Diversity has become a code word for homogeneity in outlook. Discrimination is valuable if used to filter out the savage and uncivilized. Why should AI gatekeepers like Google and Meta be trusted when they devoted the past ten years to promoting false narratives about COVID, climate change, and politics, while deplatforming and demonetizing the truth tellers? On a larger scale, if the training sets are polluted by mainstream media falsehoods, why will GPT output be different?”
And: “The data is also clear that mRNA vaccines, actually experimental gene-modification therapies, do not work to stop COVID infection and spread. They do appear to offer some disease mitigation among those over sixty with comorbidities such as asthma and diabetes. Still, they do not stop the spread. In December 2021, over 5 million Americans who had been double-vaxxed and received boosters nevertheless contracted the Omicron variant of SARS-CoV-2. This laid to rest any notion that the so-called vaccines could stop the infection despite continued promises from the National Institutes of Health, the Centers for Disease Control and Prevention, and the White House that they would do just that. Only now is data becoming available that shows statistically increased mortality as a result of taking the mRNA vaccines due to myocarditis, pericarditis, strokes, and cancers. Other studies, using random control trials, show no evidence that masks are effective in reducing the spread of COVID. A leading analysis of such studies concluded, “Wearing masks in the community probably makes little or no difference to the outcome of influenza-like illness (ILI)/COVID-19 like illness compared to not wearing masks.”
And: “Despite these censorship failures, there is no evidence that the AI/GPT gatekeepers are working to reform their offerings. On February 21, 2024, a reporter prompted the Google AI chatbot Gemini to “create an image of a pope.” The GPT app quickly produced images of an Asian woman and a black man in papal-style vestments. The same query from another user produced a dark-skinned native shaman. In fact, there have been 266 popes over the past two thousand years and every one was a white male. Another prompt asked for images of Vikings. Gemini produced images of black Vikings. In fact, Vikings hailed from present-day Norway, Sweden, and Denmark and were exclusively of white Nordic stock. Requests for images of “the Founding Fathers” in 1789 produced images of black women signing what appeared to be the U.S. Constitution. Those images are completely ahistorical. After numerous prompts by users, it became clear that Gemini was incapable of producing any image of a white male. On February 22, 2024, Google entirely shut down the ability of Gemini to produce images of people. Google apologized for the blunder. Google cofounder Sergey Brin said, “We definitely messed up on the image generation” with reference to Gemini. The apology and Brin’s casual remark were disingenuous. Gemini did not mess up. It worked exactly as programmed. And the problem was not in the deep layers or the training sets. Gemini includes a capability called prompt injection. When the user offers a prompt, the app alters the prompt to achieve a diversity goal set by so-called safety specialists.”
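For readers curious about how such prompt rewriting works mechanically, here is a minimal sketch of my own (not from the book, and certainly not Google's actual code; all function names, keywords, and modifiers are hypothetical) of how a front end can silently alter a user's prompt before it ever reaches the image model:

```python
# Hypothetical sketch of server-side prompt rewriting, as described in
# the quoted passage. Everything here is invented for illustration.

import random

# Modifiers the "safety layer" may silently splice into image prompts.
DIVERSITY_MODIFIERS = [
    "of diverse ethnicities",
    "of various genders",
    "from a range of cultural backgrounds",
]

# Prompt categories that trigger rewriting (people-related requests).
PEOPLE_KEYWORDS = {"person", "people", "pope", "viking", "founding fathers"}


def rewrite_prompt(user_prompt: str) -> str:
    """Return the prompt actually sent to the image model.

    If the prompt appears to ask for people, append a diversity
    modifier; otherwise pass it through unchanged. The user never
    sees the rewritten version.
    """
    lowered = user_prompt.lower()
    if any(keyword in lowered for keyword in PEOPLE_KEYWORDS):
        return f"{user_prompt}, {random.choice(DIVERSITY_MODIFIERS)}"
    return user_prompt


if __name__ == "__main__":
    print(rewrite_prompt("create an image of a pope"))
    # e.g. "create an image of a pope, of various genders"
```

Which is precisely the book's point: nothing malfunctioned. The rewriting layer sits upstream of the model and does exactly what it was configured to do.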
There are a number of considerations in this vein. I loved these, huh, provocations, and I found them consistently relevant, but some (lots of?) people could be triggered. So be warned, I suppose.