The power of the ever-increasing tools and algorithms for prediction and their paradoxical effects on risk.
The Age of Prediction is about two powerful, and symbiotic, trends: the rapid development and use of artificial intelligence and big data to enhance prediction, and the often paradoxical effects of these better predictions on our understanding of risk and the ways we live. Beginning with dramatic advances in quantitative investing and precision medicine, this book explores how predictive technology is quietly reshaping our world in fundamental ways, from crime fighting and warfare to monitoring individual health and elections.
As prediction grows more robust, it also alters the nature of the accompanying risk, setting up unintended and unexpected consequences. The Age of Prediction details how predictive certainties can bring about complacency or even an increase in risks—genomic analysis might lead to unhealthier lifestyles or a GPS might encourage less attentive driving. With greater predictability also comes a degree of mystery, and the authors ask how narrower risks might affect markets, insurance, or risk tolerance generally. Can we ever reduce risk to zero? Should we even try? This book lays an intriguing groundwork for answering these fundamental questions and maps out the latest tools and technologies that power these projections into the future, sometimes using novel, cross-disciplinary tools to map out cancer growth, people’s medical risks, and stock dynamics.
Please note: When you purchase this title, the accompanying PDF will be available in your Audible Library along with the audio.
Prediction is a hot AI topic these days; as Stephen Wolfram explains in his new book about ChatGPT, large language models work by iteratively predicting the next word in a sentence, paragraph, essay, design document, poem, or whatever. If based on enough training data (such as all the books ever published), serial predicting can add up to something legible, authoritative-sounding, and maybe even useful (though the jury's still out on that).
This book goes into none of that. Instead, it argues that technological progress is giving us better and better predictions all the time, and that with this success comes reduction of all kinds of risk. The authors even consider the possibility of getting rid of risk altogether.
But a little thought squelches that thesis. Reducing risk here produces new risk there. Yesterday's blind spot becomes today's calamity. The authors give a nod to unintended consequences, but assume that the risks in AI's wake would somehow be "narrowed" by AI into ones we needn't worry about quite so much. That seems tenuous. AI's training data comes from our collective past experience. There's no reason yet to believe that humanity's collective blind spots (such as climate change as a possible extinction threat) won't be passed on to AI through the flawed training data we've fed it. Shouldn't it be obvious that all the knowledge and wisdom from the past must prove inadequate to predict the terrors of the future?
I didn't enjoy this book at all; I should have listened to the reviews. Nothing thought-provoking here: yes, predictions are improving, and sometimes they benefit us and sometimes they don't. Maybe I just missed something, but halfway through I decided to give up on the book.
I was super excited to start this book, and I enjoyed the initial chapters, but once I got further in it felt difficult to connect the authors' disparate thoughts.