Machine learning is a continuously evolving field, and any book that promises to be on its cutting edge will surely go out of date fairly quickly. This edition was published in 2021 and promised to be an introduction to machine learning for the non-technical reader. As a "concise overview of machine learning," it was fairly successful. It did an excellent job setting up the importance of data and the evolution of computing, covering supervised, unsupervised, and reinforcement learning. By starting with simple linear regressions, it provided some theoretical underpinning for the volume of concepts to follow.
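Since the book leans on simple linear regression as its theoretical starting point, it's worth noting how little machinery that underpinning actually requires. Here is a minimal sketch of an ordinary least-squares fit (in Python with NumPy, my choice of tooling, not the book's; the toy data is made up for illustration):

```python
import numpy as np

# Toy data: y is roughly 2*x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.shape)

# Ordinary least squares: find the slope and intercept that
# minimize the sum of squared residuals.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"fitted model: y = {slope:.2f} * x + {intercept:.2f}")
```

Everything else in the book (classification, clustering, even neural networks) is, in spirit, a more elaborate version of this fit-parameters-to-data exercise, which is why it makes a sensible opening chapter.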
A couple of specific issues I took with the book centered on the chapters on neural networks and the future of machine learning. On neural networks, the author gave the impression that we understand what neural networks do. Part of the premise of a neural network is that we don't need to specify, or even understand, what's going on inside it; the computer continuously updates and "learns" patterns in the data that we may not be able to detect ourselves, or patterns that seem obvious but turn out to be anything but. An example of this is a recent experiment that found a neural network, left to its own devices, developed a way of adding two numbers that was completely unintuitive to humans. The author does later describe networks as a "black box" model in Chapter 7, but by that point the damage is done; the confidence about how networks work in Chapter 5 seems out of place.
In the chapter on machine learning's future, the author seemed well out of his depth. Most of the chapter is speculation, oscillating between projections of how much data we will continue to produce and one-off guesses about which technologies will matter and (roughly) how far off they are. What struck me was not the technologies that were included (though some were doozies; flying cars?) but those that were conspicuously absent.
The whole book does a wonderful job referencing different papers for the curious reader to look into. These references are all very helpful, but one of the most important papers of the last decade is missing: "Attention Is All You Need". This paper, which defined the transformer architecture, has transformed the field of generative AI, especially in the area of large language models (LLMs). Not bringing up LLMs, or generative AI at all, is an understandable but disappointing failure. GPT-3 had already been released by the time of publication, as had DALL-E. Though they were not nearly as widespread as they are now, a paragraph mentioning them would have been helpful; a reader with no prior knowledge picking up the book today would be a bit lost.
On the whole, the book was quite bland for one with an "MIT Press" label. Even though the introduction states that it is a non-technical book, I was hoping for more of the mathematics behind the content. It is tough to gain much understanding of machine learning without grasping some of the fundamental math behind it; otherwise, you're mostly learning terminology, ideas, and other helpful facts that may serve you in your job or business, but that don't let you truly understand what you're working with.
I was also quite disappointed by the near-total lack of historical context. Chapter 1 was a notable exception, providing some sense of the history of computation. But where were the specifics? Where was the story of Target predicting a teen girl's pregnancy before her father even knew? What about the early history of machine learning and AI, struggling with deterministic, rule-based models before moving to statistical ones? The story of how neural networks were largely disregarded until Hinton et al. brought them back into popularity? Any history of (or even a separate focus on) statistics, the backbone of machine learning? Instead, the book presents concepts and ideas, with vague examples and hypothetical future use cases, all while preaching about the importance of data.
Overall, I think this is a good read if you don't have much interest in machine learning but need to know that certain concepts exist (like clustering or classification), or want to understand the very basics of a neural network. If you have any real interest in machine learning, I'd recommend staying away and turning to books specific to the concepts you want to learn. Learn statistics by doing your own projects or from statistics-specific sources. Understand the fundamentals of reinforcement learning by reading the AlphaGo paper. Learn the history of machine learning from Silicon Valley histories or other twentieth-century science and math history books (I'd recommend "The Dream Machine"). Learn about deep learning from Bengio's or Nielsen's textbook. Heck, even give the author's own deep learning textbook a crack. But probably don't read this book.