From the coauthor of Algorithms to Live By, an exploration of the quest to use mathematics to describe the ways we think, from its origins three hundred years ago to the ideas behind modern AI systems and the ways in which they still differ from human minds
Everyone has a basic understanding of how the physical world works. We learn about physics and chemistry in school, letting us explain the world around us in terms of concepts like force, acceleration, and gravity—the Laws of Nature. But we don’t have the same fluency with concepts needed to understand the world inside us—the Laws of Thought. While the story of how mathematics has been used to reveal the mysteries of the universe is familiar, the story of how it has been used to study the mind is not.
There is no one better to tell that story than Tom Griffiths, the head of Princeton’s AI Lab and a renowned expert in the field of cognitive science. In this groundbreaking book, he explains the three major approaches to formalizing thought—rules and symbols, neural networks, and probability and statistics—introducing each idea through the stories of the people behind it. As informed conversations about thought, language, and learning become ever more pressing in the age of AI, The Laws of Thought is an essential listen for anyone interested in the future of technology.
Tom Griffiths did it again! He wrote a book on a complicated, messy topic in crystal-clear, captivating language, so that you just cannot put the book down until you're done. Bravo!
There is a book for everyone. This one was for me. A book review is a lens into a person's values and tastes. I've learned more about team members from their book preferences, and what they discussed, than through almost any other avenue.
The book is really about the evolution of thought on how the brain might work. I was familiar with almost every concept, but enjoyed learning about the relationships among all the scientists, psychologists, and mathematicians involved. Over the last 300 years, some ideas have remained popular, others have died, and yes, people have wasted time testing ideas that turned out to be suspect.
I am just amazed at what we don't know. We know why planets go around the sun and why atoms attract, and we can model these with enough fidelity to make accurate predictions. Yet for all our wisdom, the brain just might not be capable of being understood by the brain.
Many of the later chapters cover the current state of AI: how these systems certainly pass the Turing test and can mimic most human cognitive capabilities. But does mimicking mean neural nets are how the brain works? Jets can fly, but they tell us nothing about birds.
The tension is that neural nets take so much information to train. The latest versions have pretty much read everything ever written. So perhaps this is not how the brain works. I have two theories. A) One second of vision contains more data than an entire book of words. Watch mom talk for a few seconds, and you can learn much more than if you just wrote down her words. We need to train on vision, sound, and touch. Lots of physics can be learned by watching a cup of milk spill. Oh, and human behavior. B) Neural nets start from scratch, with zero knowledge, only the ability to learn. Evolution over the last few billion years has certainly played a part in our current capabilities. So when you hear that this model took two weeks and x zillion dollars to train, well, compare that to the last few billion years.
The best thing about the book is the books it references. The author also has a textbook that looks amazing, just at a less pop-science level.
Favorite quote (Geoffrey Hinton is one of the fathers of modern AI):
Young Geoffrey was sent to a British public school with a strong Christian ethos, which contrasted strongly with the ideology at home. He ultimately found this experience useful: “I think that was a very good preparation for being a scientist because I got used to the idea that at least half the people are completely wrong.”
I enjoyed this book. It reaches way back into ancient history and forward to the modern day, building on different ideas to describe, and mathematically model, how the human brain works. From simple logic puzzles, to and/or statements, to if-then statements, to attempts to formalize speech, to large language models, the quest continues.
Far too basic, and even then not eminently readable like his first book on algorithms. It felt somewhat like "curve fitting" to frame those laws within the trifecta of logic, probability, and neural networks. For the uninitiated, it gives a somewhat superficial walkthrough of the key events, or what the author believes to be the key events, like Chomsky's work.
Weirdly enough, I totally loved this. It's a mix of history & cognitive science 101. It certainly required my whole attention--probably the first time I've listened to something on 1x speed in years (listening to someone talk about math requires incredible amounts of brainpower from me).
As someone who (generally speaking) has never understood how math can be used to represent anything other than literal quantities, this felt like an introduction to a new world. For the very first time, I felt like I understood why logic and probability theory matter, how math can be used to represent both, how computers work, and how this all is essential in understanding the nuts & bolts of what is really going on inside of a neural network. It does have some math, yes, but overall I found the book quite accessible (and funny), while still engaging with the mechanics on a deeper level than I've been able to understand in the past.
It also led to a lot of really fun conversations with my partner (who studied cognitive science, thinks about just about everything in terms of probabilities, and often tries to explain his thoughts by asking me to "imagine them as embeddings within a multidimensional space"). He was extremely excited to talk to me about the various concepts within the book, and he was also a convenient source of explanations and elaborations on the concepts I struggled with (namely, what is actually happening during deep learning, from a basic math perspective, without getting into calculus). I finished the book feeling like I'd learned a lot--and a lot that was practically important for making wise decisions about the use of AI in my own life.
I appreciated how it felt like a window into a way of thinking about & approaching the world which feels really foreign to me (but very familiar to my partner). Which was, frankly, really cool (despite feeling like quite the stretching experience for me). There are so many different ways of being in the world & experiencing life, it was really different to think about questions that frankly have never interested me before, like "what is thought?" I think in addition to helping me just understand our increasingly AI-dependent world, this book also helped me to better understand the people who are creating this technology.
Overall, I highly recommend it, especially for those of us entirely alien to the AI world (and to computer science generally) who keep asking, "Okay, but what is AI, really?" and never receive a satisfactory answer.
Welcome to an enlightening and quirky adventure through the history of logic using mathematics, linguistics, and pattern evolution to understand the human mind. This book is not just a dry exploration of complex theories, but a captivating narrative filled with personal biographies and insights born of the minds of some brilliant psychological heroes.
Griffiths masterfully ties together the heavy logic of foundational concepts like rules and symbols, neural networks, and probability and statistics, with the human stories behind their discoveries. The result is a tale that is as engaging as it is informative. It's like having a cup of coffee with a brilliant professor, who not only shares their latest research but also regales us with tales of the eccentric geniuses who came before them.
For those of us who may have struggled to grasp the intricacies of cognitive science and AI, "The Laws of Thought" is a breath of fresh air. Griffiths' ability to make these complex concepts accessible and relatable is truly commendable; however, I would recommend accessing the PDF or viewing some of Griffiths' other written work, as some of the topics are hard to grasp from the audio alone.
As we navigate the ever-evolving landscape of technology and artificial intelligence, it's essential to have a solid understanding of the foundational principles that underpin these fields. "The Laws of Thought" is an indispensable resource for anyone interested in the future of technology and the human mind. So, grab a copy, settle in, and prepare to be both entertained and enlightened!
Personally, I was surprised to find that the names of the mid-20th-century psychologists that I've studied in classes for years belonged to men who held conferences together, influencing not only each other's work but also the field of AI. Weaving these tales together sheds new light on the entire field. If that isn't the sign of a good book and time well spent, I'm not sure what is.
Thank you to NetGalley, Macmillan Audio, and Tom Griffiths for an ALC of this book.
Can we use the laws of mathematics to describe the many ways human beings think? The author of this new book definitely believes so. Griffiths is a professor at Princeton focusing on information technology, and his research addresses interdisciplinary questions at the intersection of psychology and computer science. He received real national attention with his previous book, Algorithms to Live By: The Computer Science of Human Decisions, published by Henry Holt & Company in 2016. Griffiths's research explores connections between human and machine learning, using ideas from statistics and artificial intelligence to understand how people solve the challenging questions and problems they encounter in everyday life. He favors introducing ideas from computer science and cognitive science to wider general audiences.

In this new book, Griffiths identifies three major approaches: rules and symbols, neural networks, and probability and statistics. The phrase "the laws of thought" began with the 19th-century philosopher and mathematician George Boole, whose mathematical work in algebra and logic is essential to computer programming and helped lay a foundation for today's information age. The underlying idea, however, can be better credited to the Greek philosopher Aristotle and his ideas underlying modern science and logic. As informed conversations about thought, language, and learning become ever more pressing in the age of artificial intelligence, The Laws of Thought is an essential resource for anyone interested in the future of technology. The final chapter, appropriately titled "Putting It All Together," is an excellent summary of the state of such applied efforts at understanding human decision making. Highly recommended.
I received a free copy of this book through Goodreads Giveaways. Thank you!
I really enjoyed reading this book! Griffiths discusses three areas of exploration and development that all work together to form the current mathematical models of our brain that are also used in current AI systems. For each line of inquiry, he first talks about the history of and teaches the mathematics behind that idea. Then he talks about its shortcomings and how another idea had to come along to improve on it. Finally, he connects all this to where we currently are with understanding the mind and applying this understanding to AI. Griffiths does a good job of breaking down complex topics to make them easy to understand and interesting to read about. My math and science background helped here, but I think any reader could work through it, too. I loved the approach of talking about the history of all these developments and how they spawned new ideas or contrasting ideas that strengthened our understanding. It really forms a compelling picture of how science works, where you have new ideas, test them out, find their flaws, and try to improve each time.
In The Laws of Thought, author Tom Griffiths explores a central question: how do we measure and model the mind mathematically, not only to understand human thinking but also to recreate it through artificial intelligence?
This fascinating intellectual history begins with Aristotle, Leibniz, and Boole, guiding readers through the foundations of logic, formal systems, language, and much more, exploring how we organize thought in ways that can be analyzed and predicted.
Along the way, Griffiths adds context by sharing short biographies of the key figures mentioned; where they came from, how they got interested in the mind, and how their ideas developed.
A compelling blend of cognitive science, language, and the history of artificial intelligence. Highly recommended.
Thank you to NetGalley and Macmillan Audio for the advanced listening copy.
I listened to the audiobook and really enjoyed it, though it is definitely dense, intellectually packed, and layered.
My former economics brain appreciated how rigorously the author connected math, logic, and human decision-making. The historical threads from early logic to modern AI were fascinating, and I loved how ambitious the scope is. It feels like a serious intellectual workout in the best way.
That said, this is not a casual listen. I had to rewind a few sections to fully grasp the arguments, especially in audio format. It is rewarding, but you need to stay engaged. I think I would have appreciated it a bit more had I read a physical copy.
If you enjoy big ideas about how humans and machines think and do not mind something substantial, this book is worth picking up.
Wasn’t sure what to make of this book going in, but I ended up completely fascinated by not only the topic, but by how it’s presented. It’s topical, especially due to the inclusion of AI, and it’s given me a lot to consider in terms of the history of thought and philosophy and progress that’s been made in these realms. Technology isn’t generally something I seek to learn about (I usually find it overwhelming), but here it’s made relevant and not just discussed for the sake of it. I appreciated the anecdotal and personal accounts woven into this book, as they help ground things in the everyday. Still had to read the book slowly to digest what was being conveyed, but I’m glad I did! A solid paperback ARC to have won in a Goodreads Giveaway!
4.5 stars. Really fascinating book about the different computational approaches to understand the mind over the years.
Griffiths covers symbolic systems, neural networks, and Bayesian approaches, along with the academic debates that shaped them. One part that stood out was the idea that human brains can learn from so few examples, compared with LLMs, because evolution gives us strong priors about how the world works. In that sense, LLM training, which starts from truly random weights, has to account both for a person's "learning" before they were born and for their learning in childhood.
I’m excited to see how the field continues to evolve!
Perfectly sums up the history of cognitive science and the drive to uncover a computational, algorithmic, and implementational mathematics-driven model of the human mind. I thoroughly enjoyed the transitions and progression between the concepts and theories prevalent in the early 20th century and the AI tools raging in the modern world. A grounding in basic mathematics would help in understanding the book in greater detail; a fast-paced and amazing read nonetheless.
It was a good account of the history of logical thought. I enjoyed the comparisons and contrasts between human and machine intelligence, including the role of natural selection in priming human brains for language and the impact that has on the training required for language models. The explanation of hidden Markov models gave me an aha moment as to how LLMs work.
Although I expected this book to introduce new material, it ended up being more of a historical synopsis of the interactions between cognitive science and AI, which was enjoyable in its own right.