- Recommended for tech history fans.
- Recommended for Alan Turing fans
- Recommended if you are confused about all the AI terminology: strong vs. weak AI, symbolic AI vs. neural networks, and Artificial General Intelligence (AGI)
- Recommended for math geeks interested in NP-complete problems
- Recommended for Expert System fans
- Recommended for “I, Robot” fans
I love these kinds of books. For somebody like me, they’re catnip. You see, I’m a sequential learner, and I can’t understand the current state of a subject until I know how it got there. The author, Michael Wooldridge, fills that need for me nicely.
Wooldridge is an academic. At the time of this writing, he is the Ashall Professor of the Foundations of Artificial Intelligence at the University of Oxford. He’s written nine books and published almost 500 papers on artificial intelligence starting as far back as 1995. But this isn’t a deep-dive technical book. You will not learn how to build neural networks here, and I wasn’t looking for that anyway. I was looking for an overview of the field. I greatly appreciated Wooldridge’s writing style. It’s conversational, like sitting next to a charismatic stranger at a dinner party. The audiobook was also easy to listen to while I was walking the dogs.
The book has nine chapters:
1: Turing’s Electronic Brains
He begins with the father of artificial intelligence, my computer science hero, Alan Turing. When Turing committed suicide at the age of 41, he had solved not one, not two, but three of the world’s most difficult problems.
He led the efforts at Bletchley Park to break the German encryption system (Enigma) during WWII. He probably helped save some 20 million lives because of it.
He proved mathematically that general-purpose computers were possible with a thought experiment now called the Turing Machine. Today, every computer that has ever existed is, at its core, a version of his Turing Machine. And, by the way, he proved that the Entscheidungsproblem is unsolvable. Yes, I spelled that correctly: the Entscheidungsproblem, a problem posed by David Hilbert and Wilhelm Ackermann back in 1928 asking whether there is an algorithm that, given any logical formula, can always answer “yes” or “no” as to whether the formula is valid or provable. Turing used his machine to prove there is no such algorithm.
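Wooldridge doesn’t walk through the proof, but the flavor of Turing’s argument translates nicely into a few lines of Python. The `halts` function below is hypothetical by design; the whole point of the argument is that it cannot exist:

```python
# A sketch of Turing's diagonalization argument, translated into Python.
# Suppose a function halts(program, argument) existed that could always
# decide whether program(argument) eventually halts. (It cannot; `halts`
# here is purely hypothetical.)

def halts(program, argument) -> bool:
    """Hypothetical universal halting decider -- assumed, never implemented."""
    raise NotImplementedError

def troublemaker(program):
    # Do the opposite of whatever `halts` predicts about a program
    # analyzing itself.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever instead
            pass
    else:
        return        # predicted to loop forever -> halt immediately

# Now ask: does troublemaker(troublemaker) halt? If halts() says yes,
# it loops forever; if halts() says no, it halts. Either way halts() is
# wrong, so no such algorithm can exist.
```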
Finally, and most germane to this book, Turing devised the now-classic test to judge if a machine is intelligent, the Turing Test (not to be confused with the Turing Machine). Essentially, a machine and a human sit behind a screen. A judge sits in front and asks questions of both. If the judge can’t distinguish the machine from the human, then the machine is intelligent. If you want a dramatized introduction to Turing himself, check out the 2014 movie, “The Imitation Game,” with Benedict Cumberbatch playing Turing.
Turing’s 1950 paper, “Computing Machinery and Intelligence,” which opens by describing the imitation game, fueled research teams for years trying to pass the Turing Test. Unfortunately, many tried to “game” the test. They didn’t try to build an intelligent machine. They tried to fool the judge.
The most famous of these experiments was Joseph Weizenbaum’s program, ELIZA. It was one of the first computer programs to simulate human-like conversation using natural language processing, marking a milestone in human-computer interaction. If you’re interested, Norbert Landsteiner created an online version of it in 2005 (LINK). According to Wooldridge, “[ELIZA] was a serious and influential scientific experiment in its own right—but sadly, it has since become synonymous with superficial approaches to AI generally and the Turing test in particular.”
Wooldridge explains the difference between Weak and Strong AI. Weak AI programs demonstrate capability without any claim that they actually possess consciousness. A subset of those programs is called Narrow AI: programs that can carry out specific tasks. If you want an example, watch the 2025 Superman trailer when Superman’s medical robots say, “No need to thank us sir as we will not appreciate it. We have no consciousness whatsoever; merely automatons here to serve.” That’s Weak AI.
Strong AI has the goal of building programs that really do have understanding (consciousness). The important thing to remember, though, is that Strong AI research is largely irrelevant to contemporary AI research. When you think of Strong AI, think of the 1984 movie “The Terminator” when Skynet wakes up, becomes self-aware (we call that the singularity in the sci-fi biz), and decides to wipe out the human race.
What Weak AI researchers are pursuing is machines with general-purpose, human-level intelligence: the ability to converse in natural language, solve problems, reason, perceive their environment, etc. at the same level as a typical person. This is what everybody refers to as Artificial General Intelligence (AGI). AGI usually isn’t concerned with issues such as consciousness or self-awareness, so AGI is a form of Weak AI. In a 2025 podcast, Demis Hassabis (DeepMind’s CEO) said that he expects AGI some time in the early 2030s (only five or so years away). He said that DeepMind has definitely passed the middle game and is heading toward the endgame.
According to Wooldridge, there have been two foundational strategies in AI’s development. The first, symbolic AI, seeks to model the mind by using symbols to represent concepts and actions. This approach dominated from the 1950s to the late 1980s and was prized for its clarity but limited by its rigidity.
The second, neural networks, takes inspiration from the brain’s structure, modeling artificial neurons to process information. Neural nets have driven much of the recent progress in the field. Wooldridge emphasizes the stark methodological differences between symbolic AI and neural nets, noting that both have cycled in and out of favor and have even sparked rivalry among researchers.
2: The Golden Age (1956 to 1974)
Wooldridge chronicles the early optimism and foundational moments of AI, beginning with John McCarthy’s coining of the term “artificial intelligence” in 1955. Key pioneers (McCarthy, Marvin Minsky, Allen Newell, and Herb Simon) established influential AI labs and set the field’s direction. Early AI focused on four main capabilities: perception, machine learning, reasoning, and natural language understanding, producing legendary systems like SHRDLU and Shakey.
Search emerged as a key AI technique, but researchers soon discovered its limitations, especially for search spaces that grow exponentially. This led to an appreciation of decidability (whether a problem can be solved by a computer at all) and NP-complete problems.
NP-complete problems are a class of computational problems for which no efficient solution algorithm has been found. I was surprised to learn that this idea is a relatively recent mathematical discovery. Stephen Cook published his 1971 paper, “The complexity of theorem-proving procedures,” and identified the basic structure of NP-complete problems. Afterward, it turned out that nearly every problem AI researchers were pursuing (like problem solving, game playing, planning, learning, and reasoning) fell into the NP-complete bucket. This led AI researchers to pursue Bayesian reasoning to get approximate answers and to develop other tools like Boolean satisfiability solvers (SAT solvers). A SAT solver asks whether there exists at least one combination of variable assignments that makes an entire logical expression evaluate to TRUE.
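To make that concrete, here is a minimal brute-force satisfiability check in Python. The toy formula is my own invention, and real SAT solvers use far cleverer search than trying every assignment, but the sketch shows why the problem explodes: n variables means 2^n candidates.

```python
from itertools import product

# Toy formula over variables a, b, c:
# (a OR NOT b) AND (b OR c) AND (NOT a OR NOT c)
def formula(a: bool, b: bool, c: bool) -> bool:
    return (a or not b) and (b or c) and (not a or not c)

# Brute force: try all 2^3 assignments. This is exactly why SAT is hard;
# with n variables the search space is 2^n.
for assignment in product([False, True], repeat=3):
    if formula(*assignment):
        print("Satisfiable, e.g. (a, b, c) =", assignment)
        break
else:
    print("Unsatisfiable")
```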
3: Knowledge Is Power
Wooldridge explains that John McCarthy established the paradigm of logic-based AI in his seminal 1958 paper “Programs with Common Sense.” An agent expresses its knowledge about the world using logical sentences and decides what to do by deducing which steps will help achieve its goals.
By the 1970s, expert systems built on this Common Sense paradigm emerged, like MYCIN, DENDRAL, and R1/XCON. When I was in grad school in the late 1980s, I wrote my own expert system for my thesis. It was awful, but it demonstrates that the topic was on everybody’s mind (at least on the minds of my thesis advisors). These systems are built on rule-based logic, excel at solving narrow, specialized problems, sometimes even outperform human experts, and some delivered significant commercial value.
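The book stays prose-only, but a tiny forward-chaining sketch in Python gives a feel for what “rule-based logic” means in practice. The rules and facts here are invented for illustration; real MYCIN rules were far richer and carried certainty factors.

```python
# A minimal forward-chaining rule engine, in the spirit of 1970s expert
# systems. Each rule maps a set of required facts to a new conclusion.
# All rules and facts below are invented for illustration.

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_antivirals"),
]

facts = {"fever", "cough", "high_risk_patient"}

# Keep firing rules until no rule adds a new fact (a fixed point).
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes 'flu_suspected' and 'recommend_antivirals'
```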
The most ambitious example was Doug Lenat’s Cyc project, which aimed to encode all human common-sense knowledge. It ultimately revealed the limits of knowledge-based AI and the difficulty of extracting expert knowledge from humans. By the end of the 1970s, progress in the AI field had stalled, funding dried up, and the first “AI winter” set in. Skepticism prevailed as AI failed to deliver on its early promises.
4: Robots and Rationality
Rodney Brooks came to the rescue in the 1990s and 2000s by challenging the traditional, logic-heavy AI paradigm. He argued that intelligence is not just about abstract reasoning but is an emergent property of situated systems. His subsumption architecture supported the commercial Roomba robot vacuum by layering simple behaviors and enabling fast, reactive responses to the environment. For specific tasks, like vacuuming, his method proved effective, but it couldn’t scale due to the complexity of managing many behaviors. The AI research field started to move toward agent-based AI making rational, utility-maximizing choices, and turned toward Bayesian inference to deal with uncertainty.
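The subsumption idea is easy to caricature in code: stack simple behaviors in priority order and let the higher layers override the lower ones. A toy sketch of my own (the sensors and behaviors are invented, not Brooks’s):

```python
# A toy subsumption-style controller: simple behaviors stacked in priority
# order, with higher layers subsuming (overriding) lower ones.

def avoid_obstacle(sensors):
    if sensors["obstacle_ahead"]:
        return "turn_left"
    return None  # no opinion; defer to lower layers

def seek_dirt(sensors):
    if sensors["dirt_detected"]:
        return "move_to_dirt"
    return None

def wander(sensors):
    return "move_forward"  # default behavior; always has an opinion

LAYERS = [avoid_obstacle, seek_dirt, wander]  # highest priority first

def decide(sensors):
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(decide({"obstacle_ahead": False, "dirt_detected": True}))  # move_to_dirt
```

No world model, no logical deduction: the robot just reacts, which is exactly the point Brooks was making.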
Wooldridge says that by the mid-1990s, a consensus emerged that agents had to have three characteristics. First, they had to be reactive, attuned to their environment and able to adapt when changes occurred. Second, they had to be proactive in completing the task. Third, agents had to communicate with other agents when required. When I took a Waymo (Self-Driving Taxi) ride in San Francisco this year, the car ran into a traffic jam. A parked delivery truck blocked one lane on the road. The Waymo car in front of the delivery truck told my Waymo car that it was clear to go around, and so we did.
5: Deep Breakthroughs
In this chapter, Wooldridge details the rise, fall, and resurgence of neural networks. Frank Rosenblatt published the original idea back in 1957 in “The Perceptron: A Perceiving and Recognizing Automaton.” This was the first single-layer neural network designed for binary classification tasks. Unfortunately, according to Dr. Kais Dukes, in 1969 Minsky and Papert published a book, “Perceptrons,” criticizing the single-layer perceptron. Minsky had just received the prestigious Turing Award that same year for his contributions to AI, so his voice carried a lot of weight in the AI community. Even though Minsky and Papert discussed multi-layer networks (what we now call Deep Learning) in the same book, some say that his criticism of the perceptron contributed to the AI winter mentioned before.
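The perceptron itself is simple enough to fit in a dozen lines. Here’s a minimal single-layer perceptron learning the logical AND function, a toy example of my own rather than anything from the book:

```python
# A minimal Rosenblatt-style perceptron learning the AND function.
# Training data: inputs (x1, x2) and the target output for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # a few passes over the data are enough here
    for (x1, x2), target in data:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - prediction
        # Perceptron learning rule: nudge the weights toward the target.
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

# All four inputs are now classified correctly.
print([(x, 1 if w1 * x[0] + w2 * x[1] + bias > 0 else 0) for x, _ in data])
```

AND is linearly separable, so this converges. Minsky and Papert’s famous objection was that a single layer can never learn functions like XOR, which are not.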
Neural networks saw renewed interest with the development of the backpropagation algorithm by David Rumelhart, Geoffrey Hinton, and Ronald Williams in 1986, which made it feasible to train multi-layered neural networks. But the approach fell out of favor again because of computational constraints and the lack of large datasets.
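At its heart, backpropagation just pushes the prediction error backward through the network with the chain rule. Here’s a minimal sketch on a tiny 2-2-1 network; the weights and the training pair are arbitrary toys of my own, and the finite-difference check at the end confirms the gradient is right:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.0])            # input
t = 1.0                             # target output
W1 = np.array([[0.5, -0.3],         # input -> hidden weights
               [0.2,  0.8]])
w2 = np.array([0.7, -0.4])          # hidden -> output weights

# Forward pass.
h = sigmoid(W1 @ x)                 # hidden activations
y = sigmoid(w2 @ h)                 # network output
loss = 0.5 * (y - t) ** 2

# Backward pass: apply the chain rule layer by layer.
dy = (y - t) * y * (1 - y)          # error at the output pre-activation
grad_w2 = dy * h                    # gradients for output weights
dh = dy * w2 * h * (1 - h)          # error propagated to the hidden layer
grad_W1 = np.outer(dh, x)           # gradients for hidden weights

# Sanity check one gradient numerically (finite differences).
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
yp = sigmoid(w2 @ sigmoid(W1p @ x))
numeric = (0.5 * (yp - t) ** 2 - loss) / eps
print(grad_W1[0, 0], numeric)       # the two values should nearly match
```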
Neural nets started their latest resurgence back in 2009 when Dr. Fei-Fei Li brought ImageNet online: a large-scale, well-annotated image database built to improve computer vision research (about 14 million images as of 2021). This coincided nicely with cloud computing’s storage capacity and on-demand compute power. There were no more constraints. Neural networks became a key ingredient of machine learning alongside other techniques like linear regression, decision trees, clustering, and classification.
6: AI Today
Wooldridge highlights several key milestones from the late 2000s to 2018 that illustrate the rapid progress and broad impact of AI. Remember, he wasn’t aware of ChatGPT yet.
In 2009, Google started their Self-Driving Car Project. By December 2016, Google spun the project out as an independent subsidiary under Alphabet, Google’s parent company. In November 2017, Waymo rolled out their first fully autonomous taxi service in Phoenix, Arizona. Today (2025), they operate in five areas: Phoenix, San Francisco, Los Angeles, Austin, and Silicon Valley.
In 2013, DeepMind (a British company) published “Playing Atari with Deep Reinforcement Learning,” which described DeepMind’s deep Q-network (DQN). It could learn to play several Atari 2600 games directly from raw pixel input, outperforming previous algorithms and even surpassing human experts on some games. Remember that Wooldridge published the book before ChatGPT existed; he says that this achievement was the milestone that changed everything.
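DQN is essentially Q-learning with a neural network standing in for the lookup table. The underlying update rule is easiest to see in tabular form; here is a toy corridor environment of my own design (nothing like Atari, but the same learning rule):

```python
import random

# Tabular Q-learning on a 5-state corridor: start at state 0; moving
# right from state 4 earns +1 and ends the episode; everything else is 0.
N_STATES, ACTIONS = 5, [0, 1]          # 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    if action == 1:                    # move right
        if state == N_STATES - 1:
            return None, 1.0           # reached the goal: episode ends
        return state + 1, 0.0
    return max(state - 1, 0), 0.0      # move left

for episode in range(200):
    s = 0
    while s is not None:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r = step(s, a)
        target = r if s2 is None else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])   # the Q-learning update
        s = s2

print([round(max(q), 2) for q in Q])   # values rise as you near the goal
```

DeepMind’s contribution was replacing the table with a deep network that reads raw pixels, plus tricks like experience replay to keep training stable.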
Two years later, DeepMind demonstrated AlphaGo. The system defeated Fan Hui, the reigning European Go champion, marking the first time an AI had beaten a professional Go player. The next year (March 2016), AlphaGo defeated Lee Sedol, one of the world's top Go players, a feat that was considered a decade ahead of its time and watched by over 200 million people globally.
In 2018, Nvidia’s StyleGAN (a generative adversarial network or GAN) enabled the creation of hyper-realistic, entirely fake images of people who do not exist. That same year, DeepMind introduced AlphaFold to accurately predict protein structures; an achievement that revolutionized biology by solving a problem that had stumped scientists for decades.
And just to put icing on the cake, in April 2019, the Event Horizon Telescope collaboration unveiled the first-ever image of a black hole, a feat made possible by sophisticated AI algorithms that processed enormous amounts of astronomical data, marking a new era in astrophysics.
7: How We Imagine Things Might Go Wrong
Wooldridge describes how some researchers think AI could go wrong in the future, referencing dystopian scenarios like “The Terminator” and the concept of the singularity. He’s a bit skeptical though, noting the immense computational power and data storage that would be required to produce an outcome like a self-aware Skynet.
He refers to Nick Bostrom’s popular book “Superintelligence” and its meme-worthy “paperclip metaphor” to illustrate the dangers of poorly specified AI goals that lead to unintended and catastrophic consequences. It is a thought experiment where an AI is programmed with the task of maximizing the number of paperclips produced. It relentlessly pursues this goal by appropriating all available resources, including those essential to human life. It eventually converts the entire Earth into paperclips.
I’ve always thought that if we’re worried that Skynet, or the Paperclip AI, is going to kill us all, we should just imbue future AIs with Isaac Asimov’s Three Laws of Robotics, which he first published in the 1942 short story “Runaround” and later included in his influential 1950 collection “I, Robot.” He designed the laws to prevent robots from harming humans, ensure that they follow human orders, and allow them to protect themselves, but always with human safety as the highest priority. But Wooldridge points out that in many of Asimov’s stories, ethical dilemmas crop up as the robots try to square the circle between the rules and what’s actually happening in the story. Maybe Asimov’s rules will not save us after all.
Wooldridge then discusses the classical philosophical questions surrounding the trolley problem: a famous ethical thought experiment that presents a moral dilemma involving a runaway trolley headed toward five people tied up on the tracks. You are standing next to a lever that can divert the trolley onto another track, where it would kill only one person. What do you do? Wooldridge concludes with the idea that if humans can’t handle these questions, why would we expect an AI to figure them out?
Lastly, Wooldridge covers the emergence of ethical AI guidelines. He points to the 2017 Asilomar AI Principles and Google’s 2018 AI Principles as significant efforts to ensure AI development aligns with human values and safety.
8: How Things Might Actually Go Wrong
Wooldridge then turns to what he thinks might actually go wrong in the short term: not long-range, doom-and-gloom, planet-killer scenarios, but tactical short-term problems that we need to solve. He highlights things most of us already know today, like the fact that some jobs will become obsolete as AI systems become more and more capable. Further, how are we protecting ourselves from algorithmic bias, lack of diversity, and fake news? These problems are big enough without AI. Think about the trouble that might happen when AI systems exponentially amplify them.
9: Conscious Machines?
In the last chapter, Wooldridge turns his attention to the possibility of creating a Strong AI, like Skynet: not a planet killer per se, but an AI that has crossed the singularity boundary into consciousness. He covers some of the key thoughts from distinguished thinkers (like Daniel Dennett, Thomas Nagel, John Searle, Roger Penrose, David Chalmers, and Robin Dunbar) about the meaning of consciousness. The famous futurist Ray Kurzweil, who popularized the term singularity in his 2005 book “The Singularity Is Near: When Humans Transcend Biology,” says that machines will achieve it as early as 2045.
But all of that discussion illustrates why Strong AI research is not part of the Weak AI community. Wooldridge’s opinion, and I agree with him, is that it really doesn’t matter whether a computer program is self-aware. From the AI research community’s perspective, it all comes back to the Turing Test. If it walks like a duck and talks like a duck, it’s a duck. If we interact with a system that seems intelligent, then, for all intents and purposes, it is, whether it is self-aware or not.
In other words, it all comes back to Turing. Of course it does.
My Final Takeaway
I thoroughly enjoyed this book and learned a lot. I will recommend it to people like me who need to understand the entire evolutionary story before we can truly understand the current situation. I especially liked the Alan Turing discussion in Chapter 1. I’m not recommending it for the Canon Hall of Fame. But it is a splendid niche book.