'A terrific book - essential reading for everyone seeking to make sense of Artificial Intelligence' Professor Sir Adrian Smith, Director and Chief Executive of the Alan Turing Institute
In this myth-busting guide to AI past and present, one of the world's leading researchers shows why our fears for the future are misplaced.
The ultimate dream of AI is to build machines that are, like us, conscious and self-aware. While this remains a remote possibility, rapid progress in AI is already transforming our world. Yet the public debate is still largely centred on unlikely prospects, from sentient machines to dystopian robot takeovers.
In this lively and clear-headed guide, Michael Wooldridge challenges the prevailing narrative, revealing how the hype distracts us from both the more immediate risks that this technology poses - from algorithmic bias to fake news - and the true life-changing potential of the field. The Road to Conscious Machines elucidates the discoveries of AI's greatest pioneers, from Alan Turing to Demis Hassabis, and what today's researchers actually think and do.
'Nobody understands the past, the present, the promise and the peril of this new technology better than Michael Wooldridge. The definitive account' Matt Ridley, author of The Rational Optimist
'Effortlessly readable. The perfect guide to the history and future of AI' Tom Chivers, author of The AI Does Not Hate You
Ironically or not, the best way to understand science is often through history. As Wooldridge relates early on, he planned this book as 'the story of AI through failed ideas' - and in this he succeeds brilliantly in showing the fascinating and compelling slog towards Artificial Intelligence.
This is an outstanding read. It is passionate about the technology, yet sceptical about its achievements. Wooldridge - Head of the Computer Science department at Oxford University - is humane in his judgements, yet clear and logical in his assessments. There's enough to get your teeth into, but little to scare away the general reader. He mainly talks technology (its logic and approach rather than specific kit), but engages some of the more interesting and important social and philosophical problems.
The benefit of a history of AI is that it lets you see two things more future-oriented works can obscure. First is how far we are from Terminator-type scenarios and how unlikely it is that they will emerge in our lifetimes. Wooldridge pushes a very cautious message about modern AI, noting how waves of faddish optimism have nearly killed the field twice before, and that greater caution is therefore required today. This is an academic surveying his field, highlighting its strengths while issuing a warning about how it is being received and understood publicly.
Second, and true to his word, Wooldridge provides a historical narrative of how idea after idea has failed in AI. Yet along the way he shows that we've learned just a little bit more, found dead ends, debated grand ideas, and slowly developed quite remarkable contemporary systems. This is quite compelling. As someone who works in an academic field which does not evolve in anything like the way the physical sciences do, I am modestly envious of the sense of progression and development which is charted. It's empirically closer to Kuhn than to fables about a linear search for 'The Truth', but there's still a clear enough message of learning and improvement to leave you admiring the people involved and the brilliance of humanity in general.
I've read a lot of books on AI lately for a journal article and this is by far the best general introduction and analysis of the issues. Other books (such as The AI Delusion by Gary Smith or The Globotics Upheaval by Richard Baldwin) may capture certain areas in more depth, but by giving us the history and remarkably clear descriptions of how these systems work, it offers both caution and encouragement.
What a great read! Aimed at the educated layman, this history of AI and machine learning is a must-read for anyone interested in the field. Wooldridge takes you through the developments, focusing both on the failures and the successes, with just enough science to make it understandable and further explanations in the appendices for those not suffering an irrational fear of algorithms. Big five stars for this stimulating history and the discussion about future fears and possibilities.
A very fluent introduction to the historical background and basic concepts of artificial intelligence, written by a computer scientist from the University of Oxford. Usually these popular science introductions are populated with lots of anecdotes and metaphors to get the message across, but this wasn't Wooldridge's approach. He does not shy away from the technicalities but still manages to keep it simple enough for beginners and those who want a general idea of the concept. The book might feel a bit claustrophobic - but still not boring - at times, since it stays more on the technical side and does away with the storification and the trivia.
In Chapters 7 and 8, he discusses the trendy misconceptions about the future of AI and does a very good job of highlighting the actual concerns we should have about AI.
I didn't expect to read this so quickly, but I super enjoyed it. I love Wooldridge's style. He seems like a genuine man who honestly wants to share an understanding of what AI actually is and where it is actually headed in the future. It's never patronising and never hollow in its explanations (such as mentioning a programming technique and then never explaining what it means, or giving an explanation that leaves me feeling cheated of information because it's 'too hard' for my understanding). Always excellent diagrams and breakdowns that somehow weren't dry and boring. Wow, well done.
I enjoyed reading the chronological history of AI through its highlights and failures. While it evidently cut out a lot of history, it also felt like a fantastic entry point, with descriptions that really WANT to convey the significance of each landmark to a layman.
The subsequent chapters on philosophy, socio-economic factors, consciousness, and real-world current progress - this was overall just fascinating. Despite the overall message of "we will probably never achieve conscious machines - sorry, but really we won't - here is why", it was very positive about what can be achieved and offered a new perspective on what we SHOULD be worried about.
A brilliant introduction to the history of AI, told in engaging stories which are highly accessible (with appendices in the back for those who want to dig deeper). I found myself recounting bits and bobs of this book to friends & family, helping to make sense in my mind of what AI is grappling with in its quest to be useful, intelligent, and maybe even conscious. The background and analogues to evolutionary biology, psychology, childhood education, and other fields were absolutely fascinating. A must read for those who don't know where to begin with AI. I hope Wooldridge writes a supplemental chapter or two or three in 2030 (this was first published in 2020).
Such a great book! Beginning with the definition of AI, then introducing its history, Michael concludes at the end with where we are going with AI. Even though his answer to "Would these machines really have a mind, consciousness, self-awareness?" is "we just can't answer this question" - well, better than an ignorant, confident but wrong answer. This book makes many academic terminologies easy to understand, which is important for readers to really get into AI. Starting with Turing, ending with Turing, not to mention the rigorous logic used in between, Michael has made the whole reading journey inspiring and enjoyable.
• Perceive-reason-act loop - very much like the human pattern of behaviour: the senses' contact with the world is information gathering; once the information reaches the brain, the brain issues instructions to the body to act, and the final result is behaviour. Interesting to see that Lu Qi's recent talks also use this model.
• Ethical AI - we want AI to solve ethical problems that humans themselves haven't solved? We want AI to converge on a single ethical standard that humans themselves have never agreed on? Still, it was very interesting to see the author explain how a 'male AI' came about.
• Going forward, I should develop the parts of my own work that involve mental creativity, social skills, and rich degrees of perception and manual dexterity.
• The current state of AI feels as though the flesh of a human body has been built but the skeletal frame has not been properly assembled. Then again, perhaps the weights in neural networks that we humans cannot explain are AI building its own, different skeleton in its own way (even if we don't understand it). Note: different.
• Hinton found his own faith in his research, survived the AI winter, endured the solitude, and withstood the AI frenzy. What is my research faith - the research conviction I could love for a lifetime?!
if i could’ve written a book, it would be this. this is SO well explained & an easy read. i ended up not putting it down really, reading it in the quiet periods at work. everything in it is so relevant too, even 4 years later. plus i found myself taking about 30 pictures of pages which i’d use to write an essay.
I particularly enjoyed this excellent book on the history, state of the art and future of Artificial Intelligence. As an engineering student I had a brief introduction to techniques like Neural Networks back in the early 90s, but the techniques were horribly limited by the lack of computing power available at the time.
So it was interesting to read that many of the basic techniques haven't changed that much nowadays; it is just the scale of the computation that has led to the differences. It was also fascinating to see how the term AI has evolved over the years.
Well worth a read if you have the slightest interest in the topic.
For someone with a mathematical background, I found the first half of the book fairly tedious, although this is no fault of the author. Once the book moves from historical progress to future development and hypothetical scenarios, it truly starts to shine.
The author has a refreshingly pragmatic view on how AI will affect our lives, giving the reader plenty of pause for thought.
So, this book was FUN. Actually, this book BECAME fun towards the end, chapter 7 onwards to be more specific.
Let’s start with why I did not enjoy the book as much as I expected to. The book aims to do multiple things all at once: it wants to provide a general introduction to AI techniques, it wants to provide an update on intelligent machines, it wants to chronicle the development of AI, it wants to give us an update on where, who and what AI is up to AND, finally, it wants to provide a conclusive argument on the Terminator scenario. It actually manages to do all of that, just at the expense of the overall reading experience. It really was a little scattered, and I found myself trying to reorganise some of the interesting parts into a more meaningful form at the back of the book, so I had a ‘Meet the Robots’ section and a section on techniques that I found interesting. So I did end up with some valuable new insight, it was just a little taxing in that sense.
Other than the overall flow, the tone and marvellous ‘British Sarcasm’ of the author come through beautifully. It really was filled with humour and unexpected side notes that made it an overall highly engaging read. The book really flourishes, however, from chapter 7 onwards. The discussions on philosophy, the singularity, the impact of Marxism and the socialist utopia, the ‘Trolley Problem’ and ethical AI, the ‘Chinese Room’ scenario and deep fakes were an absolute joy to read. The author really communicated complex dimensions of consciousness and a realistic AI-driven future with solid convincing arguments, although he shattered my heart in ‘The Singularity is Bullshit’ section. Nonetheless, it really was just a joy going through these discussions with the author.
There was one particular theme that really bonded the book together, and that was the ‘Alan Turing Test’. So, do we need machines that are actually conscious, or is the illusion of consciousness enough? We as humans might only be exhibiting an illusion of these traits ourselves in any case. This really calls for some further contemplation. Basically, the whole narrative felt like a heartfelt homage to Alan Turing, which I thought was beautiful.
Finally, I DO recommend it - maybe start with the end if you are short on time - and I would also add that ‘In Our Own Image’, which kind of set the bar for AI books for me personally, is a must if you are interested in the topic.
Any debate involving the topic of Artificial Intelligence (AI), in contemporary times, more likely than not leads to an intensely splintered outcome. On one side of the debate stand avowed optimists going to great lengths to extol the Panglossian prospects of AI. Noted proponents of this notion, such as the American computer scientist and futurist Ray Kurzweil, even dwell at length, and in all seriousness, on concepts such as the Singularity, the advent of which would blur the distinction between man and machine in terms of intellect and attendant faculties. At the other end of the continuum hold forth the very Cassandras of doom. Warning against indiscriminate belief in AI and the untrammelled expenditure of monetary and human capital on AI research, these pessimists fear the time when machines would take over mankind and reduce humans to mere lab rats. The author James Barrat, the late theoretical physicist Stephen Hawking and one of the world's richest individuals, Elon Musk, form part of this latter brigade.
So, what exactly is the future of humanity vis-à-vis AI? Are we careening towards a Terminator-like scenario where a Skynet in future would send us into oblivion? Or are these fears an unfortunate figment of an overworked imagination running riot? Noted computer scientist Michael Wooldridge attempts to hack away at the cobwebs of confusion and provide a balanced and nuanced perspective on the theme of AI in his extremely accessible book, The Road to Conscious Machines.
In a painstaking yet compelling fashion, Wooldridge traces the trajectory of the domain of AI, guiding his readers through the peaks and troughs of developments in AI - AI winters (periods characterised by a drought in funding for AI-related research and a loss of confidence in innovation) and golden ages - before finally concluding in a quasi-philosophical manner about the terrifying prospects of machines lording over men.
At the heart of AI research, the name of Alan Turing stands out like a beacon of hope and ingenuity. This brilliant mathematician, whose life was as tragic as it was productive – he was found dead in his bed, suspected of having consumed cyanide, following a March 1952 conviction for “gross indecency”, that is to say, homosexuality, and a sentence of 12 months of hormone “therapy” that amounted to chemical castration – was a standout genius who worked at Bletchley Park and assisted in decoding the German Enigma encryptions, thereby paving the way for an Allied victory in World War II. Alan Turing also took it upon himself to tackle the Entscheidungsproblem (“decision problem”) posed by the German mathematician David Hilbert. This problem asked whether every question in mathematics can be “decided” – solved with a “yes” or “no” answer. Turing employed a theoretical model of computation to demonstrate that there exist problems for which calculation alone cannot provide a solution.
The first dedicated conference on AI was held in 1956 at Dartmouth. Organised by John McCarthy (the man who also coined the term Artificial Intelligence), the delegate list included future Nobel laureate John Nash and soon-to-be stalwarts of AI such as Allen Newell, Marvin Minsky, and Herb Simon.
The period between 1956 and 1974 is commonly referred to as the Golden Age of AI. The first serious attempt to build a robot led to the unveiling of SHAKEY, a robot capable of perceiving its environment, understanding where it was and what was around it, receiving tasks from users, and planning how to execute those tasks before finally proceeding to complete them. However, the Golden Age came to an unfortunate end following the publication of the Lighthill Report. James Lighthill, Lucasian Professor of Mathematics at Cambridge University, penned a report expressing disdain for mainstream AI, thereby turning off the funding spigot.
AI made a resurgent comeback a couple of decades after the above ‘Winter’. IBM’s supercomputer Deep Blue defeated the then reigning chess world champion Garry Kasparov in 1997. The world of AI attained dizzying proportions following the acquisition of a London-based AI firm called DeepMind by Google in 2014. A start-up founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in September 2010, DeepMind copied the way neurons communicate in the brain in virtual structures called “neural nets”. DeepMind has performed some amazing tasks such as recognising images and game-playing. In March 2016, AlphaGo, a DeepMind programme, beat Lee Sedol—a 9th dan Go player and one of the highest ranked players in the world—with a score of 4–1 in a five-game match. In 2017, an improved version, AlphaGo Zero, defeated AlphaGo 100 games to zero. AlphaGo Zero’s strategies were self-taught. AlphaGo Zero was able to beat its predecessor after just three days with less processing power than AlphaGo; in comparison, the original AlphaGo needed months to learn how to play.
But as Wooldridge explains, even the most sophisticated of these systems remain many orders of magnitude less complex than a human brain. As Wooldridge illustrates, life hides manifold complexities that put to shame the intricacies present in a 19×19 Go grid. While neural networks may have heralded the promise of self-learning systems there is still no comparing artificial neural nets to the structure of the brain.
Mankind does not yet have a theory of the mind that can conclusively claim to dissect and decipher its workings. The mind cannot be the outcome of a loose agglomeration or coalescing of handy reductionist theories. This one fact alone is sufficient to provide reassurance that the gloom-and-doom Terminator scenarios remain urban legends.
Hence, we would do better to address instead some of the real pernicious effects that are unintended consequences of AI – the displacement of jobs and dislocation of the workforce, deepfakes, sock puppets and a whole swirling assortment of technology-induced, dangerous propaganda machines.
Rather disappointed by this one. Given that he is head of CS at Oxford, I had high expectations that Wooldridge would bring sufficient depth to the discussions about AI. The book had little. The first half was a recount of some significant milestones in the history of AI's development. Rather than revealing details of AI techniques, these chapters serve more as an icebreaker introduction to basic CS concepts. Anyone with a basic understanding of programming can safely skip the entirety of Part I.
Parts II and III were more interesting, as Wooldridge provided his opinions about the current and future conditions of AI: the myths, the promises, and the dangers. However, these were not explored in depth either. The sections read more like individual essays, as they lacked coherence.
This book won’t provide you with much new, unless you’ve never read anything on AI other than news titles.
I found this book to be a good summary of the history of AI. It is closely linked into the history of computing from the early days of Alan Turing to the present day. Michael has a broad knowledge of many aspects of AI, robotics, machine learning and he is generous with sharing his knowledge.
At the end of the book he also dives into the ethics debate and into the future of AI - should we be worried? Well, there are a lot of things to worry about in the future, but not necessarily those in the populist media's thinking about AI.
A clear, level-headed, and easy to read introduction to the current (2020) state of artificial intelligence. I particularly liked the historical section on "how did we get here" which was very well written as the author has lived and contributed through much of this history, and the final section on "conscious machines?". The further reading section was very helpful in deciding where to go next in this field.
A brief history of AI and where AI is heading. The beginning of the AI journey can be traced back to the 1930s, when Turing invented the Turing machine, which is actually a mathematical model for solving decidable problems - problems which, with a definite recipe and a sufficient amount of time, can be determined to be true or false. That formed the foundation of computer science which, with the development of ever more powerful computing hardware, led to the breathtaking developments we are seeing today.
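To make the idea of a decidable problem concrete, here is a minimal sketch of my own (not from the book), using Python purely as an illustration: a decidable problem is one for which a recipe is guaranteed to halt with a yes/no answer on every input.

```python
# A minimal illustration of a decidable problem: primality testing.
# For any input n, this procedure is guaranteed to halt with True or False,
# which is exactly what "decidable" means in Turing's sense.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:        # loop terminates: d grows while n stays fixed
        if n % d == 0:
            return False
        d += 1
    return True

print([k for k in range(20) if is_prime(k)])   # [2, 3, 5, 7, 11, 13, 17, 19]

# By contrast, Turing showed that no such guaranteed recipe exists for every
# question - e.g. "does an arbitrary program halt on a given input?" cannot be
# decided by any single procedure that always terminates with a correct answer.
```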
At the root of it all, in the early years of AI, the ‘intelligence’ was merely following a set of preset instructions (algorithms) to solve specific problems that were isolated from the real world. As such, despite the early wonderment, AI failed to live up to expectations, given the lack of practical applications. With that, the ‘golden age’ of AI fizzled out into the background.
The next wave of AI development hypothesised that knowledge is power. In the ground-breaking MYCIN project, scientists equipped a computer system with hundreds of rules about blood diseases. The results showed that the computer's ability to identify blood diseases was comparable to, if not better than, expert human judgement. 1-0 to the bots! Not so fast - in an even more ambitious effort, the Cyc project, a computer was loaded with as many knowledge-based rules as possible to see if this brute-force method could miraculously lead to true AI. What the project uncovered was that computers generally deal well with logic-based arguments, e.g. ‘The pope is human. All humans are mortal. Hence the pope is mortal’. However, computers are not very good at managing contradictions, which requires common sense.
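As a rough sketch of the kind of rule-based reasoning described above (my own toy example, not code from MYCIN, Cyc or the book), a tiny forward-chaining loop is enough to derive the syllogism's conclusion:

```python
# A toy forward-chaining rule engine in the spirit of early expert systems.
# Facts are plain strings; each rule maps a set of premises to one conclusion.
facts = {"the pope is human"}
rules = [
    ({"the pope is human"}, "the pope is mortal"),   # encodes "all humans are mortal"
]

derived_something = True
while derived_something:                 # keep firing rules until nothing new appears
    derived_something = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived_something = True

print(facts)   # {'the pope is human', 'the pope is mortal'}
# What such engines lack is common sense: if two rules derive contradictory
# facts, nothing here knows how to resolve the conflict.
```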
The evolution of AI then took a turn towards what we are today familiar with: agent-based AI. In this approach, the computer takes on the perceive-reason-act loop, in which an input is processed through prescribed logic or a logic hierarchy to produce an output. A simple example: in a word processor, tapping the keyboard results in a corresponding output on the screen. The agent-based AI is then equipped with Bayesian reasoning, continuously updating its outputs based on new inputs. The crowning victory of this era of AI was the Deep Blue machine's defeat of chess grandmaster Garry Kasparov.
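A bare-bones sketch of such a perceive-reason-act loop with a Bayesian update (my own illustration, under the simplifying assumption of a single noisy sensor, not anything taken from the book) might look like this:

```python
# Illustrative perceive-reason-act loop: an agent keeps a belief that a room is
# occupied and revises it with Bayes' rule each time the (noisy) sensor reports.
def bayes_update(prior, reading, p_if_occupied=0.9, p_if_empty=0.2):
    # P(occupied | reading) = P(reading | occupied) * P(occupied) / P(reading)
    like_occ = p_if_occupied if reading else 1 - p_if_occupied
    like_empty = p_if_empty if reading else 1 - p_if_empty
    evidence = like_occ * prior + like_empty * (1 - prior)
    return like_occ * prior / evidence

belief = 0.5                                   # start undecided
for percept in [True, True, False, True]:      # perceive
    belief = bayes_update(belief, percept)     # reason
    action = "lights on" if belief > 0.7 else "lights off"   # act
    print(f"belief={belief:.2f} -> {action}")
```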
AI’s next breakthrough was the development of machine learning. The fundamental requirement for machine learning is lots and lots of data. A simple machine learning methodology is what is called supervised learning, where the machine is fed data whose correct answers are already known. However, this method requires a huge amount of resources and is nowhere near what we consider general intelligence. Another learning method arose, called reinforcement learning, where the computer works towards a certain outcome and positive or negative feedback is constantly fed back to form correlations or reinforcements for future iterations. This method of learning very much mirrors how our brains work, where, through experience, the synapses in our brains adjust the thresholds at which electrical signals are fired when certain sensory inputs are received. The world woke up to this amazing development when the DeepMind program AlphaGo beat world champion Lee Sedol at the complex game of Go through machine learning and playing against itself. The possibilities seem endless.
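The reinforcement idea - act, receive positive or negative feedback, adjust - can be sketched in a few lines (a deliberately simplified two-armed bandit of my own devising, nothing like AlphaGo's actual training code):

```python
import random

# A two-armed bandit: the agent tries actions, gets reward feedback,
# and nudges its value estimates so the better action gets chosen more often.
true_win_prob = {"left": 0.3, "right": 0.7}   # hidden from the agent
value = {"left": 0.0, "right": 0.0}           # the agent's learned estimates
alpha, epsilon = 0.1, 0.1                     # learning rate, exploration rate

for step in range(1000):
    if random.random() < epsilon:                      # occasionally explore
        action = random.choice(list(value))
    else:                                              # otherwise exploit the best estimate
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < true_win_prob[action] else 0.0
    value[action] += alpha * (reward - value[action])  # reinforce towards the feedback

print(value)   # the estimate for "right" should end up clearly higher
```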
So does that mean we are close to cracking the intelligence code? While the development of AI has been amazing and scary, we are still a long way from achieving human intelligence (whatever the definition of human intelligence is). Computer systems still pale in comparison to our own neural networks, especially when it comes to their complexity and layering. Furthermore, machine learning thrives on problems that can be logically reasoned about in the context of a decidable problem. However, there are real-world practical problems that are undecidable, and that is where computers fail.
Nevertheless, the development of AI will continue to bring advancements to human lives. As the MYCIN project demonstrated, there are tasks that computers can do better than humans. Automation in healthcare and driving will be really exciting in the near future. As computing power continues to increase with the rise of quantum computing, more and more complex problems can be solved with AI.
How real is the threat of the Singularity or a Terminator-esque dystopian future? The author thinks that it is still a far, far remote possibility. And for a system to become self-aware, it takes more than just hard-core computing power and massive amounts of data. The fundamental constraints - the inability to process contradictions or to get itself out of a loop - still bug even the best computers today.
I bought this book on a whim one day because I suddenly felt bad about not reading anything related to my field of work. I almost immediately regretted it because once I started looking into it the book seemed too simple and too oriented towards readers who are not from computer science, so initially I felt like I'd just be wasting my time reading this. Thankfully, I was wrong.
Michael Wooldridge is a Computer Science professor at Oxford; nevertheless, he's talented at explaining hard concepts in an easy-to-understand way without oversimplifying them. More importantly, he's very knowledgeable about the history of computer science and AI, which is not something they really focus on in computer science undergraduate classes (at least not in mine!). So this book was a great read, because it goes very linearly from Alan Turing to Google buying DeepMind while explaining how each of the technologies that now form the very hyped field of AI came from each of these individual events.
While both Alan Turing and DeepMind are very well known, this book introduced me to a lot of topics I had maybe heard about but didn't really know well, such as John McCarthy and the Dartmouth Conference, MYCIN, Blocks World, etc. I also didn't realize that neural networks were seen as an abandoned area of research until recent years, which made me rethink how I see some areas of computer science that are deemed irrelevant today. So even as a graduate student in CS I ended up learning a lot!
This book also doesn't shy away from giving technical information, but it does so in a way that makes everything very accessible, even to someone who has never studied anything in CS. An example is Bayes' Theorem: it's very necessary for understanding modern AI techniques, so the author brings it up and gives readers the basics to understand it, while pointing to an appendix at the end of the book for more details. The same is done with 'rules' (for rule-based techniques) and PROLOG. It also spends quite a while explaining neural networks, which is important for people to truly understand DL.
In conclusion, I really recommend this book. If you're not in CS/Informatics, this is the best introduction you can have to AI. If you are, you need to read this so you can recommend it to non-CS people - and also so you know how to explain things whenever someone asks you what you work with, or if you know the concepts of AI and all but not the history. I'm only giving it 4 stars because I found the discussion in the latter chapters of the book around the ethics/effects of AI on modern society a bit of a bore, though it is important. But this is a great read!
An important note: this book is a bit older, so it does not cover newer techniques such as LLMs (ChatGPT and the like) or modern image-generation models (Midjourney, etc). But nothing the book says is really invalidated by this; the book will just probably need some new chapters in future editions.
An interesting, (mostly) non-technical intro to the broad field of Artificial Intelligence, looking at its historical developments and the current situation in a few fields. While in the first half there are some mathematical examples (which is great), later it becomes less specific and more on the level of popular-science writing (although this does not diminish the immense expertise and experience of the author), and in the final chapter almost too philosophical.
I really enjoyed the author's common sense and grounded approach when talking about the possibilities for AI - his analysis of driverless cars and AI in healthcare is, from my perspective, quite in the middle between techno-optimism and pessimism. As a CS professor, he knows the limitations of the current technologies, but also the potential and the near-future developments.
It is an enjoyable book that can prime an AI-layman for looking at the various AI fields in a similar way as an expert might do. As the title says, its main focus is on the "consciousness of machines", which is a very different story than an inquiry into the current working of AI - and readers should be aware of that.
3.5/5 An ideal text for getting a well-organised overview of artificial intelligence. In contrast to the nonsensical ideas about AI circulating on the internet, for those who want to learn what AI is and what it is not, the author builds a narrative along a sensible historical line of development, free of speculation. You find answers to how AI started, how it fell out of favour, and why it has become popular again today. It is a text in which philosophy and engineering are somewhat intertwined. The writing style is pleasant, but in places it can be detailed and challenging for those unfamiliar with the subject. For those already well versed in the field the book does not contain any great new ideas, but it does not claim to either. As I said at the start, it is best read to get a general picture. Since I was also expecting more of the philosophical dimension, I can't say it fully met my expectations.
Michael Wooldridge's 'The Road to Conscious Machines' provides a detailed history of AI, from Alan Turing's ideas to modern advancements like DeepMind. The book delves into significant projects, thinkers, and the alternating "AI golden ages" and "AI winters." An intriguing section by Rodney A. Brooks titled "Intelligence without representation" offers profound insights. Wooldridge concludes by discussing the future of AI, distinguishing between exaggerated risks and genuine concerns. The book is a recommended read for both tech enthusiasts and professionals, offering clarity without excessive jargon.
The book is lucid, except for some parts in between. It quite comprehensively covers many topics and is an easy read. I had read another book on AI (Artificial Intelligence - A Guide for Thinking Humans) just before this one. Both books have quite overlapping content, but it helped me consolidate my viewpoint. Additionally, this book covers some vital areas which the other one, rather surprisingly, missed. It gets a bit technical in between for around 20-odd pages, but is worth the patience. Worth a read to understand the current state of AI.
Highly recommended accessible book on AI. Covers the history of the field (in an interesting and insightful way), and where it is today. Highlights what we should not worry about, and what we should. Professor Wooldridge is one of the leaders in the field, and also writes very clearly and accessibly.
Honestly, it's just like any other book about AI and its history. There is nothing new and no exciting concepts, and everything is described in a very basic way. So if you're already familiar with basic AI concepts, a bit of cognitive science and science in general, then it is not worth your time. Sorry!!!
A different look at AI from the perspective of its catalogue of failures and its inability to live up to the hype. Building from the Turing Test, the history of AI is explored and the reasons it fell short are discussed. The concepts are clearly explained in layman's terms without being patronising, and myths are dispelled.