
Rebooting AI: Building Artificial Intelligence We Can Trust

Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a robust artificial intelligence that can make our lives better.

“Finally, a book that tells us what AI is, what AI is not, and what AI could become if only we are ambitious and creative enough.” —Garry Kasparov, former world chess champion and author of Deep Thinking

Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the field, but they argue that a computer beating a human in Jeopardy! does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. The achievements in the field thus far have occurred in closed systems with fixed sets of rules, and these approaches are too narrow to achieve genuine intelligence.

The real world, in contrast, is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Taking inspiration from the human mind, Marcus and Davis explain what we need to advance AI to the next level, and suggest that if we are wise along the way, we won't need to worry about a future of machine overlords. If we focus on endowing machines with common sense and deep understanding, rather than on statistical analysis and gathering ever larger collections of data, we will be able to create an AI we can trust—in our homes, our cars, and our doctors' offices. Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of how a new generation of AI can make our lives better.

288 pages, Paperback

First published September 10, 2019

327 people are currently reading
3097 people want to read

About the author

Gary F. Marcus

15 books · 208 followers
Gary Marcus is an award-winning Professor of Psychology at New York University and director of the NYU Center for Child Language. He has written three books about the origins and nature of the human mind, including Kluge (Houghton Mifflin/Faber, 2008) and The Birth of the Mind (Basic Books, 2004, translated into six languages). He is also the editor of The Norton Psychology Reader and the author of numerous science publications in leading journals such as Science, Nature, Cognition, and Psychological Science. He frequently writes for the general public in forums such as Wired, Discover, The Wall Street Journal, and The New York Times.

Ratings & Reviews

Community Reviews

5 stars: 225 (20%)
4 stars: 491 (44%)
3 stars: 300 (27%)
2 stars: 65 (5%)
1 star: 14 (1%)
Displaying 1 - 30 of 153 reviews
Nick
Author 5 books · 10 followers
August 4, 2019
The central thesis of this book is that AI is not good enough: it is much closer to basic statistical inference than to something that understands the world like a human. However, that is really all the authors have to say; a short op-ed would have been just as valuable as a 200-page book.

They have a lot of examples, which do advance their point, but they make the writing feel repetitive. Yes, AI today cannot understand the implied points of a sentence. But the authors then provide a bunch of similar examples that don't add valuable context.

The proposed solutions are also unhelpful. They say that AI can't understand implied points, so the solution is to make it do that. Well, obviously researchers would do that if they could. The authors acknowledge this is hard, but don't seem to have much appreciation for the difficulty.

Overall, it's not a great read. The first chapter provides everything you need to know, and after that there's not much point in reading.
Jon Zelazny
Author 9 books · 52 followers
December 9, 2025
I’ve known about AI a lot longer than most people, in that one of my best buddies from high school has been in the field since the early nineties. These days, of course, you can’t check your feed without being besieged by AI references, and so many of them are negative, much to the enduring dismay of my guy on the inside.

This year's brouhaha over AI 2027 was the latest doomsday projection that Skynet will soon be able to out-think us, quickly realize how pathetic we are, and either enslave or exterminate us.

And I guess it was pretty persuasive, because the much-vexed colleague who broke it down for me over lunch is no dummy. After listening to her, I called my AI chum and lamented, “Everything she said about AI 2027 sounds like the same old bullshit you’ve always been dubious of, but I don’t have the sci/tech background to adequately rebut it or reassure anybody.”

To that end, he recommended Gary Marcus's weekly updates on Substack, which I am now a proud non-paying subscriber of. What I immediately liked about Gary is that you don't have to be Einstein, or even a high school grad, to understand him. He writes for the average joe.

His current beef is that OpenAI, and their flagship product, ChatGPT, have been relentlessly promoting themselves as the future of humanity simply to spur investment in their company. And invest we have: Gary estimates the speculators have raised about $2 trillion (!), but he suspects AI will never deliver products that perform anywhere near what the Sam Altmans are promising. And when the markets figure out they bet the planet on a dog that won't hunt, all these ballsy tech bros are going to hit the U.S. government up for an industry-wide bailout that's going to make the sub-prime mortgage debacle look like a fender bender.

Which is all good and current and informative (not to mention alarming!), but I wanted a basic understanding of what exactly AI is, and what it can and cannot do, so I tried Gary’s 2019 book. The great news is, it’s also written conversationally for those of us outside the field. The bad news is it’s six years old, and you know by page 4 that the tech evolved faster than Gary imagined because he’s sure we won’t have self-driving cars for at least another decade. Which is pretty funny to read in Mid-Wilshire Los Angeles, a primo Waymo testing ground. At this point, I can’t look out the window without seeing a driverless robot taxi cruising past.

Get past that miscalculation though, and this remains a compact, easy-to-digest explanation of how machines learn, what they’re learning, and what they can do with that knowledge. Gary provides all kinds of examples, and when trying to explain what AI lacks, he often refers to the ideal of Rosie, the robot maid from the sixties cartoon THE JETSONS, to describe the kind of intelligence you actually need to do household chores, or prepare something in the kitchen. Another repeated example is a brief exchange from a Laura Ingalls Wilder novel that reveals just how inadequate machines are at understanding what humans simply instinctively know.

At the end of the day, Gary is not dismissive of concerns about unchecked development or the potential abuse and misuse of AI, but he comes at them from a clear understanding of which outcomes are more likely, as opposed to instigating panic over relatively improbable risks. And yes, I do feel prepared now to summarize some of the basics whenever AI next comes up in conversation.
Mehrsa
2,245 reviews · 3,580 followers
December 7, 2019
I've read a lot of books on AI and the future of tech and economics in general, and this is by far the most mature and sober. It's not a downer like some of the books that are all "everything that is capitalism is bad," but it's also not a breathless "AI and tech will save us and change everything." AI is really good at a few things--like playing Go and Jeopardy, finding facts, sorting, etc. But it's really bad at all the things that humans basically learn by the time they turn 5--like common sense, reading other people, changing course, and just basically walking and stuff too. But of course: humans are a product of millions of years of evolution, and if you want to think of our brain as an algorithm (as some scientists have), then we are just a super sophisticated one, and we barely understand how our own algorithm works. But the book is careful not to be a wet blanket. We should definitely push ahead on developing AI, but let's see the snake oil for what it is. In short, we will not have the Jetsons at any point soon and doctors are likely to keep their jobs, but hopefully our Roombas will stop bumping into things and just quitting and running out of batteries at some point (mine does this every night).
Joy D.
3,136 reviews · 330 followers
July 29, 2021
This book takes a look at the current state of development of Artificial Intelligence (AI). As one would expect, it is laid out in a logical manner. It traces the history of AI and what has (and has not) been achieved to date. The authors believe that AI is not as far advanced as many people believe, primarily due to a tendency for published articles and headlines to exaggerate accomplishments. A major premise is that AI needs to be trustworthy and safe, and currently falls short of this goal. Areas that need attention are language comprehension, situational awareness, and common sense.

“Statistics are no substitute for real-world understanding. The problem is not just that there is a random error here and there, it is that there is a fundamental mismatch between the kind of statistical analysis that suffices for translation and the cognitive model construction that would be required if systems were to actually comprehend what they are trying to read.”

“If we could give computers one gift that they don’t already have, it would be the gift of understanding language.”


The authors are intentionally trying to reach a wider audience than solely those already familiar with AI progress to date and its terminology. There may be a few terms that need to be looked up depending upon the reader’s background. Chapters include:
1 – The mind gap
2 – What’s at stake
3 – Deep learning and beyond
4 – If computers are so smart, how come they can’t read?
5 – Where’s Rosie?
6 – Insights from the human mind
7 – Common sense and the path to deep understanding
8 – Trust

The authors suggest that we need to move beyond Deep Learning (solving problems through bigger neural networks and larger data sets) and toward building cognitive models. Of course, it is not easy to develop a more robust AI. This book presents evidence that indicates a dramatic shift is needed, arguing that doing more of what we have done in the past is not going to achieve a breakthrough.

There are many examples provided. It may be too detailed for many “general interest” readers but will definitely appeal to techies or non-techies that want to understand AI’s shortcomings and potential. Leaders in the business world would benefit from reading this book.

Bottom line: we have made amazing progress but still have a long way to go.

“Trustworthy AI, grounded in reasoning, commonsense values, and sound engineering practice, will be transformational when it finally arrives, whether that is a decade or a century hence … And the best way to make progress toward that goal is to move beyond big data and deep learning alone, and toward a robust new form of AI — carefully engineered, and equipped from the factory with values, common sense, and a deep understanding of the world.”
Scott Wozniak
Author 7 books · 97 followers
December 14, 2020
This book had a masterful balance of possible growth and realistic limits. It got technical enough to be specific but not so much that it got dry. Best AI book I’ve read yet.
Reza Mahmoudi
24 reviews · 101 followers
Want to read
August 28, 2019
Yoshua Bengio, one of the winners of the 2018 Turing Award, says that if you want to win the next Turing Award, you should work on something other than deep learning.

Reading this book by Gary Marcus is a good place to start.
Nestor Rychtyckyj
171 reviews · 2 followers
November 12, 2019
This well-written and very accessible book by Gary Marcus and Ernest Davis should be required reading for anybody who is overwhelmed by the current boom (and hype) in Artificial Intelligence (AI). For most people, the term AI refers exclusively to Deep Learning, ignoring all of the other significant work that is going on in the area. When every product from golf clubs to vacuum cleaners is now advertised as being “powered by AI”, perhaps it’s time to step back and take a look at where this technology is actually going to take us.

This is precisely the point behind this book: Marcus and Davis actually do know what is happening behind the scenes, and their scathing indictment of “AI by press release” should make us wonder how reliable these systems are and how far a strictly data-driven approach will actually take us toward real “general AI”. The first part of the book shows that there has been tremendous progress from applying Deep Learning to various problems, but this progress is generally limited to narrow problem domains, and this “AI” is actually pretty shallow and cannot be generalized. As we all already know, the hype over autonomous vehicles is slowly fading away with the realization that a truly reliable self-driving car that can function in a real-world environment is still years away. Other headline-grabbing stories of AI replacing radiologists or human translators are similarly debunked. Yes, Deep Learning is a tremendous achievement, but it should not be applied to every problem and will not lead to the type of AI that will truly be game changing.

In the second part of the book, Marcus and Davis do explain that data-driven approaches will never be able to solve problems that require reasoning, common sense and generalization. They then provide an excellent overview of how knowledge-driven approaches will need to be combined with Deep Learning to give us a chance to build robust and reliable AI systems that we can depend on. AI seems to bring out hyperbole and hype more than almost any other technology and makes people think that we are on the verge of Skynet. Unfortunately, this hype quickly leads to disappointment and criticism when outlandish claims are not fulfilled. Marcus and Davis have done a tremendous job in giving us an inside view of where AI really is and provide some good lessons of where AI should go to make meaningful progress in building intelligent machines.
Darnell
1,441 reviews
November 13, 2019
The first part of this book, covering the limits of current AI research, was quite solid. The number of examples might be a bit excessive, but it helped show me that I've fallen victim to the tendency to make assumptions about rates of progress. The book was worth it for this part.

Unfortunately, the book doesn't have much to offer in terms of solutions despite spending a large number of pages on it. There's no point in saying that AI would be better if we could solve extremely complex problems, especially after discussing how difficult much simpler problems have been.
Nicole
34 reviews · 3 followers
July 2, 2022
Rating: 3.5

This book took me way too long to read; the only thing that slightly redeemed the endless repetition was the cheeky jokes.

You'd think that a book praising the human mind for its ability to make inferences would shut the fuck up once in a while and let the reader infer -_-

Writing aside, I liked the information. It wasn't life-changing, but I especially liked the last chapter about applying good engineering practices to AI.

This book could've been an article except for all of the examples, which I definitely found useful. But even then, it could probably have been halved if all the repetition was taken out.

I'll probably go back and take notes on some of it, but the gist is that deep learning has no level of comprehension or cognition (because it's just a statistical model). So we should make a model that can do both.

Obviously there isn't a lot on how we can actually do that, because if they'd known how, they would have just done it. Which is a main frustration with the book, because obviously we should be creating better models, and that's why there are so many people from neuroscience working in the AI field... but alas.

Favorite chapters were 5, 7, and 8: the robotic assistant, the more technical chapter on formal logic and representations, and good engineering practices (respectively).

I also wish it would have covered more about why factory robots were a bad thing to strive toward. It seems the authors' main goal is to have robots with "real" or what they call "flexible" intelligence be home assistants like in the movies. Which, yeah, sure, I totally agree you definitely need better intelligence for that, but why is it bad that people are loving deep learning and it's WORKING for things like factory robots?
Maybe they address it and I just missed it.

Ooh, also another main question: let's say we make a computational representation of "common sense". Does everyone making AI have access to it? That seems like a lot for every program to have to reinvent.

Read time: 6.5 hours
David
11 reviews · 13 followers
January 29, 2021
Video recognition is too narrow and negation in language is too difficult for word embedding-based methods to understand, and the authors are mad at this. They expect more and conclude everyone is working in the wrong direction.

This is a very annoying book. The authors seem to be mad at machine learning researchers for not working on the problems they bring to the table, and for each of those problems they shallowly show how complex it is by referring to existing work (yes, by people working on said problems). There's also sneering.
Divyansh Gupta
12 reviews · 5 followers
August 17, 2023
Short, informational, memorable. A little repetitive, but it still makes a good case for all the things missing in mainstream AI at the moment (broadly, common sense and reasoning). Critical, without being dismissive of the current narrow-domain, deep-learning-based 'AI'. The authors emphasize the importance of merging reasoning with machine learning, but unfortunately don't give any examples of current attempts to do that.

Overall I would recommend it to anyone who would like to look beyond the current AI hype.
Harsha Kokel
57 reviews · 9 followers
August 10, 2020
This book is written for a lay audience that tends to get carried away by impressive headlines. It is a tale of caution: don't get overexcited by the current progress in AI; communicate the research at its true scale rather than making an exorbitant story out of it. This is important. Not only news articles but even research paper titles have trended toward bold statements that prove very little. So this book is a great reminder to call a spade a spade.

However, I think the tone of the book is a little snide. Even though the authors mention multiple times that they do not want to rubbish current research but rather to reevaluate future research, the book contains very few success stories of AI and a lot of media-hyped achievements that are later shown to be apocryphal. The authors could have demonstrated their pride in AI by bringing forward more of the exemplary work.

Despite the tone, I think the authors make a good point about the need for multidisciplinary research. AI researchers have to learn from other disciplines; progress cannot happen on an island. The need for cognition and common sense is important and well acknowledged. How to achieve it is a question left for the future. Perhaps this book is also for the agencies that fund AI research. What should be the next area of focus? Where should the academic community invest their time? Definitely not in end-to-end learning.

Quotes:

“Ultimately what has happened is that people have gotten enormously excited about a particular set of algorithms that are terrifically useful, but that remain a very long way from genuine intelligence—as if the discovery of a power screwdriver suddenly made interstellar travel possible. Nothing could be further from the truth. We need the screwdriver, but we are going to need a lot more, too.”


“What the field really needs is a foundation of traditional computational operations, the kind of stuff that databases and classical AI are built out of: building a list (fast food restaurants in a certain neighborhood) and then excluding elements that belong on another list (the list of various McDonald’s franchises).”


“The reason you can’t count on deep learning to do inference and abstract reasoning is that it’s not geared toward representing precise factual knowledge in the first place. Once your facts are fuzzy, it’s really hard to get the reasoning right”

“it is clear that humans use different kinds of cognition for different kinds of problems... the mind is not one thing, but many. ... The brain is a highly structured device, and a large part of our mental prowess comes from using the right neural tools at the right time. We can expect that true artificial intelligences will likely also be highly structured, with much of their power coming from the capacity to leverage that structure in the right ways at the right time, for a given cognitive challenge.”


“AI researchers must draw not only on the many contributions of computer science, often forgotten in today’s enthusiasm for big data, but also on a wide range of other disciplines, too, from psychology to linguistics to neuroscience. The history and discoveries of these fields—the cognitive sciences—can tell us a lot about how biological creatures approach the complex challenges of intelligence: if artificial intelligence is to be anything like natural intelligence, we will need to learn how to build structured, hybrid systems that incorporate innate knowledge and abilities, that represent knowledge compositionally, and that keep track of enduring individuals, as people (and even small children) do.
Once AI can finally take advantage of these lessons from cognitive science, moving from a paradigm revolving around big data to a paradigm revolving around both big data and abstract causal knowledge, we will finally be in a position to tackle one of the hardest challenges of all: the trick of endowing machines with common sense.”
Lucille Nguyen
452 reviews · 13 followers
August 17, 2022
Derivative work that draws from others. Generally acceptable for a non-specialist audience, but contains assertions about cognitive science and psychology regarding human intelligence vs. artificial intelligence that may prove short-sighted and premature.

Nonetheless, the prescriptions for how to view AI and how to implement them are the strongest points of the book, a welcome change from the typical over-hype of computational systems.
Alex S.
60 reviews · 1 follower
July 8, 2021
On the tadpole problem and the quoll in artificial intelligence.

I had just finished another audiobook on the cognitive science of religion and decided the next one should be about something near and dear (and more down to earth). I picked this one.

The book is short (you can get through it in a day at 2x speed) and it lives up to expectations. In the first part (60% of the book), a psychologist and a mathematician retell amusing examples of machine learning systems going wrong.

- You promised us AI, but in practice it confuses toy turtles with rifles and Black people with gorillas!
- Why, we ask, did a respectable chatbot, after chatting with a herd of racists, not only fail to instill lofty moral ideals in them but become a racist itself?
- And yes, by the way, why are there still no robot servants in every household?

For the remaining 40% of the time, the authors offer advice on how to fix the situation.
1. Accept neural networks and big data for what they are and stop projecting your inflated expectations onto them.
2. Master modular systems, symbolic computation, hierarchical representations, causal modeling, semantic networks, and formal logic.
3. Draw inspiration from the wisdom of the ancestors: developmental biology, psychology, linguistics, and neuroscience.
4. Meditate on the Critique of Pure Reason and Ecclesiastes.
5. If you have done everything right, by the final chapter a little song will start playing in your head: "Worries are forgotten, the rush has stopped, the robots are toiling away, and the human is happy!"
And nothing would have disturbed this harmony, if not for the Russian translation.

On the publisher's website, the hyperlink on the translator's name leads to an angry red banner saying "Element not found!" And that is clearly no accident (as will be convincingly shown below).

At every step the translator invents their own terms instead of using the existing ones. How hard can it be to look up the term on English Wikipedia and then switch to the Russian version to see what it is called in your native language? Came, saw, conquered. But no!

Here is a list of terms from the translation (rendered literally); try to guess what was in the original:
"folding", "controlled and uncontrolled learning", "re-equipment", "erasure", "function development", "the tadpole problem", "super-reconnaissance", "quoll", "oar", "silly wizard".

OK, if you work in ML, the first three are easy to guess:
• "folding": convolution (the standard Russian term is "свёртка").
• "controlled and uncontrolled learning": supervised/unsupervised learning ("обучение с учителем/без учителя").
• "re-equipment": overfitting ("переобучение", "оверфиттинг").

It gets funnier from here. The richness of the imagery gradually exceeds the pain threshold:
• "erasure": dropout ("дропаут"). Yes, loanwords are natural and convenient.
• "function development", "function design": feature engineering ("конструирование признаков").
• "the tadpole problem": the "long tail" problem (it is about the shape of a distribution).
• "super-reconnaissance": Superintelligence, the book by a certain philosopher who should have become a science fiction writer.
• "quoll": "tiger cat", a tabby cat, one of the classes in ImageNet.
• "oar": paddle, the tennis racket in the game Breakout. And for the rest of the text the translator is convinced the player is rowing.

Next to the tadpole's problems, my own problems pale, of course. But it gets better. Sometimes the translator inserts "explanations" that completely change the meaning.
• "silly wizard (think of the genies from Scheherazade's tales)": idiot savant. Right, because we all remember that AI is about magic, not about autistic savants.
• "the Word2Vec linguistic database": representations like Word2Vec. It is not a database.
• "converting voice to text and computer commands": voice recognition, which here actually means identifying someone by their voice.
• "you create an ever-expanding sense of guilt in the system": to do "blame assignment" in complex networks, that is, to figure out which parts are at fault. This is about the weight-update algorithm in a neural network.

"How did we get into this mess?" ask the authors.
"Where does all this gibberish come from?" wonder the translators.

Most of the failures have a simple explanation: feed the text into Google Translate and you get exactly this translation. The irony is that the authors spend the whole book belittling Google's service, while the translators, on the contrary, revere it as a model to imitate. But that explanation is too boring to be true.

So here is a more rational one, in the style of Yudkowsky:
An unfriendly AI from the future deliberately ruined a book that would prevent its emergence if you read it, and then left a message on the publisher's website so that no one would even try to track down the translators it had abducted.

Well, I have shared my delight with you, so now I can go off and keep creating an ever-expanding sense of guilt in neural networks.
Disnocen
20 reviews
August 8, 2020
Summary

The book is aimed at readers who are fascinated by the possibilities of AI but who are not technicians in the field. With this book, such a reader will no longer be uninformed when reading blog articles on the subject.
The main point is that today's AI is not robust (i.e., it cannot be predicted when and to what extent it will go wrong), and for safety and security reasons it cannot be used in all areas (such as driving a car).

Review

The authors analyze the state of the art of AI. They guide the reader through it by dividing the current issues into 8 chapters/points of view. Each point of view is supported by many examples. On the one hand, with these examples the authors want to suggest questions for the reader to ask when watching videos or reading newspapers or online blogs. On the other hand, because there are so many equivalent examples, the writing is verbose and makes you want to skip a few paragraphs. Probably just one example per theme, combined with links for further information (which are missing throughout the book), would have made reading more enjoyable.

Both authors are very qualified: many of their academic articles on these AI topics are available on the web and have received many citations.

The main message of the authors is that, as of today, artificial intelligence is not ready to be deployed in all areas. The main reason is that it is not robust*. In layman's terms, this means that you can't predict when an AI makes a mistake, why it makes a mistake, and how often it makes a mistake. Consequently, artificial intelligence can be used for non-threatening systems (e-commerce recommendations, voice-to-text systems) but not in cases where human life is in danger (doctors, self-driving cars). The fact that it is not robust also pushes the question of "Superintelligence" (Bostrom) much further into the future.

One wonders why the image we get from the mass media is completely different. The answer, in the words of the authors, is that videos and articles on the internet report successes in "carefully constructed" scenarios, but "true success lies in getting the details right". We need to see if the successes reported in the media are reproducible in "complicated and unpredictable" scenarios.


Reading this book, I get the impression that AI solution development follows the Pareto principle: you get 90% of the results in 10% of the time, but you need the remaining 90% of the time to get the last 10% of the results (the accuracy).


* the idea of a robust AI is so ingrained in Marcus that he calls the company he founded Robust.AI
Kyle
421 reviews
May 3, 2020
This is a nice, fairly short, introduction to the current limits to deep learning and AI. The authors point out how to watch for hype, explain where we actually are currently, and give suggestions on how we should approach making general AIs rather than the narrow AIs we currently have.

As somewhat of a skeptic when it comes to AI as it is now (I wouldn't trust a self-driving car right now), it is nice to see a comprehensive accounting for the problems AI now has while still acknowledging the amazing advancements made in the area. The problem does seem to be that common sense is not easy to program or learn (for machines) with our current methods. I also like that the authors focus us on practical AI problems rather than the theoretical ones of superintelligences that are very likely far in the future.

While I found their discussions of a different approach interesting on how to get towards giving AI common sense, the suggestions still seem rather abstract to me. It's not clear to me how exactly one should go about doing it with computer programming after reading the book. It seems like coming up with a good way of properly conceptualizing and representing common sense is the problem, so I can't really fault them for that.

If you'd like to have a very readable introduction to AI and what to look out for, then I'd strongly recommend the book. It is skeptical without being too negative, also giving praise where it is due.
Yunke Xiang
17 reviews · 8 followers
January 6, 2020
This book argues that we need a paradigm change in current AI development. Instead of building machines that are primarily fueled by big data and can handle only specific tasks, we should have a bolder vision and design machines that actually understand the world (have common sense and are capable of reasoning).

The book offers a lot of examples of where current AI is long on promise but short on delivery. I enjoyed reading it because these are all up-to-date examples from big developments (e.g., criticism of IBM Watson from the oncologists who actually used it, the most recent Tesla car accident).

I do feel that the book starts to make some empty and vague promises of its own when the authors lay out their ideas for the "better" path for AI, which needs a hybrid structure that incorporates innate knowledge and abilities and represents knowledge compositionally. They also suggest we need to inject common knowledge and common sense into AI and enable it to use judgment when it runs into extreme situations.

As much as I hope AI will get there, I do feel these are more like "nice wishes". It is good to write them out like this as suggestions for real AI researchers, but it is not so nice to complain while other people are doing the real work.

That said, I still enjoyed reading this book, and a lot of the descriptions of algorithms are accurate and informative.

Nice read!
Jonathan Crabb
Author 1 book · 13 followers
November 29, 2020
One of the best technology books that I have read in the past several years. Books related to technology tend to become out of date very quickly, so I was happy to have picked up this recent work. The book does a very good job of bringing the reader up to speed on the current state of AI and then explaining the nature of the current advances, both in their promise and in their severe limitations. For the most part, this is done in very accessible language that most readers will be able to understand, which is no small feat given the topic. This book is a great read for anyone wanting to understand more about AI in general, and especially about the limitations of current AI trends like deep learning networks and machine learning. This is more of an introductory book than one meant for AI practitioners, and yet I would be interested to see a data scientist's response.

Great read that I will reference back to.
228 reviews · 2 followers
July 17, 2023
Bought this before the ChatGPT explosion but didn't start it until after. The book is extremely insightful, and I learned a lot about the fundamental limitations of deep learning neural net technology.
Gary Marcus has spent the first half of 2023 devoting himself to an immense public service: shaping the incredibly necessary skeptical debate about the limits and dangers of the current AI hype cycle and of deployment without guardrails.
I found that following Gary Marcus's Substack blog and Twitter feed provided an even better education than the book from 2019.

Fun factoid on the side: I only realized late that Gary is also the author of the insightful and entertaining book “Guitar Zero”, which I read last year.
Filip Ilievski
Author 3 books · 2 followers
May 24, 2020
Brilliant storytelling and a balanced view of today's AI. As an AI researcher, I found this book very suitable for me, though I expect it to be easy for laymen to follow too. I especially enjoyed the many examples throughout the book. The writing could be more compact, but I can live with that.
Barry
600 reviews
September 2, 2020
Naturally I broadly agree with the call for symbolic knowledge representation, which allows mechanical reasoning, to be brought (back) to bear in combination with deep learning, which is inherently limited to a statistical approximation of intelligence.
Jurijs
18 reviews · 1 follower
January 31, 2022
The authors continue to debunk the hype around ML and AI, arguing that the approach to AI must be changed because general AI cannot be reached with current methods. One cannot reach the moon by climbing a slightly taller tree.

The book is not technical at all, good for novices.
Jesus
284 reviews · 47 followers
February 10, 2023
A must-read for understanding the challenges of creating artificial intelligence that is reliable in open-world situations. A good counterweight to the current hype around language models.
Vincent
12 reviews
October 22, 2022
Gives a fairly balanced view of the current state of AI.
Dan
321 reviews · 3 followers
January 4, 2022
An excellent summary of AI, intelligence, and the considerations needed to safely harness their power.
I enjoyed the authors' logical flow in describing intelligence and what it means to solve a problem without creating one. There is no unnecessary fantasy or science fiction.

Their problem definition is relevant to us today rather than being remote or in the realm of fiction.
Our current systems have nothing remotely like common sense, yet we increasingly rely on them. The real risk is not superintelligence, it is idiots savants with power, such as autonomous weapons that could target people, with no values to constrain them, or AI-driven newsfeeds that, lacking superintelligence, prioritize short-term sales without evaluating their impact on long-term values.
10 reviews
February 29, 2020
Provides great insight into the state of AI, how far it has come and how far it still has to go to attain the current levels of hype.
Ricardo Acuña
137 reviews · 17 followers
November 20, 2019
Throughout history there are cycles that oscillate between the extremes of two dialectically opposed positions, resulting in a new stage in the historical development of contraries. REBOOTING AI analyzes the current AI hype, and especially the hype around Deep Learning. AI has reached the point where it accounts for a good part of startup investment, technological development, new products, and even politics. In this sense, REBOOTING AI analyzes the current AI hype while emphasizing that AI is essentially a set of statistical algorithms that is still far from real, strong intelligence.

According to G. Marcus, the rhetoric in publications, announcements of new products, developments, and research has a messianic tinge. The problem is that the industry exaggerates the announcements, capabilities, functionality, and possibilities of AI. The truth is that current AI has a very narrow and limited scope. The tasks AI can do are very specific and confined to a delimited domain. Present-day AI is a kind of digital idiot savant, very capable at pattern detection but with zero understanding. AI cannot deal with a real world that is open and not limited to specific contexts.

The book argues extensively, and with many examples, that Deep Learning is not the long-term panacea for AI. Deep Learning has many limitations, and it is not foreseeable that it will be the solution for achieving strong AI. Current AI can only work with large amounts of data to learn from and statistical algorithms to identify patterns. This constraint is becoming increasingly evident. G. Marcus proposes that we need cognitive architectures, drawing on the concepts and research of classical AI, cognitive psychology, and neuroscience.

Throughout the book, G. Marcus details AI's difficulties with linguistics and natural language understanding. The examples are profuse and sometimes repetitive; just one would be enough to capture the idea. Although the book is written for a general readership, I consider some sections a bit hard and repetitive, explaining the cognitive processes and semantic analysis of texts that AI would require.

G. Marcus's summary and proposal for the current limitations of AI is that AI requires complex computational cognitive models, not just neural networks doing pattern detection. Although G. Marcus refers to several books and publications on the subject, it seems to me that it would have been good to discuss research and advances in computational psychology (for example, The Cambridge Handbook of Computational Psychology). G. Marcus says that we need a new generation of AI researchers who know well and appreciate classical AI, machine learning, and computer science more broadly, and who take advantage of AI's historical knowledge base.

AI must evolve and reboot, going from merely recognizing patterns without understanding to understanding what it perceives, having common sense, and dealing with causality. AI is, in general, on the wrong path, with intelligence limited to narrow tasks, learned from big data and without deep understanding. G. Marcus's proposal is to achieve an AI that has a) common sense, b) cognitive models, and c) reasoning.

However, given AI's current limitations, it is worth considering that AI is increasingly playing an important role in our daily lives, in the social, political, industrial, health, and commercial realms. Undoubtedly, AI is deeply transforming how we purchase, decide, socialize, and care for our health.

I think REBOOTING AI is a good book that provides a critical review of the current development of AI. It offers a contrasting view to AI's current hype.
Eric
33 reviews · 5 followers
October 15, 2020
(Spoiler free part)

The title is more telling than I first thought. The book is really about rebooting AI efforts, which implies reconsidering 60 years of AI and correcting the arguably poor direction of the Deep Learning-focused field/industry today. The authors do a very good job, going all the way back to the beginning of AI, presenting compelling arguments from their areas of expertise, and venturing into other key areas. The whole is, to me, restricted and biased, yet solid and constructive. Restriction and bias sound negative (they also imply focused and decision-driven), but they really indicate that the whole point is about rebooting: not fantasizing, but gathering what we know and resuming our efforts on a more promising track. Deep Learning is currently and practically promising for advertisers and the military, yet still flaky in drug discovery and CT scans despite huge investments in the current track.

I would recommend this book to readers who would like perspective on AI, unplugged from the mainstream and short-sighted newsfeeds. Some topics are technical, but the authors often have illustrative examples to pin down their ideas. The writing is also fluid and engaging.


(Spoilers ahead)

I would like to cite here a short passage that sets the tone of the book, and its core point:

"Our biggest fear is not that machines will seek to obliterate us or turn us into paper clips; it's that our aspirations for AI will exceed our grasp. Our current systems have nothing remotely like common sense, yet we increasingly rely on them. The real risk is not superintelligence, it is idiots savants with power, such as autonomous weapons that could target people, with no values to constrain them, or AI-driven newsfeeds that, lacking superintelligence, prioritize short-term sales without evaluating their impact on long-term values." p198-199 in the hard copy.

I for one agree very much with this representative passage, for slightly different yet compatible reasons (my background differs from the authors).

A powerful aspect of the book is that it introduces almost nothing new, and instead identifies issues that require new and further work. It suggests considering recent advances in Deep Learning in the context of "good old-fashioned" AI (GOFAI), mashing up the two, and solving the remaining problems. To be blunt, nothing original here. The value of the book is to ground this suggestion in the present, with many well-thought-out short examples and detail from multi-disciplinary perspectives like Psychology, Engineering, and Biology. So it should be very good at clarifying challenges and ways ahead.

Two last comments touching on areas where I have more background: AI itself, and software verification.

A blunt comment above states there is "nothing new" in the suggestion. From the AI field's perspective, the book recognizes the strengths of GOFAI and DL and explains that they complement each other (e.g., GOFAI is general but brittle; DL is narrow and more robust within its narrow focus). The thing is that there are alternatives to mainstream GOFAI that are glossed over (e.g., the implications of seeing the mind as a "society of minds", a.k.a. a multi-agent system). The book's suggestion is very pragmatic in light of AI history, yet remains "mainstream" with respect to that history.

Software verification might seem like a surprising topic for the book. I was actually very pleased the authors dedicated several passages to this critical discipline. The explanation and conclusions are clear and correct. They do not go far enough, though. Verification specialists know the challenges and limits of current solutions for "traditional software/systems" (verification originally comes from the hardware world). The same specialists will be quick to point out that "AI software/systems" all lie far beyond these limits.

This is a good book for getting an overview of what AI means circa 2019 and for understanding that the current feverish industry efforts promise too much from Deep Learning. AI history shows we know better, and we clearly need a mindset reboot.