
Artificial General Intelligence

How to make AI capable of general intelligence, and what such technology would mean for society.

Artificial intelligence surrounds us. More and more of the systems and services you interact with every day are based on AI technology. Although some very recent AI systems are generalists to a degree, most AI is narrowly specific; that is, it can do only a single thing, in a single context. For example, your spellchecker can’t do mathematics, and the world's best chess-playing program can’t play Tetris. Human intelligence is different: we can solve a variety of tasks, including ones we have never seen before. In Artificial General Intelligence, Julian Togelius explores technical approaches to developing more general artificial intelligence and asks what general AI would mean for human civilization.

Togelius starts by giving examples of narrow AI that achieve superhuman performance in some way; interestingly, AI systems that are superhuman in some sense have existed for more than half a century. He then discusses what it would mean to have general intelligence, drawing on definitions from psychology, ethology, and computer science. Next, he explores the two main families of technical approaches to developing more general artificial intelligence: foundation models trained through self-supervised learning, and open-ended learning in virtual environments. The final chapters of the book look at artificial general intelligence beyond the strictly technical aspects, asking whether such general AI would be conscious, whether it would pose a risk to humanity, and how it might alter society.

240 pages, Paperback

Published September 24, 2024


About the author

Julian Togelius

14 books · 22 followers
Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering at New York University, the director of the NYU Game Innovation Lab, and co-founder of the game AI startup modl.ai.

Ratings & Reviews

Community Reviews

5 stars
31 (26%)
4 stars
43 (36%)
3 stars
36 (30%)
2 stars
7 (5%)
1 star
2 (1%)
Displaying 1 - 16 of 16 reviews
Pedro L. Fragoso
852 reviews · 64 followers
March 15, 2025
It surely seemed to me to be a very solid primer on the subject, of which I was in dire need, so I feel that I now have at least a modicum of basis to think about this, and some foundation to read further. Some of the essay was pretty illuminating, but on the essentials I'm still mystified, probably even more so. Questions of intelligence... But then, “Trying to find the “core” or “essence” of intelligence is probably a fool’s errand, much like finding the essence of funkiness, beauty, or the 1980s.” Ok...

« I would like to reflect on what it means that the terms intelligence, artificial intelligence, and artificial general intelligence are so stubbornly hard to define. Does this mean that we just haven’t understood what intelligence or artificial intelligence is yet? But that assumes we are studying a natural kind when we study intelligence, that is, a grouping that reflects something real in nature. I don’t think that is the case. I think intelligence is a word we made up to represent a somewhat arbitrary set of capabilities that humans tend to possess. Trying to find the “core” or “essence” of intelligence is probably a fool’s errand, much like finding the essence of funkiness, beauty, or the 1980s. There is no secret sauce, fundamental principle, or “one simple trick” to intelligence. Things don’t get any more definite if we place the word artificial before intelligence. Artificial intelligence was just the name of a seminar in 1956 that somehow also became the name of a sprawling research field and the various technologies that emerged from it. That the various technologies discussed in this book are all referred to as AI is mostly a historical accident and/or marketing. We could have used a different term or several different terms. In fact, some of the early neural network research was done under the moniker “cybernetics.” »

It's complicated.

The book is extraordinarily relevant and current, even if written more than one year ago, in a field that's exploding with revolutionary advances every day. Let me prove this with random instances from this week.

Yesterday, on X, Andy Boreham expounded on having asked Grok and DeepSeek who he is, and DeepSeek hallucinated (“I have to admit, I really am NOT an expert on these AI models and how they work. Because of that, I have no idea how DeepSeek got it so wrong. I wouldn't be surprised if the AI said "I don't know who that is" or something, but it literally made things up. Any ideas? (...) In case you don't know about my past, DeepSeek didn't just get my marital status / private life completely wrong, it chose Stuff, a real New Zealand media outlet, and said I work there. I have NEVER worked for Stuff.”). This book explains perfectly the technical fundamentals of these occurrences in LLM-based AI systems (and what those systems are; if you are now at a loss, read it).

This past week, the guy from OpenAI made a plea for some sort of sanctions on DeepSeek, a technology that's much more open than the increasingly closed approach to AI development that “OpenAI” is pursuing. Considerations from the book:

“Our computer systems are relatively safe today because so many software developers and system administrators take security so seriously and share their knowledge freely and openly. Cybersecurity is studied in both academia and industry, and papers and source code are shared openly. You might think that this would give an advantage to attackers, but the opposite is actually true. The best way to test your defense strategy is to let other hackers or researchers try to attack it and share what they learned. And the best way to rapidly enhance security methods is to let hackers and researchers freely build on each other’s solutions. Besides, trying to stop bad actors from sharing attack strategies with each other would clearly be futile, so it makes sense for good actors to employ the same strategy.

“I think this is the future we want for AI methods and models as well. As much of AI research and development as possible should be conducted openly and transparently. This means not only that the model parameters and the code used for training should be open-source but also that researchers and developers openly publish their methods and findings. This approach will allow new models and methods to be tested by anyone in the world with the technical capacity, leading not only to more innovation but also to more safety. A society where as many people as possible have access to, understand, and can contribute to modern AI will be safer from whatever risks AI systems might bring.”

One other aspect of the essay that I did love was the author's consideration for the relevance of science-fiction in the development of the thought and even lines of progress of this technology.

«

All these theories about intelligence and artificial intelligence can feel quite abstract. They don’t necessarily help us imagine what AGI would be like. In chapter 5, we explore some visions of what AGI could be like, with ample reference to science fiction. Science fiction stories have inspired generations of AI researchers and can help us not only think about but also differentiate between potential AI futures.

To understand the many different things people mean when they talk about AGI, let us try to draw out the ways in which different concepts of AGI differ. These can be seen as dimensions along which concepts of AGI can vary. We could use these dimensions to organize and compare different visions of AGI. Because no actual AGI exists, we cannot use examples from the real world to illustrate these ideas, so we will use the next best thing: examples from science fiction.

Disembodied mind: Iain M. Banks’s Culture novels are set in a utopian civilization where humanlike beings coexist with Minds, enormously intelligent machines. Minds don’t have bodies of their own but are largely responsible for keeping things running in society and thus control a large variety of mobile robots. Banks envisions Minds as having many humanlike traits, including empathy and a sense of humor; they have a great deal of intentionality. Basically, they are much like humans, only with a thousand or a million times greater memory, attention span, precision, and processing speed. A Mind has an enormous knowledge bank but must still acquire knowledge as we do, through communication and observation. A Mind is not omniscient. The novels in the Culture universe feature many examples of Minds not knowing what to do or how to do it, because they don’t have the requisite knowledge.

The closest we can get is probably the many first-contact stories written by various science fiction authors. As mentioned earlier, the experience of first contact with a truly alien intelligence is a central theme of Stanisław Lem’s work. It is often not clear in Lem’s stories whether we experience a biological or machine-based intelligence, but the being’s thinking is in some sense orthogonal to ours. China Miéville’s stories also feature examples of very alien and fundamentally unfathomable intelligences, for example, in Perdido Street Station or Embassytown. One might argue that to the extent an AI system is built by humans, it will not be truly alien, but as we will see in chapter 8, an open-ended learning system might learn from a self-created world that is substantially different from the one we inhabit and therefore learn skills that are very different from ours.

»

Fantastic.

Last thoughts.

“I have discussed AI as a series of technical inventions motivated by being able to solve problems that require intelligence (...) The alternative perspective is that the history of AI is a long deconstruction of the concept of intelligence. This proceeds by someone confidently exclaiming that something—say, planning, image creation, or translation—is a hallmark of real intelligence. The research community then finds a way to perform this feat using some new or old AI method. We then look at the original task and say that it didn’t really require intelligence after all, because it can be done with mere computation. Therefore we need to find another feat that really requires intelligence. And then we do this again and again, chipping away at the concept of intelligence. The job will be done when we can no longer find anything that we can claim requires intelligence and that we cannot get a computer to do as well as we do it. The concept of intelligence will then become pointless, except in the colloquial use of the term. At that point, we may or may not want to say that we have achieved AGI.”

And: “there is not really anything magical about AI. It’s just a set of useful technologies, some of which might change the world—in the way that clocks, stirrups, steam engines, and telephones all once did. This realization is a little painful for those of us who chose to become AI researchers because of that magic, those of us who wanted—and in some sense still want—to understand the mind by creating minds with computers. There are indeed plenty of interesting technologies to develop and phenomena to understand. There’s just not any great mystery to solve.” N. B.: In this I do not believe.

Anyway, great little book.
Marks54
1,565 reviews · 1,216 followers
May 16, 2025
This is the second entry I have read in this MIT series of short books on current “essential” tech issues. This book is about “artificial general intelligence.” The book begins by going through a deconstruction of the basic terms, in particular “intelligence” and “artificial intelligence,” and “general” versus more specific intelligences. The result? AGI is a neat, popular, and even “trendy” topic area that is largely aspirational, and there is little related to AGI that is real or substantial. The author moves on to a history of AI efforts and even discusses what AGI might be. Given its lack of reality, Professor Togelius deftly moves through the potential of AGI as shown in movies. He provides some interesting chapters on the mechanics of AI models and concludes with a tour of potential directions.

This is a well-intentioned and well-written introduction that is thorough and honest, subject to the limits of a short monograph.

I think I will read more of these books.
230 reviews · 5 followers
June 6, 2025
Short book mostly on philosophical topics. I'd say essays collection.

Summary: both AI and AGI are hard.
Behrooz Parhami
Author · 10 books · 35 followers
January 12, 2025
I listened to the unabridged 4-hour audio version of this title (read by Steve Marvel, Ascent Audio, 2024).

We are surrounded by artificial intelligence. Knowingly or unknowingly, we use AI on a daily basis. Most AI is narrowly focused and has highly specific functionality, such as spell-checking or Go-playing. A spell-checking app cannot do math, and a Go-playing program cannot play Tetris. Human intelligence is somehow more general, because we can solve a variety of tasks, including those we have never encountered before. Developing artificial general intelligence (AGI) is the holy grail of today’s AI research.

According to NYU’s Professor Togelius, even human intelligence isn’t really general. We have developed an ad-hoc collection of analysis and decision-making skills through evolution. Therefore, even if we replicated human intelligence in a robot, it’s unclear that everyone would agree that the robot possessed AGI. We’ve had AI systems that are superhuman in some sense for more than half a century. To take the next step toward AGI, we need to develop a clear definition of what we are trying to achieve. We should look at definitions from the perspectives of psychology, ethology, and computer science.

Theoretically, an evolutionary method might succeed in developing AGI, but it would probably take far too long. There are two more promising families of technical approaches to developing AGI: foundation models trained through self-supervised learning, and open-ended learning in virtual environments. The designation "self-supervised" means that learning occurs without someone first going through massive amounts of data and labeling them. As for learning in virtual environments, the transition from a simulated world to the real world, where you may not be able to try things multiple times, is non-trivial.
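To make the "self-supervised" part concrete, here is a minimal Python sketch (my own illustration, not from the book) of how a language model's training pairs can be derived from raw, unlabeled text: each next token serves as the "label" for the tokens that precede it, so no human annotation is needed.

```python
def next_token_pairs(tokens):
    """Turn an unlabeled token sequence into (context, target) training pairs."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

# The targets ("cat", "sat", ...) come from the text itself, not from an annotator.
pairs = next_token_pairs(["the", "cat", "sat", "on", "the", "mat"])
```

Real foundation models do this at the scale of trillions of tokens, which is exactly why avoiding manual labeling matters.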

In Chapters 9-11, Togelius investigates the potential of artificial general intelligence beyond the strictly technical aspects. Among the questions discussed are whether such general AI would be conscious, whether it would pose a risk to humanity, and how it might alter society.
29 reviews
November 23, 2024
It's a really good history of AI and he certainly could be correct in his opinions. However, I think this section reveals either excessive skepticism or was written before the latest models:

Seeing the power and versatility of LLMs, some people claim that they are the first steps toward AGI. I don’t agree, and not only because we don’t have a good definition of AGI. LLMs have plenty of shortcomings: they are generally bad at reasoning and planning, can’t (on their own) really do math, and, perhaps most importantly, are extremely unreliable. Even the best LLMs hallucinate frequently in some situations. This makes them unsuitable on their own in many situations. However, it is possible to build systems around LLMs that amplify their power in various ways.


It seems that the talking heads in the industry (Altman, Zuckerberg, Amodei) believe that LLMs are directly leading to AGI in the near future.

Also, his reluctance to define AGI after spending so many words on it drags down the entire book for me:

In fact, I think it would be best if we all simply stopped talking about AGI. It is leading us astray from the more important questions, which tend to focus on particular applications of AI technology and their consequences for society.


I totally agree with his point, but he doesn't seriously explore those interesting questions (mostly vague predictions in chapter 11). A much more thought-provoking approach is Dario's definition and optimistic exploration of what a powerful AI will do to our society: https://darioamodei.com/machines-of-l...

I also liked this related interview a lot: https://www.youtube.com/watch?v=E7JsL...
Chris
115 reviews · 4 followers
November 8, 2024
3.5 Surprisingly, I didn't learn as much as I expected from this book. I am not sure if that is a reflection of my own knowledge, or a warning sign that the details were too vague and surface-level. Nonetheless, this book provided a helpful framework to understand why debates over AI systems are so contentious and why philosophers and ethicists struggle to even agree on common definitions. More details would have been nice, but in the author's defence, he is clearly writing for people like me with no coding background who might struggle to understand the actual mechanics of various systems. Overall, this was a good overview and a fun read.
Martin
7 reviews
May 14, 2025
Great book to learn concepts about AGI and its definitions (or rather the issues in defining it, as described in the book)! It starts with a short history of the development of artificial intelligence, focusing on different methodologies, and touches on consciousness and its impact on society. It also tackles the ambiguous definitions of intelligence and how to measure it, as well as what it would look like for intelligence to be "general".

TLDR of the book - No. AGI is impossible (at least in this lifetime)
Horia Calborean
439 reviews · 1 follower
October 18, 2024
I liked the book. Can you actually learn something? I don't know. If you don't know the subject, you will not understand much. If you do know the subject, you probably already know what the book contains. I enjoyed it because it probably confirms my beliefs (formed by the writings of Andrew Ng). Short version: we have no idea how to define AGI. Worrying about AGI is like worrying about traffic congestion on Mars.
125 reviews · 2 followers
December 28, 2024
This was a good overview, and it covered quite a bit of ground, with a reasonable but not overwhelming level of detail. But it was also very cautious with any sort of claim. The author was careful not to dive too deep into philosophical questions and kept things mostly technical, which, I think, is okay given the size of the publication.
Simon Mcleish
Author · 2 books · 141 followers
January 15, 2025
Should be interesting, and the MIT label suggests an expert in the field. But I found it unreadable because of the large number of factual errors even in the first few pages, such as dating the Principia Mathematica a century earlier than its actual date. DNF (unsurprisingly).
Alex
26 reviews · 11 followers
January 1, 2025
I'm a total layman, so much of it was helpful. However I found a lot of the back half sketchy (perhaps there's an inherent sketchiness there).
Angela
501 reviews · 5 followers
February 5, 2025
This book targets a wider audience, since it tries to explain AI and AGI with pop-culture references and basic examples.
