It certainly seemed to me a very solid primer on the subject, of which I was in dire need, so I feel I now have at least a modicum of a basis to think about this, and some foundation for further reading. Some of the essay was quite illuminating, but on the essentials I'm still mystified, probably even more so. Questions of intelligence... But then, “Trying to find the “core” or “essence” of intelligence is probably a fool’s errand, much like finding the essence of funkiness, beauty, or the 1980s.” Ok...
« I would like to reflect on what it means that the terms intelligence, artificial intelligence, and artificial general intelligence are so stubbornly hard to define. Does this mean that we just haven’t understood what intelligence or artificial intelligence is yet? But that assumes we are studying a natural kind when we study intelligence, that is, a grouping that reflects something real in nature. I don’t think that is the case. I think intelligence is a word we made up to represent a somewhat arbitrary set of capabilities that humans tend to possess. Trying to find the “core” or “essence” of intelligence is probably a fool’s errand, much like finding the essence of funkiness, beauty, or the 1980s. There is no secret sauce, fundamental principle, or “one simple trick” to intelligence. Things don’t get any more definite if we place the word artificial before intelligence. Artificial intelligence was just the name of a seminar in 1956 that somehow also became the name of a sprawling research field and the various technologies that emerged from it. That the various technologies discussed in this book are all referred to as AI is mostly a historical accident and/or marketing. We could have used a different term or several different terms. In fact, some of the early neural network research was done under the moniker “cybernetics.” »
It's complicated.
The book is extraordinarily relevant and current, even though it was written more than a year ago, in a field that's exploding with revolutionary advances every day. Let me prove this with a couple of random instances from just this week.
Yesterday, on X, Andy Boreham recounted having asked Grok and DeepSeek who he is, and DeepSeek hallucinated (“I have to admit, I really am NOT an expert on these AI models and how they work. Because of that, I have no idea how DeepSeek got it so wrong. I wouldn't be surprised if the AI said "I don't know who that is" or something, but it literally made things up. Any ideas? (...) In case you don't know about my past, DeepSeek didn't just get my marital status / private life completely wrong, it chose Stuff, a real New Zealand media outlet, and said I work there. I have NEVER worked for Stuff.”). This book explains perfectly the technical fundamentals of such occurrences in LLM-based AI systems (and what those are; if you are now at a loss, read it).
This past week, the guy from OpenAI made a plea for some sort of sanctions on DeepSeek, a technology that's much more open than the ever more closed approach to AI development that “OpenAI” is pursuing. Considerations from the book:
“Our computer systems are relatively safe today because so many software developers and system administrators take security so seriously and share their knowledge freely and openly. Cybersecurity is studied in both academia and industry, and papers and source code are shared openly. You might think that this would give an advantage to attackers, but the opposite is actually true. The best way to test your defense strategy is to let other hackers or researchers try to attack it and share what they learned. And the best way to rapidly enhance security methods is to let hackers and researchers freely build on each other’s solutions. Besides, trying to stop bad actors from sharing attack strategies with each other would clearly be futile, so it makes sense for good actors to employ the same strategy.
“I think this is the future we want for AI methods and models as well. As much of AI research and development as possible should be conducted openly and transparently. This means not only that the model parameters and the code used for training should be open-source but also that researchers and developers openly publish their methods and findings. This approach will allow new models and methods to be tested by anyone in the world with the technical capacity, leading not only to more innovation but also to more safety. A society where as many people as possible have access to, understand, and can contribute to modern AI will be safer from whatever risks AI systems might bring.”
Another aspect of the essay that I did love was the author's attention to the relevance of science fiction in shaping the thinking about, and even the lines of progress of, this technology.
«
All these theories about intelligence and artificial intelligence can feel quite abstract. They don’t necessarily help us imagine what AGI would be like. In chapter 5, we explore some visions of what AGI could be like, with ample reference to science fiction. Science fiction stories have inspired generations of AI researchers and can help us not only think about but also differentiate between potential AI futures.
To understand the many different things people mean when they talk about AGI, let us try to draw out the ways in which different concepts of AGI differ. These can be seen as dimensions along which concepts of AGI can vary. We could use these dimensions to organize and compare different visions of AGI. Because no actual AGI exists, we cannot use examples from the real world to illustrate these ideas, so we will use the next best thing: examples from science fiction.
Disembodied mind: Iain M. Banks’s Culture novels are set in a utopian civilization where humanlike beings coexist with Minds, enormously intelligent machines. Minds don’t have bodies of their own but are largely responsible for keeping things running in society and thus control a large variety of mobile robots. Banks envisions Minds as having many humanlike traits, including empathy and a sense of humor; they have a great deal of intentionality. Basically, they are much like humans, only with a thousand or a million times greater memory, attention span, precision, and processing speed. A Mind has an enormous knowledge bank but must still acquire knowledge as we do, through communication and observation. A Mind is not omniscient. The novels in the Culture universe feature many examples of Minds not knowing what to do or how to do it, because they don’t have the requisite knowledge.
The closest we can get is probably the many first-contact stories written by various science fiction authors. As mentioned earlier, the experience of first contact with a truly alien intelligence is a central theme of Stanisław Lem’s work. It is often not clear in Lem’s stories whether we experience a biological or machine-based intelligence, but the being’s thinking is in some sense orthogonal to ours. China Miéville’s stories also feature examples of very alien and fundamentally unfathomable intelligences, for example, in Perdido Street Station or Embassytown. One might argue that to the extent an AI system is built by humans, it will not be truly alien, but as we will see in chapter 8, an open-ended learning system might learn from a self-created world that is substantially different from the one we inhabit and therefore learn skills that are very different from ours.
»
Fantastic.
Last thoughts.
“I have discussed AI as a series of technical inventions motivated by being able to solve problems that require intelligence (...) The alternative perspective is that the history of AI is a long deconstruction of the concept of intelligence. This proceeds by someone confidently exclaiming that something—say, planning, image creation, or translation—is a hallmark of real intelligence. The research community then finds a way to perform this feat using some new or old AI method. We then look at the original task and say that it didn’t really require intelligence after all, because it can be done with mere computation. Therefore we need to find another feat that really requires intelligence. And then we do this again and again, chipping away at the concept of intelligence. The job will be done when we can no longer find anything that we can claim requires intelligence and that we cannot get a computer to do as well as we do it. The concept of intelligence will then become pointless, except in the colloquial use of the term. At that point, we may or may not want to say that we have achieved AGI.”
And: “there is not really anything magical about AI. It’s just a set of useful technologies, some of which might change the world—in the way that clocks, stirrups, steam engines, and telephones all once did. This realization is a little painful for those of us who chose to become AI researchers because of that magic, those of us who wanted—and in some sense still want—to understand the mind by creating minds with computers. There are indeed plenty of interesting technologies to develop and phenomena to understand. There’s just not any great mystery to solve.” N.B.: this I do not believe.
Anyway, great little book.