
The Promise of Artificial Intelligence: Reckoning and Judgment

An argument that—despite dramatic advances in the field—artificial intelligence is nowhere near developing systems that are genuinely intelligent.

In this provocative book, Brian Cantwell Smith argues that artificial intelligence is nowhere near developing systems that are genuinely intelligent. Second-wave AI, machine learning, even visions of third-wave AI: none will lead to human-level intelligence and judgment, which have been honed over millennia. Recent advances in AI may be of epochal significance, but human intelligence is of a different order than even the most powerful calculative ability enabled by new computational capacities. Smith calls this AI ability “reckoning,” and argues that it does not lead to full human judgment—dispassionate, deliberative thought grounded in ethical commitment and responsible action.

Taking judgment as the ultimate goal of intelligence, Smith examines the history of AI from its first-wave origins (“good old-fashioned AI,” or GOFAI) to such celebrated second-wave approaches as machine learning, paying particular attention to recent advances that have led to excitement, anxiety, and debate. He considers each AI technology's underlying assumptions, the conceptions of intelligence targeted at each stage, and the successes achieved so far. Smith unpacks the notion of intelligence itself—what sort humans have, and what sort AI aims at.

Smith worries that, impressed by AI's reckoning prowess, we will shift our expectations of human intelligence. What we should do, he argues, is learn to use AI for the reckoning tasks at which it excels while we strengthen our commitment to judgment, ethics, and the world.

178 pages, Kindle Edition

Published October 8, 2019


About the author

Brian Cantwell Smith

5 books, 7 followers

Ratings & Reviews


Community Reviews

5 stars: 13 (18%)
4 stars: 33 (46%)
3 stars: 15 (21%)
2 stars: 10 (14%)
1 star: 0 (0%)
Michael
264 reviews, 53 followers
May 29, 2020

This is an excellent philosophical treatment of AI. It cuts through the silly superintelligence debate and focuses on the real question of AI: what would it take for a system to be intelligent? Smith carefully constructs a theory of intelligence, rooted in his general theory of what the world is and how the mind relates to it. He argues that the key activity of the human mind is 'judgement', that the world is a 'plenum' of infinite richness and variety, and that 'intelligence' is the ability to judge what is what despite the insurmountable richness of reality.

Unlike more superficial thinkers like Nick Bostrom or Max Tegmark, Smith actually considers how AI systems are being constructed today, and considers whether they are capable of judgement. He faults the early AI systems for adopting a far too rigid and formal ontology, which could not account for the richness of the world. He faults modern deep learning systems for immersing themselves entirely in the flux of data, without the ability to hop out of the worldflow to consider alternative conceptualisations. These critiques are telling, and Smith is certainly right that without a revolution in AI engineering, true Artificial General Intelligence is currently nowhere in sight.

I would highly recommend this book to anyone interested in AI. It makes a fine philosophical counterpart to Melanie Mitchell's more practical Artificial Intelligence: A Guide for Thinking Humans. Smith's argument that contemporary AI systems lack engagement with the world is essentially equivalent to Mitchell's argument that contemporary AI systems haven't yet 'crashed the barrier of meaning'. Smith's points may or may not be particularly original, but his presentation of them is clear, concise, and thoughtful.

27 reviews, 2 followers
May 27, 2021
I've sat a lot with this book over the past month or so, trying to chew on some of the ideas. I read it instead of the slightly more technical and complicated On the Origin of Objects, but I think a lot of the ideas carry through here. Overall I found the ideas within interesting, thought-provoking, and some of them convincing, but I was a bit disappointed by the philosophical style. Many of the claims weren't well backed up (or argued very well); e.g., the author would sometimes claim things like: to have full judgement, an entity needs to be committed to and care about the world. These sorts of claims didn't seem obvious to me, and there was little in the way of argument for them; they were more or less stated as fact. That aside, I think Smith makes very valuable contributions, and at the very least reading this book has led me down some interesting avenues of thought in computation/cognitive science.

I take the main claims of this book to be as follows:
- 1) ontologies necessarily prioritize some information over others
- 2) there is no objective fact of the matter for the mesoscale world (i.e., the one with eggplants and trees -- the one we live in, not the atomic world)
- 3) therefore, preregistering an ontology is doomed
- 4) Also, currently machines can't differentiate between their representations of the world and the world itself (slightly different from the grounding problem)
- 5) In order to have true intelligence, it is necessary for an entity to know that it is living in a world, and that the world is separate from its representations of it.
- 6) Representations are not causally entangled with the thing they represent

There's a bit more here and there, but I think these are the main points (or maybe the ones I felt were the most interesting :p). The first three claims form a pretty strong argument (in my opinion) against symbolic AI. The first claim is saying that any representational scheme (i.e., ontology, i.e., map) necessarily leaves out information from the territory. So if you make a map of your local area you'll probably include things like where water is, how tall the hills are, etc., but you won't draw in where the flowers are, how gusty the wind is, how many ants are crawling around in your garden, etc. You don't even know all the stuff you could possibly include. This is a sort of crude example, but I think the general point stands: whenever you try to make a representation of something you necessarily leave out some information. If you didn't, you'd somehow have to represent the entirety of the world itself, which is infeasibly hard and maybe impossible.

The second claim is that there is no 'fact of the matter' about things at the scale at which humans interact. Say I'm quibbling with my friend about whether he's bald or not -- maybe we ought to count the number of hairs he has and make some judgement about it, but that's a subjective measure too. Maybe while we're quibbling one of my other friends looks really close and says that he does actually have hair, it's just extremely short since it was just buzzed, etc., etc. Most of the time the categories we interact with seem stable and discrete, but this sort of glosses over the amount of work we're putting in to make it seem that way to ourselves. The world is infinitely rich in detail -- whenever you zoom in there's more to be found. I'm saying all of this, again, at the human scale. I feel more confused about how this relates to, e.g., quantum mechanics, but I'm not getting into that can of worms right now, and I think it's not necessary for the argument anyway, because we are trying to model AI after the sorts of concepts we ordinarily deal with.

The third claim is that because of these two points -- both that ontologies necessarily leave out information and that there is no objective fact of the matter -- there is no ultimate 'true' ontology. We cannot prespecify a representational capacity for a machine that captures everything ahead of time. Machines need to be fluid and flexible if they are to approximate anything like what humans do when interacting with the world. Thus, the old methods of symbolic AI (i.e., GOFAI) are doomed. He goes on to say that the dramatic success of second-wave AI (e.g., machine learning) is a testament to this idea -- that ontological flexibility (which ML exhibits) is paramount to intelligence. He still doesn't think ML is genuinely intelligent (which I'll get to later), but he does think that the basic idea of ontological flexibility is right. I basically agree with these three claims, and that machine learning seems to capture something important about how humans represent the world. I still think that machine learning fails on this front a little bit (i.e., the data it relies on is sort of prespecified -- it has some ontological commitments baked into it).
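
To make that last point concrete, here is a minimal sketch (my own illustration, not the book's or the reviewer's; the class names and feature vector are invented) of how ontological commitments get baked in before any learning happens: a classifier is defined over a fixed label set, so whatever arrives at inference time is forced into one of its pre-registered categories, with no way to say "none of the above".

```python
# Illustrative sketch only (not from the book): the label set is an ontology
# fixed before training, so every input -- however novel -- is forced into one
# of these pre-registered categories.
import numpy as np

CLASSES = ["cat", "dog", "towel"]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(features, weights):
    """Return the registered category and the model's confidence in it."""
    probs = softmax(weights @ features)
    best = int(np.argmax(probs))
    return CLASSES[best], float(probs[best])

# Hypothetical weights and a feature vector for something outside the ontology
# (say, a bird): the model cannot widen its categories, only redistribute
# probability among the ones it was given.
rng = np.random.default_rng(0)
weights = rng.normal(size=(len(CLASSES), 4))
bird_features = np.array([0.9, 0.1, 0.4, 0.7])
print(classify(bird_features, weights))  # confidently names one of cat/dog/towel
```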

Claims 4 and 5 are the crucial claims. Smith thinks that current systems cannot differentiate between their representations of the world and the world itself, and that this capacity is necessary for genuine intelligence (as opposed to, say, 'reckoning', i.e., calculative ability). Claim 4 is not solved by grounding systems -- a system can be grounded (i.e., associate symbols with some part of the real world) but still take these representations at face value (i.e., assume that its representations are ground truth, instead of the world being ground truth). This seems right to me; at least, it seems plausible that this could happen in a system. Smith doesn't give a reason why current systems don't take the world as ground truth, although he points out that it's not at all obvious that they do. I can make up my own reasons why he might think this:

For one, current systems don't interact directly with the world -- they interact with our representations of it (e.g., images, vectorized language, etc.). The obvious counterexample to this is robotics -- I'm not sure what Smith would say about that, probably that robots are grounded but that they still take their representations at face value. My best story about why this would be the case is that I think they probably still lack a certain kind of flexibility -- when something happens that's really far out of distribution, I don't think they can handle it or even model it (I'm speaking from very little knowledge about robotics, mostly just guessing). E.g., if a robot is programmed to fold towels and a bird lands on a towel, my guess is it just sort of breaks down -- it doesn't update its categories to include 'bird', or reevaluate its worldview -- it just sort of doesn't fold the towel as well, or aborts the action sequence. (Again, I'm not sure about this exactly, but I do know that robots have trouble with, for instance, backgrounds -- a robot trained to fold towels against a white background won't do as well against a blue background. Many a Berkeley grad student has spent time taking these robots to different hotel rooms to train them to fold towels against different colored backgrounds. It's possible things have improved since then.) This (the fact that their capacities break down when the world changes in unexpected ways) suggests that the robots are working on the level of what they can represent, and not what the world actually is.

Edit: After talking to a friend about this, I now think this is plausible but irrelevant -- i.e., I think that there is a difference in kind between a system getting its data from 'the world' (i.e., the territory) and a system getting its data from a representation of the world (e.g., images). Representations always leave information out, so a system that only has access to a representation has no possibility of recovering new information that isn't already in the representation. But it's clearly a continuum of how much information a representation can include, and humans are also bottlenecked by their sensory organs and the information that's capable of being transmitted that way. So I think that although this is important, it's not a fundamental difference between machines and humans.
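
As a concrete illustration of the out-of-distribution fragility described above, here is a toy sketch (my own, purely illustrative; the towel/background scenario, features, and numbers are all invented): a model fit under one input distribution degrades when the inputs shift, because it only ever consults its learned registration of the training data, never the world itself.

```python
# Toy sketch (my illustration, not the book's or the reviewer's): a model fit
# on "white background" data degrades on a "blue background", because the
# correct label depends on the towel relative to the background while the
# model only ever sees raw features.
import numpy as np

rng = np.random.default_rng(42)

def make_data(n, background):
    """Simulate 'towel images' as 3-dim feature vectors on a given background."""
    x = rng.normal(loc=background, scale=1.0, size=(n, 3))
    relative = x - background
    y = (relative[:, 0] + 0.5 * relative[:, 1] > 0.0).astype(float)
    return x, y

def design(x):
    # Add an intercept column: a simple linear probability model for the toy.
    return np.column_stack([np.ones(len(x)), x])

def accuracy(w, x, y):
    preds = (design(x) @ w > 0.5).astype(float)
    return float((preds == y).mean())

# Fit on white-background data only (background = 0.0).
x_train, y_train = make_data(1000, background=0.0)
w, *_ = np.linalg.lstsq(design(x_train), y_train, rcond=None)

x_same, y_same = make_data(1000, background=0.0)    # same distribution
x_shift, y_shift = make_data(1000, background=3.0)  # shifted "blue background"

print("same-background accuracy:   ", accuracy(w, x_same, y_same))     # typically high
print("shifted-background accuracy:", accuracy(w, x_shift, y_shift))   # typically near chance
```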

I am writing the rest of this now, months later. The reasons why Smith thinks that taking the world (as opposed to representations) as ground truth is important to genuine intelligence are not entirely clear to me. It is in these sections that he talks a lot about 'commitment to the world' and the necessity of caring about the world, and that machines need a sense of self. None of these claims were very well argued or backed up. They were more or less just stated as fact. My steelman of his position is similar to the out-of-distribution scenario above: humans are remarkably good at dealing with novel tasks and domains -- perhaps this is because we know to expect that the world might surprise us in unexpected ways, and we are able to update on that. Current systems can't really handle out-of-distribution events -- self-driving cars just fail if something they haven't encountered is on the road. The ability to defer to the world seems related to the ability to update accordingly. But it also seems like a strange line to draw; certainly there are scenarios where humans would falter too -- we just rarely see those because we evolved to live in this world and ML hasn't, yet.
Finally, he talks a lot about this idea that representation isn't causal. I go back and forth between thinking this is incredibly interesting or complete nonsense, which probably means it's interesting :p Basically, the representation in one's head of, e.g., Stonehenge, is not causally entangled with Stonehenge itself. There is no signal, no physical mechanism in the world that connects them. He thinks that this enables us to do all kinds of thought that wouldn't be available to us if we could only interface with our direct environment. This claim I totally buy, and it is interesting and cool. But then he goes on to talk about how you can neither detect reference (aboutness) mechanistically (e.g., from outside the system) nor detect it from inside the system (e.g., brain scans)!

I definitely agree that there are a lot of things you can't tell about a system from the outside, e.g., aboutness, representational schemes, utility functions, what have you. There is always a many-to-one relationship between the behavior of a system and its internal representation. However, unless you're claiming something non-physical, the representation exists, physically, somewhere. It seems like the most obvious candidate for that is the system itself. But Smith says:

'The presence of judgment within a system is unlikely to be detectable at any level of implementation below the full system or personal level. It is unlikely ever to be realized by a structure or pattern of activity identifiable as such within the system's internal architectural configuration. Even less is it likely to have an effectively detectable neural or mechanical correlate (just as the presence or shape of true sentences within a logical system cannot be determined purely on the basis of syntactic shape). No fMRI of a person, that is, and no analogous description or diagram of the effective state of a synthetic system is ever likely to reveal whether the system has anything approaching reliable judgment.'

I'm very interested in this and curious if I missed something or if I am misinterpreting his stance. I would love to be able to detect representations and to talk about them more coherently.

Overall I am very sympathetic to Smith's claims, especially about ontology. They're important points, and not appreciated enough. However, I found some of his philosophical reasoning hard to follow/inconsistent/unclear/under-evidenced. I think the book suffers because of this, although it's overall very interesting and, to my lights, important. One quote that stuck with me is 'Distinctness flees, as realism increases.' I think that's a nice summary of his ontological points, and something that's really easy to miss in our everyday lives -- we think the world is nicely divided into the objects we interact with, but it is our minds doing that dividing; the world is not so straightforward.
Mark Moon
159 reviews, 129 followers
September 7, 2023
There are some good ideas here, especially about accountability and deference to the real world (and not just representations), and about the nature of semantic reference. These ideas aren't developed or defended in much detail in this very short book, though. The writing style leaves a lot to be desired: a huge fraction of the book is taken up by footnotes, which disrupts the reading process significantly.
50 reviews, 71 followers
December 6, 2019
Mildly interesting prescriptions on how we can get "reckoning systems" to do more "judgement" (the author simply defines judgement as the capability for "skin in the game" decision making and "reference to self as an object"). The author identifies the harshest failure of GOFAI as the ontological fallacy of assuming the world is made of discrete, distinct concepts, and yet ironically presumes that intelligence can be bucketed into "reckoning" and "judgement". As a fan of the deep learning, end-to-end learning club, I respectfully disagree and believe that there is no ontological distinction between the two. There is only "insufficiently good reckoning to survive". On that front, I do agree with Smith that we need agents that have "skin in the game"/self-referential outcomes in order for them to attach better semantic meaning to the world.
Aleks Kudic
Author of 8 books, 1 follower
January 23, 2020
Hard going and complicated. The brevity of the book didn’t help. Nevertheless, I learnt a lot.
629 reviews, 175 followers
December 7, 2019
"Human level intelligence," in AI systems, write Brian Cantwell Smith, "requires 'getting up out' of internal representations and being committed to the word as world, in all its unutterable richness." It demands "fundamentally new ontological and epistemological frameworks--new resources with which to ask such ultimate questions as who we are, what to stand for, how to live."

Cantwell Smith proposes a number of fundamental tools for proceeding -- the distinction between reckoning (calculative power) and judgment ("deliberative thought, grounded in ethical commitment and responsible action"). He insists that this is not the same as the distinction between "rationality" and "emotion" -- and indeed that ceding logos to rationality and emotion to pathos would be a catastrophic error.

Cantwell Smith's view is that the world in metaphysical fact is of unutterable richness, but that inevitably we "register" the things of the world into ontological categories (of which there is no finite number -- the value of any particular ontology must be assessed relative to the task it is assigned to perform); these ontologies are always reductive. The fundamental problem is that AI systems only know how to refer to the categories of their ontologies, whether those be ontologies created a priori by humans (with all the biases that this introduces) or ones they derive themselves via inscrutable machine learning mechanisms; they can never assess their own ontologies relative to the infinitely rich outside world. At present, that capacity for assessing the value of any registration scheme (and the ontology it produces) relative to the world is only available to humans. And although Cantwell Smith leaves open the possibility that synthetic intelligences may one day be able to do that, he shows clearly that first-wave GOFAI was never able to do this, and that the current second wave is also not going to be able to do so.

Humans are always situated bodily and in the world, aware (more or less consciously) that our registration schemes form necessary but incomplete models of the world in its infinite richness, including its "nonconceptual content." GOFAI failed to see this, relying on a reductive view of the world that believed that the "world comes chopped up into neat, ontologically discrete objects," failing to appreciate "the inadequacy of formal ontology both as a basis for intelligence and as a model for what the world is actually like." (28) In fact, Cantwell Smith argues, human intelligence emerges "against an unconscious background, against an ineffable horizon of tacit knowledge and sense-making." This is why AI systems need millions of images and vast compute in order to learn how to identify a dog in a photo, whereas even an inexperienced child encountering a dog for the first time would need only a few examples before he could consistently make the correct identification. This is because even a small human will note the manifold richness that defines dogginess -- motility, odor, size, color, emotionality, etc. -- such that even a few instances allow the child to form the category reliably (except perhaps for edge cases, like coyotes or wolves).

Ultimately there are no clear and distinct categories: "beneath the level of the objects and properties that the conceptual representations represent, the world itself is permeated by arbitrarily much more thickly integrative connective detail." (34)

In the end, all this raises existential questions: "We will soon need to learn how to live in productive communion with synthetic intelligent creatures of our own (and ultimately their) design." (xix). "How will we want -- how should we want -- to live with other forms of intelligence?" (4)
Allan Olley
299 reviews, 17 followers
November 4, 2019
This book is an attempt to consider some of the conceptual limits of current and historical approaches to Artificial Intelligence (AI) in the terms of academic philosophy and, to a lesser extent, cognitive science -- an attempt to outline some of the limits of the approaches to AI on offer and to explain some of the failures. The basic thesis of the book is that the human ability to problem-solve, understand the world, and otherwise act intelligently is tied up in our complicated set of commitments to the world, our treatment of the world as a world. It is not merely about our bare manipulation of abstract tokens (reckoning), but about deeper understanding and interaction (judgement), which involves moral and pragmatic attitudes about what we should care about and how we should interact with things.

The book deals with an obscure subject but is mostly pitched at a non-technical level. The complex ideas are mostly broad philosophical ones, like what it is to refer to a thing rather than merely to use its name or some other associated label. This should make it possible for a variety of readers to get something out of the book. However, it is light on technical details and specific examples; those looking for ideas about how particular implementations of problem solving, machine learning, or the like work will be disappointed.

The ideas are necessarily speculative and also of somewhat limited import: while Smith sees no prospect of current AI projects actually producing machines capable of feats of general intelligence, he does not claim that such machine intelligence is in principle impossible. Even if his ideas in this book are right, it would just mean that a successful project would in some way (intentional or not) instill judgement in the machine. His ideas are interesting, and while it seems like a genuinely intelligent machine might well need to have the characteristics he suggests, it is not clear that they would not fall out of a successful implementation of some basic underlying structure of intelligence.

In any case the book broaches an interesting topic with some original ideas that are worth considering.
Malik Alic
13 reviews
September 25, 2025
Overall, I’m disappointed in this work. The underlying constructivist assumption toward ontology is clearly visible and forces the author into necessary, but rather abstract, notions of what it means to be intelligent.

He describes the world as the “One”, made up of layers of abstraction, of which humans are able to decipher only a slice (Descartes’ dualism is waving hello). Furthermore - and this is what really drives me nuts - he references 4E cognition but disregards the role of the embodied being as a being in the world. All embodied creatures constitute their behavior in regard to their survival. Of course, there are circumstances where this does not apply, but Smith disregards this point entirely and focuses on the ontology of objects.

The subject-object relation and distinction is key in regard to AI, but in order for AI to be intelligent - or to have judgement, as he calls it - it would not only have to be a rational agent in the sense of Habermas but would also have to have a body as a backdrop.

While Smith disavows the hype around LLMs and says that there is a profound difference between human and computational intelligence, he leaves a back door open for a disembodied AGI. This is exacerbated by the fact that the necessity of Dasein for intelligence is noted, but what Dasein means in this context is not explored. This is a grave error and shows a chasm between theory and application.

At many points, it functions as a love letter to Haugeland and his ideas rather than as a standalone work of critical philosophy.

Some interesting ideas and a necessary critique of the current rise of utopian thinking about LLMs becoming an independent AGI as our eschatological saving grace, but very clear flaws are visible!
10 reviews, 3 followers
October 19, 2025
Excellent introduction to thinking carefully about a rigorous definition of intelligence and the current capabilities and limitations of AI systems, including all of the LLMs and associated models that have been released since the book came out.

My overly brief summary of the most essential argument:
- A "registration" is an object that emerges for a creature and constitutes their real ontology. Intelligence relies on creating, adhering to, and updating registrations of the world.
- To continually and accurately register the world is the defining feature of human intelligence and we can call this "judgement".
- Judgement relies on deference to the world as such. To 'that which' underlies any particular ontology.
- Current AI systems rely entirely on existing registrations. Even when they make new registrations, it is the result of registration input, *not* deference to the world.
- Within registration input, unbelievable computation, iteration, logical extrapolation, etc. can occur. We can call this "reckoning". We see this with LLMs, driverless cars, etc.
- But it is not possible for such a system to create registrations in response to the world or, importantly, to have responsibility in the way we understand it.
- The more we defer to such systems, the more we will lose our ability to defer to the world, which has brought us to where we are today.
Maksym Shcherban
76 reviews
November 21, 2023
This book does contain some interesting thoughts. But overall, it reads as pompous, pretentious, pseudo-intellectual trash. Also, I am no physicist, but this sentence in parentheses alone is enough to drop the rating by 1 or 2 stars:

Moreover, because even objects to which you may seem causally connected, must, in order to be objects, have a past and future, both of which are beyond effective reach (physics prohibits direct causal connection with either past or future)


Oi, genius, that is practically the opposite of what physics actually says about causality.
