This is a manifesto for radical embodied cognition (REC; if you don't like acronyms, you're in for a treat), not really a fully argued intellectual defense of it. REC is the idea that 'basic minds' are contentless: they have intentionality but no intensionality; they don't represent the world, although their mental states are, in some sense, directed at it. Hutto and Myin (H&M) make it very clear that they don't think all minds are basic, and that especially linguistic stuff (they really don't get much more specific than that) probably requires representations, but only because it's shaped by social practices in some sense (again, very vague).
While I am sympathetic, in broad outline, to H&M's orientation, and appreciative of the tour of contemporary cogsci that their book constitutes as they explain why various research programs are doomed to fail, they did not convince me that REC is such a great thing. Here is why.
H&M's case for REC is all about the Hard Problem of Content: the nut no other research program in cogsci has managed to crack. The Hard Problem of Content (ch. 4) is that there is no naturalistic notion of content ‘out there’. The idea that there is materializes most clearly in Dretske’s information theory, and also in Russell’s idea that the world is made of propositions (not clearly distinguished from facts), so that the content of a thought is a fact, a part of the world itself (131-2; they quote Rowlands 2006 on this, and mention James’s criticism of Russell in his 1909). It is obvious that contentfulness can’t be reduced to mere covariance: we need a notion of function, i.e. of what the thing that covaries with something (the ‘indicator’) was selected for covarying with, i.e. what it is meant to indicate. But then again, something may have been selected for covarying with, hence indicating, Xs, without representing Xs as such: contentfulness, explained in terms of teleofunctionalism, yields intentionality (i.e. directedness) without intensionality, i.e. it doesn’t enable the consumer to distinguish between the distal causes that set off its indicators. The suspicion that evolution doesn’t yield contents has been widely echoed in the literature (e.g. Stich 1990, Godfrey-Smith 2006, Burge 2010: 301). The other worry with teleofunctionalism (Dretske) or with teleosemantics (Millikan) is that content is all in the eye of the beholder, i.e. that content only appears through use; yet we seem to need a notion of unexploited content (1) if content is ‘out there’ and (2) if we are to explain learning (this last argument was made by Cummins in several places, e.g. Cummins et al. 2006). To give an example of my own: I may have indicators that were selected for covarying with bears, yet represent them only as dangerous predators or as big deadly moving things. But if content is interpretation-dependent, it is not ‘out there’.
Why is that a problem for a naturalistic theory of content? I’m not sure. It is not the same thing to say that content is ‘out there’ (who says that?) and to say that information is out there and is then cooked into content. H&M quote Millikan as saying that “the content of a representation is determined, in a very important part, by the systems that interpret it” (2005: 100). And they write:
“Taking an even stronger line on this would involve holding that the interpretative response does all the work. This would surrender any commitment to the idea that informational content exists independently of the activities of cognitive agents. What we call a Strengthened Millikan Maneuver is a promising strategy for other reasons too. It provides a clean way of avoiding the indeterminacy of content that plagues… accounts [like Dretske’s]…. Those who endorse the strengthened Millikan maneuver should speak of content-creating systems and not of content-consuming systems. That’s an important step in the right direction, but note, now, how much this account begins to look like the enactivist story we want to tell.” (75-6)
I don’t see the resemblance, though, since REC is against content, and teleosemantics explains how cognitive systems cook content out of information… H&M don’t seem to have any substantial criticism of that way of understanding content (i.e. not as naively located ‘out there’, but as interpretation-dependent). Actually, they do have one: once we grant that content is created by interpretation, we still have the indeterminacy problem. That’s a Dennettian argument, but what it shows is that we don’t know for sure what the content is, not that there is no content. Here, I have to say that Fodor’s repeated warnings about the dangers of confusing epistemological and ontological arguments seem to be to the point.
Back to my example with bears: of course, bears are all those things they are represented as. Which brings up another problem: the contentfulness of perception as such, given that any of many different concepts might truthfully be brought to bear on a description of the experience. This is really an argument against conceptual content, or conceptualism. One of the assumptions of conceptualism, Gauker (2011: 60) says, is the Expressibility Assumption: “wherever there is conceptual content, there is the possibility of expressing that content in words—the words of a humanly possible language.”
“Suppose I am looking at a certain chair. It’s a Windsor chair. It has arms. It is made of wood. So the candidates for the sentences that might express the content of my perception include predicates of a wide variety of criss-crossing levels of generality, such as: That’s a chair! That’s a Windsor chair! That’s a wooden armchair! That’s a wooden piece of furniture!” (Gauker 2011: 62).
Of course, again, the chair is all of these things: each of these descriptions would be true of the perceptual experience. Gauker argues, on this basis, for substituting accuracy-conditional for truth-conditional content, the idea being that content needs to be answerable to some kind of normative constraint, but that constraint need not be truth (see also Crane 2009). But it is not clear why the fact that any of many descriptions of varying levels of generality might apply to a given experience goes against the Expressibility Assumption: the Expressibility Assumption is not that the content of an experience should be uniquely expressible. Neither does the fact that the experience might be truly subsumed under many descriptions entail that the experience, or its content, is vague. It only appears so when expressed, because language presents as different (but equally true) descriptions what is perceived as the same, or as one. However, the descriptions are not really different in the important, truth-conditional sense, since they are all true of what is perceived. And that is just the point, apparently: the experience, unique as it is, is nevertheless not uniquely expressible: “There is no conceptual content of perception to express.” (Gauker 2011: 64) The experience is not carved up at linguistic, propositional, or conceptual joints; or, as H&M put it, the content of perception is not the same kind of content (on this, read Heck 2007) as that of linguistic utterances, beliefs, and judgements. H&M further comment on Gauker: “Thus, for [Gauker], the fact that it is possible to apply various concepts to perceptual experience fails to capture what is essential to the content of experience. That concepts can be brought to bear on the context [sic] of experience is ‘entirely incidental’ (p. 150).” (100) A few pages later, they also write:
“Nowadays it is widely agreed that nonconceptual content lacks the intensionality needed for there to be determinate truth conditions, and that only conceptual content has intensionality. If that is so, it rules out the possibility that nonconceptual content might be truth conditional. If perceptual content is nonconceptual, then it is not propositional. / Of course, it does not follow that perceiving is contentless if not all content need be truth conditional. (See Gunther 2003, p. 5-6.) However, if perceiving is to have content then it must have conditions of satisfaction of some kind. This is the most general and the most minimal requirement on the existence of content.” (102)
H&M suggest the strategy of saying that the content may be determined by the needs of the consumer (80), which doesn’t solve the problem (the consumer may need to avoid dangerous predators, not bears specifically, so the need to encode bear-representations doesn’t arise). But there is another possible strategy: say that the content is determined by what is in the environment, so that if the dangerous predators in the environment are bears, then a dangerous-predator token represents bears. Still, this doesn’t alleviate the worry that the token doesn’t represent bears as such. So both strategies fail. H&M conclude that what teleosemantics can give us is intentionality without intensionality (i.e. without truth-conditional content). This ‘thin’ teleosemantic account is what they call teleosemiotics.
As regards the move from truth- to accuracy-conditions, H&M don’t think it avoids the problems raised for truth-conditional content: content is in the eye of the beholder, evolution doesn’t care about truth/accuracy, etc. (112).
The last chapters are about the Extended Mind Hypothesis (a very boring topic) and phenomenal properties (an even more boring one), and they don't have anything interesting to say about those. In a sense, my interest peaked at ch. 4, and then I felt the book getting vaguer and vaguer over its last third.
Also, for a book that's *all about* content, it would have been interesting to explain whether there is a difference between saying: this representation has content blah, and: this representation has a content that can be expressed as 'blah' (using a linguistic expression), say, because there is a match between the truth-conditions of 'blah' (give or take a few contextual restrictions) and the satisfaction-conditions of the representation. In short, what does it mean that we can (if indeed we can!) *express* content in such and such ways?
Finally, their criticisms of content, and of intentionality without intensionality, reminded me A LOT of Dennett's self-avowed anti-essentialism about meaning and content. This is the guy who wrote:
“But notice that I said that when we adopt the intentional stance we have to know at least roughly how the agent picks out the objects of concern. Failing to notice this is a major source of confusion. We typically don't need to know exactly what way the agent conceives of his task. The intentional stance can usually tolerate a lot of slack, and that's a blessing, since the task of expressing exactly how the agent conceives of his task is misconceived, as pointless an exercise as reading poems in a book through a microscope. If the agent under examination doesn't conceive of its circumstances with the aid of a language capable of making certain distinctions, the superb resolving power of our language can't be harnessed directly to the task of expressing the particular thoughts, or ways of thinking, or varieties of sensitivity, of that agent. (Indirectly, however, language can be used to describe those particularities in whatever detail the theoretical context demands.)” (1996: 41)
Dennett's main point in this chapter of Kinds of Minds was that you can describe more than you can express: indeed, I don't think anyone but him has ever commented as precisely on the difference between expression and description. He was saying: to hell with propositions, the world isn't made out of them. But because he was not a realist about intentionality, because he had read too much Anscombe, he also believed that description (in propositional terms taken with a pinch of salt) was all that was needed, i.e. that there was no lived experience or thinking that risked being mischaracterized in the enterprise. I'm not as pessimistic, or as dismissive, as Dennett about the possibility of finding ways, with (rather than in) language, to express, as accurately as possible, nonhuman thoughts or experiences, with attention to the limits and idiosyncrasies built into them.
Either way, H&M made me think of Dennett, and that made me think that there are really two main ways of doing phil of mind/cogsci: Block's and Dennett's; and, with all its shortcomings, I choose Dennett's every time.