This book is a further contribution to the series Cambridge Studies in Philosophy and Biology. It is an ambitious attempt to explain the relationship between intelligence and environmental complexity, and in so doing to link philosophy of mind to more general issues about the relations between organisms and environments, and to the general pattern of "externalist" explanations. This is a highly original philosophical project that will appeal to a broad swath of philosophers, especially those working in the philosophy of biology, philosophy of mind, and epistemology.
I am currently Distinguished Professor of Philosophy at the Graduate Center, CUNY (City University of New York), and Professor of History and Philosophy of Science (half-time) at the University of Sydney.
I grew up in Sydney, Australia. My undergraduate degree is from the University of Sydney, and I have a PhD in philosophy from UC San Diego. I taught at Stanford University between 1991 and 2003, and then combined a half-time post at the Australian National University and a visiting position at Harvard for a few years. I moved to Harvard full-time and was Professor there from 2006 to 2011, before coming to the CUNY Graduate Center. I took up a half-time position in the HPS program at the University of Sydney in 2015.
My main research interests are in the philosophy of biology and the philosophy of mind. I also work on pragmatism (especially John Dewey), general philosophy of science, and some parts of metaphysics and epistemology. I’ve written four books: Complexity and the Function of Mind in Nature (Cambridge, 1996), Theory and Reality: An Introduction to the Philosophy of Science (Chicago, 2003), Darwinian Populations and Natural Selection (Oxford, 2009), which won the 2010 Lakatos Award, and Philosophy of Biology, released in 2014 by Princeton.
Awesome on rereading! So many things to pick up on. Really really great embodiment of how philosophy should be done.
Some notes
Lewontin’s five arguments for strong constructivism:
1. “Organisms select their environments”, meaning they choose which pockets of the physical world will effectively be their environments.
2. “Organisms determine what is relevant”, meaning that what matters to this organism might not matter to that organism.
3. “Organisms alter the external world as they interact with it”, an example being the oxygen-rich atmosphere on Earth.
4. “Organisms transform the statistical structure of their environments”: size, longevity, mobility, as well as capacity for buffering and storage, affect how uncertain, variable, and so on, an environment is perceived to be.
5. “Organisms change the physical nature of signals that come to them from the external world”: the rate of vibration of certain molecules is transformed into the production of certain chemicals in the body.
PGS groups these arguments under two senses of “construction”, a causal and a non-causal sense: (3) is causal, while (2) is non-causal. Cases of statistical transformation, like the one involved in (4), are more complicated, but PGS is inclined to view them as non-causal. To a degree, the debate between strong constructivists like Lewontin and PGS turns on whether the same sort of process is involved in adapting to the environment and in adapting the environment to you, i.e. whether “adaptation” is reversible (see p. 146: “On the strongly constructivist view, construction can be achieved either by physically removing some feature, or by altering the organism's faculties and behavior so the feature is no longer relevant.”, my emphasis).
PGS wants to argue that there is a difference, for instance, between making a toxin inoffensive to you by inducing some internal changes to your physiology and making it inoffensive by removing the toxin. Sometimes it looks as though PGS wants to make this a matter of changing intrinsic vs. relational properties of things in the environment (esp. pp. 147-48, and 150-51 for the term “relational” and the comparison with social ontology).

In the case of relevance, PGS argues that the distinction is between making it the case that the thing is relevant and intervening in the world: the relevance or non-relevance of some thing leaves the world unchanged. It is in this sense, it seems, that the distinction between intrinsic and relational properties applies, which is also the distinction between world and environment: changing relevance profiles moves the world/environment boundary to one side or the other. PGS’s phrase “making it the case that” suggests an update in the inventory of truths represented without a change in the actual state of the world, i.e., in my vernacular, a recruitment of some (mind-independent) truths into the representational content of the adaptive organism. PGS concludes §5.5 with these words: “when no change is made to any intrinsic properties of external things this is not construction of the environment.” Sometimes, however, PGS seems to indicate something more akin to primary and secondary qualities (“second-order properties” like the rabbit-dependence of environmental rabbit-friendliness, which he likens to colors, p. 150).

Brandon (1990) distinguishes between three concepts of environment: the external environment (organism-independently defined, i.e. the world), the ecological environment, "which reflects features of the external environment that affect the organisms' contributions to population growth" (Brandon 1990: 82, quoted by PGS), and the selective environment, relative to which differences in reproductive output within populations are measured.
An environment that changes in a way that affects different populations’ reproductive outputs equally would not be selective. And Brandon further distinguishes between causal and constitutive, or synchronic and diachronic organism-environment relations respectively.
Strong-constructivist pushbacks: what are the mind-independent truths? What is the ‘world’? First, once relevance is accounted for, we can say that the environment is the set of ‘relevant’ external conditions (p. 153). But second, PGS wants to say both that complexity is an organism-independent property of environments, and that even irrelevant complexity is “still real” (p. 154).
In ch. 6, PGS argues that truth may be tied to language, and that the more basic concept is that of correspondence. But correspondence of what? According to teleosemantics, adrenalin flows are directed at the world in the right way to qualify as representations corresponding to the world, yet there is a sense in which they don’t represent anything, even though they were probably selected for enabling creatures to deal with fight-or-flight situations (pp. 177-78). In the case of Millikan, because anything can be a mapping so long as it is used in a way that affords success, the mapping cannot function like correspondence, i.e. it cannot be appealed to in an explanation of why something works (‘because it maps…’): the mapping is post hoc (pp. 186-87). In other words, correspondence and representation seem to result from success, and therefore cannot explain it (p. 192). The conclusion could be that explanations need to appeal to truth conditions (what adrenalin flows are directed at), but not to representations (since adrenalin flows are not representations of anything); or that goings-on can be directed at the world, can involve the world in some way, in the sense that they suppose, in some sense, that the world is in a certain way, without representing the world to be that way.
PGS’s idea is not that complexity breeds flexibility or cognition, but that it makes cognition necessary. It does not, however, make cognition sufficient: for cognition thus solicited to be able to solve problems, there must be stable correlations in the world to exploit for problem-solving. The final move is to say: maybe cognition can engineer such correlations, so as to make the problem-solving possible, or easier. As Stephens (1991) notes, what is needed for successful cognition is both stability and instability: stability in the links between inner/proximal states and outside/distal (“salient”, p. 220) conditions, instability in the outside conditions themselves. Also, learning (viz. from early experience, precisely) is favored mostly when (i) between-generation predictability is low, so that hard-wiring the response in the genes is not possible, and (ii) within-generation predictability is high, so that early experience can function as a good indication of what life is going to be like. See all the work by Caldji and colleagues on early stress. Early exposure to stress may also cause girls to hit puberty earlier: like shifting from a K-strategy to an r-strategy.
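Stephens’s two predictability conditions can be made concrete with a toy simulation. This is my own illustration, not a model from the book: the environment is redrawn at random each generation (low between-generation predictability) but stays fixed for a whole lifetime (high within-generation predictability), and a learner that spends one early trial sampling beats a hard-wired responder.

```python
import random

# Toy model (my construction, not PGS's or Stephens's): environment is 'A' or 'B',
# redrawn each generation but constant within a lifetime of n_trials episodes.
random.seed(0)

def lifetime_payoff(strategy, env, n_trials=10):
    """Score 1 per trial whose response matches the environment."""
    if strategy == "hardwired":
        # Genetically fixed response: always behave as if the environment is 'A'.
        return sum(1 for _ in range(n_trials) if env == "A")
    # Learner: spends the first trial sampling (payoff 0), then always matches.
    return n_trials - 1

def mean_payoff(strategy, n_generations=10_000):
    total = 0
    for _ in range(n_generations):
        env = random.choice("AB")  # fresh, unpredictable environment each generation
        total += lifetime_payoff(strategy, env)
    return total / n_generations
```

The hard-wired strategy averages about 5 of 10 (right only when the coin lands its way), while the learner reliably gets 9 of 10; if the environment instead varied within a lifetime, the early sample would be worthless and the learner’s advantage would vanish.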
In ch. 8, PGS argues against the idea that a specialized module will do better in its domain than a general mechanism applied to that same domain (the “Jack of all trades” principle). There is no evidence for that. It should be noted that flexibility can be based on simpler devices than non-flexible ones, depending on the (often environmental) constraints: sex change in fish is an example of such “functional complexity”, as PGS calls it, devoid of structural complexity, and therefore of additional cost. Maintaining stasis can be costly. P. 249: we can ask, of a belief, both the probability that it is true given its tokening, and the probability of its tokening given its truth. The two are related by Bayes’ theorem, which describes a symmetrical relation. “If your belief that p makes it more likely that p is true, then p being true makes it more likely that you will believe p.” (p. 250). Although the symmetry exists, pursuing both probabilities (truth|belief) and (belief|truth) is often practically impossible (ibid.).
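The symmetry PGS cites on p. 250 can be checked numerically. This is my own toy example (the probabilities are assumptions, not from the book): believing p raises the probability that p is true exactly when p’s truth raises the probability of believing it.

```python
# T = "p is true", B = "the agent tokens the belief that p".
# Claim to verify: P(T|B) > P(T) if and only if P(B|T) > P(B).

def check_symmetry(p_t, p_b_given_t, p_b_given_not_t):
    """Return P(T|B) and P(B), asserting the two 'lifts' point the same way."""
    # Total probability of tokening the belief.
    p_b = p_b_given_t * p_t + p_b_given_not_t * (1 - p_t)
    # Bayes' theorem.
    p_t_given_b = p_b_given_t * p_t / p_b
    lift_truth = p_t_given_b - p_t    # how much believing raises P(truth)
    lift_belief = p_b_given_t - p_b   # how much truth raises P(belief)
    assert (lift_truth > 0) == (lift_belief > 0)
    return p_t_given_b, p_b

# Illustrative numbers: p is true 30% of the time, and the agent is a
# decent but imperfect detector of p's truth.
posterior, p_b = check_symmetry(p_t=0.3, p_b_given_t=0.8, p_b_given_not_t=0.2)
```

With these numbers P(B) = 0.38 and P(T|B) ≈ 0.63, so both lifts are positive, as the symmetry predicts; this also illustrates why pursuing both conditionals at once is hard in practice, since each requires estimating the other’s ingredients.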
The landscape of possibilities therefore appears wider than it first seemed to me. You can have a strategy of holding few beliefs, all of them true, and watching for or creating circumstances in which to apply them; or you can hold many beliefs, all or most of them true, with little to no need to watch for circumstances. But there is another parameter: the more beliefs you have, the less certainty attaches to each, so the second strategy requires “gambling with the possibility of error”. I have a signal. What is the threshold above which I respond by tokening the belief that what the signal indicates is real? It depends. And what about the belief that what the signal indicates might be real? We sometimes act on such beliefs alone.
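The threshold question can be given one standard formal answer, sketched here under my own assumptions (the function names, numbers, and the expected-cost rule are mine, not the book’s): token the belief when the posterior probability that the signal’s object is real exceeds a threshold set by the relative costs of false alarms and misses.

```python
# A hedged sketch of belief-tokening as threshold-crossing.
# "real" = what the signal indicates is actually there.

def posterior_real(p_real, p_sig_given_real, p_sig_given_fake):
    """P(real | signal) by Bayes' theorem, from base rate and likelihoods."""
    p_sig = p_sig_given_real * p_real + p_sig_given_fake * (1 - p_real)
    return p_sig_given_real * p_real / p_sig

def should_token_belief(p_real, p_sig_given_real, p_sig_given_fake,
                        cost_false_alarm, cost_miss):
    """Minimize expected cost: respond iff the posterior clears the payoff-set threshold."""
    threshold = cost_false_alarm / (cost_false_alarm + cost_miss)
    return posterior_real(p_real, p_sig_given_real, p_sig_given_fake) > threshold

# A predator-like signal: rare (5% base rate), but a miss is 50x costlier
# than a false alarm, so even a weak posterior warrants responding.
acts = should_token_belief(0.05, 0.9, 0.1, cost_false_alarm=1, cost_miss=50)
```

On this picture, “it depends” gets cashed out: the threshold is not fixed by the evidence alone but by the payoffs. And acting on the belief that the thing *might* be real corresponds to a very low threshold, where even a modest posterior suffices to trigger the response.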