There is abundant evidence that most people, often in spite of their conscious beliefs, values, and attitudes, have implicit biases. 'Implicit bias' is a term of art referring to evaluations of social groups that are largely outside conscious awareness or control. These evaluations are typically thought to involve associations between social groups and concepts or roles like 'violent,' 'lazy,' 'nurturing,' 'assertive,' 'scientist,' and so on. Such associations result at least in part from common stereotypes found in contemporary liberal societies about members of these groups.

Implicit Bias and Philosophy brings the work of leading philosophers and psychologists together to explore core areas of psychological research on implicit (or unconscious) bias, as well as the ramifications of implicit bias for core areas of philosophy. Volume 1, Metaphysics and Epistemology, comprises two sections: 'The Nature of Implicit Attitudes, Implicit Bias, and Stereotype Threat' and 'Skepticism, Social Knowledge, and Rationality.' The first section contains chapters examining the relationship between implicit attitudes and 'dual process' models of the mind; the role of affect in the formation and change of implicit associations; the unity (or disunity) of implicit attitudes; whether implicit biases are mental states at all; and whether performances on stereotype-relevant tasks are automatic and unconscious or intentional and strategic.
The second section contains chapters examining implicit bias and skepticism; the effects of implicit bias on scientific research; the accessibility of social stereotypes in epistemic environments; the effects of implicit bias on the self-perception of members of stigmatized social groups as rational agents; the role of gender stereotypes in philosophy; and the role of heuristics in biased reasoning. This volume can be read independently of, or in conjunction with, a second volume of essays, Volume 2, Moral Responsibility, Structural Injustice, and Ethics, which explores the themes of moral responsibility in implicit bias, structural injustice in society, and strategies for implicit attitude change.
I came to this book looking for an alternative to the concept of "belief" for explaining the phenomenon of our taking something to be part of reality, at a pre-reflective or automatic level. An issue with the concept of "belief" is that, in philosophy at least, it has evolved primarily under epistemological concerns; so it comes with the connotations that it needs to be grounded in evidence, that it is operated with at a personal level, and that it is context-independent and so can be deployed in inferential reasoning, among other features that are essential to the pursuit of knowledge. But it seems that much of what we take to be reality -- which governs our automatic emotions, behaviors, and thoughts -- does not necessarily have those features.
For example, my friend has a bad case of self-deception: he believes that P -- that he is happiest when most alone in the world -- but he also "knows" that not-P, that this isn't true and that he is lonely in isolating himself. The concept of belief doesn't help us much with explaining this phenomenon, and is in fact counterproductive; relying on it forces our thought into paradoxes. For example, it makes us ask: how could one simultaneously believe that P and that not-P? If we discard the concept of belief, we are freed to explore such phenomena and notice interesting features of them. Self-deception, for example, seems highly context-dependent: my friend's state of being can be modeled by the belief that P only under certain moods and contexts, and in those contexts his state can't be modeled by the belief that not-P at all, and vice versa.
Implicit bias is a particular case of the more general phenomenon of implicit attitudes. Implicit attitudes are ways we evaluate or value the object of the attitude, where this evaluation isn't immediately accessible through introspection, isn't under our immediate control or deliberation, or is subpersonal on some other definition. Implicit biases are implicit attitudes we hold toward members of particular social groups. They are often understood in the context of dual-process models of cognition: there are two distinct systems that compose our cognition and that are defined by different operating principles. System 1 is fast, automatic, and non-conscious, and it operates according to associative principles; it is impulsive and links representations on the basis of similarity and contiguity. In contrast, System 2 is slow, controlled, and conscious, and operates according to propositional processes; it is sensitive to the truth-values of representations, and is inferential. We may contrast this with a single-process model of cognition, on which there is only one unified system of cognition, and apparent differences in mental states and attitudes are to be accounted for in terms of differences in how much control we have over the behavior that a state or attitude prompts.
I was most interested in the chapters in the "metaphysics" part of this two-part anthology (more accurately described as "philosophy of mind": these chapters focus on the architecture of the mind that explains how to make sense of implicit bias), so I will summarize those chapters. In chapter 1.1, "Playing double: implicit bias, dual levels, and self-control," Frankish introduces his own dual-system model of cognition and shows how to understand implicit bias according to it. He understands explicit belief, in contrast to implicit bias, as a matter of commitment: when we explicitly believe that P, we are committed to having our behaviors and reasoning affirm the truth of P. This is not unlike a promise; we intend to honor the commitment, and this requires that we notice and challenge implicit attitudes that stand in the way. So explicit belief can control behavior only via the implicit beliefs and desires which directly control behavior; we can use an explicit belief to modify our implicit beliefs. This view explains why under time pressure or cognitive burden we are more likely to act on the basis of implicit biases than when we have the time and energy to retrieve our explicit beliefs and allow those to inform our behavior.
This implies that we have more control over our implicit biases, behaviors, and emotions the more we are committed to our explicit beliefs (i.e., the more we desire to uphold those explicit beliefs). This opens further questions for me: What allows us to increase our desire for an explicit belief? Does it have to be a matter of personal desire, or could an explicit belief be rather understood as an imagined state of affairs that we take to be real, and the more real it shows up to us phenomenologically, the more power it'd have to make implicit beliefs and desires adjust according to it?
In chapter 1.2, "Implicit bias, reinforcement learning, and scaffolded moral cognition," Huebner argues for a model of the mind as composed of three kinds of systems, and shows how each system contributes to what we take to be an implicit bias. The first system is Pavlovian; it is completely subpersonal and learns from brute associative processes, in response to biologically salient stimuli. The second system is associative and model-free; it assigns values to actions on the basis of previous outcomes of those actions. The third system is model-based and forward-looking; it is responsible for generating possible actions to take next.
The Pavlovian system presents objects to us as having certain values. The model-free system then decides which object to pursue on the basis of those values; usually it will just follow what gave us positive outcomes last time. Then, if there is competition between different objects to pursue, we activate the forward-looking, model-based system to decide. Huebner argues that people with the strongest commitments to egalitarian values are most able to overcome implicit bias: when we hold temporary goals in mind, this can alter the value of a stimulus, and when we then act in accordance with that value, our baseline expectations of which outcomes follow from which actions get updated.
This view leaves me wondering: what is going on at the personal level when we talk about these three distinct systems? Huebner implies that the second and third systems may be responsive to person-level deliberation and activity, but he doesn't go into that. By talking about matters in functionalist, subpersonal terms only, Huebner keeps silent on a number of important questions. Why is it that only certain personal-level activities are successful at changing the activity at the "Pavlovian" level -- in other words, why is it that only certain goals we entertain can shift the values of objects we're presented with? What mental activities other than entertaining goals can do this?
In chapter 1.3, "The heterogeneity of implicit bias," Holroyd and Sweetman argue that it may sometimes be misleading or unhelpful to talk about a unified category of implicit bias. Instead, there are different kinds of processes that get lumped together under this category, and these different kinds are linked to behavior and to each other in different ways. So if we care about figuring out practical strategies for modifying unjust behavior, it is important to target only certain apparent implicit biases -- the ones underpinned by processes that are in fact hooked onto behavior. In particular, implicit biases that involve strong emotion and negative evaluation of others are hooked onto behavior, while those that merely associate members of a social group with characteristics that are in principle neutral (e.g., black people with athleticism) do not impact behavior.
In chapter 1.4, "De-Freuding implicit attitudes," Machery argues that implicit attitudes should be understood not as mental states or processes but as traits. Mental states are capable of being occurrent or dispositional and are retrievable. In contrast, traits are dispositions to behave toward and cognize about some object in a way that reflects some preference; they cannot be occurrent at all.
I found chapters 1.1 and 1.2 most philosophically interesting. I can see a lot of use in thinking about dual-process models of cognition to make sense of phenomena like self-deception, or other cases where we take something to be real so that it governs our automatic emotions and behaviors, and yet we "know" in some way that it is not real. System 1 is automatic and drives emotion and behavior, so it can perhaps be identified as the culprit behind making something show up as part of reality. My dissatisfaction with this approach is that talking about matters in strictly functionalist terms can blind us to other processes or factors relevant to explaining the target phenomenon. Once we put things in functionalist terms, we're likely to think only about other potential functions or states of cognition that relate to a given identified factor. In contrast, if we leave things in personal-level or phenomenological terms, we're freed up to identify all sorts of processes or factors that might be relevant.
For example, suppose I talk about system 1 of my friend's cognition as processing the automatic evaluation that he is better off in social isolation. I then ask myself what accounts for system 1 doing this, and I am tempted to posit further cognitive processes that explain how this particular evaluation happens. In contrast, if I talk about my friend as having a (person-level) experience in which it seems most desirable and rational to him to self-isolate, this opens the questions of what other beliefs, desires, or past experiences of his led to this; what happens when he has thoughts or imaginings that portray the opposite of this experiential state; why those thoughts or imaginings might fail to challenge that state; and so on. I may even then ask what features of a thought or imagining give it power to challenge an experiential state; I can potentially discover new dimensions on which thoughts and imaginings vary that explain a given one's degree of power, and this discovery may in turn inform a conceptual innovation in my picture of the functionalist level of explanation. In other words, only in doing phenomenology can we challenge and update our background theory of mind, which determines how we think about the mind in subpersonal terms.