
Delusion and Self-Deception: Affective and Motivational Influences on Belief Formation

This collection of essays focuses on the interface between delusions and self-deception. As pathologies of belief, delusions and self-deception raise many of the same challenges for those seeking to understand them. Are delusions and self-deception entirely distinct phenomena, or might some forms of self-deception also qualify as delusional? To what extent might models of self-deception and delusion share common factors? In what ways do affect and motivation enter into normal belief-formation, and how might they be implicated in self-deception and delusion? The essays in this volume tackle these questions from both empirical and conceptual perspectives. Some contributors focus on the general question of how to locate self-deception and delusion within our taxonomy of psychological states. Some contributors ask whether particular delusions - such as the Capgras delusion or anosognosia for hemiplegia - might be explained by appeal to motivational and affective factors. And some contributors provide general models of motivated reasoning, against which theories of pathological belief-formation might be measured.

The volume will be of interest to cognitive scientists, clinicians, and philosophers interested in the nature of belief and the disturbances to which it is subject.

312 pages, Kindle Edition

First published January 1, 2008


About the author

Tim Bayne


Ratings & Reviews



Community Reviews

5 stars: 0 (0%)
4 stars: 2 (33%)
3 stars: 3 (50%)
2 stars: 1 (16%)
1 star: 0 (0%)
Displaying 1 of 1 review
Review by Alina, February 25, 2022
This anthology offers a nice mixture of papers by philosophers and psychologists on the topics of delusion and self-deception. Traditionally, delusion is conceived of as not necessarily involving motivated thinking (e.g., desiring that something be the case), whereas self-deception is. The majority of the chapters, unfortunately, are driven by one of two debates, neither of which I find very interesting, as both could be resolved once we set up the appropriate framework. (This "review" is less a summary of the chapters' contents; they aren't very philosophically substantial but involve empirical findings and claims that are downstream of certain assumptions. Philosophical work would involve questioning those assumptions. So I'm instead going to summarize those assumptions below, and then ramble about my thoughts in response to them.)

The first is the debate over whether a delusion is primarily due to a delusional experience, on the basis of which one forms appropriate beliefs, or due to disordered reasoning or belief-forming processes, such that one could have a normal experience but end up with a delusional belief. Much of the dispute between authors in this debate seems to stem from their lack of convergence on definitions of belief and experience and the relation between the two. What is a belief, what is an experience, and how are they related? There is a folk-psychological understanding of belief as a representation of a state of affairs that one endorses as true and that is formed on the basis of evidence. But there is also a functional understanding of belief as something that controls behavior; on this definition, belief is not easily distinguishable from experience, or may even be an element constitutive of experience (given that behavior is responsive to and controlled by experience).

There could be a substantive debate here: whether a delusion is primarily the result of whatever shows up to a person spontaneously, or the result of dysfunction in the ways a person consciously or deliberately responds to what has shown up to them. ('That which shows up spontaneously' can include belief and experience alike, regardless of how each is defined.) For example, if someone has a paranoid delusion that their partner is having an affair, it could be because they cannot escape finding themselves confronted with an apparent reality in which their partner is unfaithful, or because, while encountering no such apparent reality at all, they have some bizarre desire to form the belief that their partner is unfaithful. When framed this way, it's more obvious that most delusions would not fall under the second case. When people have delusions, they are not cognitively working hard against the tide to arrive at a delusional picture of the world while everything that shows up to them spontaneously is normal and counters this picture.

The second debate that I take to be ill-formed, and that unfortunately sets the terms of research, is about self-deception. The phrase "self-deception" invokes the idea that the phenomenon involves a person deceiving herself, analogously to how a person may deceive another person. Problems arise when we apply features of interpersonal deception to intrapersonal deception: interpersonal deception is intentional, involving one party consciously aiming to deceive the other. But how could this state of affairs hold within a single person? If a person is trying to deceive herself into believing that P, this would require her to believe both P and not-P. Moreover, if she intentionally tries to deceive herself, she is aware of the strategy, and awareness of the nature and intent of the strategy ought to render it ineffective (e.g., if I know you're trying to deceive me into believing that P, and that you in fact believe not-P, I won't end up believing that P).

A lot of ink has been spilled on ways of addressing these problems. For example, Mele in chapter 3, "Self-deceptions and delusions," proposes that self-deception is not intentional; it simply involves a desire for P to be the case, which leads a person to be sensitized to evidence in favor of P and blind to counterevidence. I worry that focusing on these problems distracts us from more interesting issues when it comes to self-deception. How is it possible to be compelled to act and feel in response to P when we 'know' intellectually that P isn't the case? For example, my friend may be compelled to act and feel as if his recent novel were genuinely good, when he also 'knows' this is not the case.

When we frame the question this way, it brings us to consider that our typical concepts about the mind (not just in philosophy and psychology, but also in ordinary speech) may be inadequate for modeling our psychological reality. Take the concept of belief. We understand belief as aiming at truth, as formed on the basis of evidence, as representing some state of affairs, and as guiding action. But it is not uncommon at all for us to be in states that involve some of these features and lack others; we can be troubled by a "gap" between the mind and the heart, and wish for it to be bridged. What I think is the most philosophically interesting chapter in this anthology addresses this. Chapter 13 "Imagination, delusion, and self-deception" argues that delusions consist in states that are a mashup between imagination and belief, and self-deception consists in states that are between belief and desire.

This gets me to wonder whether there are alternative ways of explaining our psychological realities in terms other than belief, desire, and imagination. I'm thinking, tentatively, that there is something that shows up to us as part of reality; when it does, it necessarily arouses automatic emotional responses and a readiness to behave in certain ways. But we can also consciously recognize it as not part of reality, but as delusional or a result of self-deception, for example. When that happens, we represent to ourselves (in a format that would traditionally be described as imagination, belief, or perhaps thought) that this apparent reality is false, and that some alternative reality, not the one that initially showed up, is real.

When self-deception wins out, it is often because the contents of this representation fail to show up as part of reality, defined as necessarily arousing those emotional and behavioral responses. (Think of the contrast between imagining that a zombie apocalypse is chasing you vs. imagining a recent break-up; only the latter is part of reality. While the former might increase your heart rate and make you break out in a sweat if you imagine it vividly enough, you are still not compelled to take any of the actions, like calling the police, that you would take if it did show up as part of reality.)

There are many different causal factors that could explain why the contents of a representation that one 'knows' to be true fail to show up as part of reality. For example, one could be lazy, refusing to think hard enough about it; sometimes thinking through all the causal consequences or the broader context of some fact gives it greater force or urgency. Or one might simply lack any past experience or familiarity with the state of affairs represented; in this lack of familiarity, one lacks the habits and background knowledge that might be crucial for forming appropriate emotional and behavioral responses. Or no one around us believes that it is true, or everyone believes the opposite (imagine someone who lives in a very religious community and entertains something antithetical to the beliefs around her). There might be unconscious cognitive processes that make what we take others to believe more likely to show up as part of reality for us. (And notice that this is a range of processes that we would not typically take to lead to "belief"; we think belief is formed on the basis of considered reasons. Yet these processes lead to that which controls our behaviors and emotions, perhaps more so than what we traditionally take to be belief.)

This is a very vague framework, and I'd like to identify more precise causal entities and processes that figure into it. Of course, this framework could be broken down into components of belief, imagination, desire, and other traditional psychological concepts. But if it were, each of these psychological attitudes would be seen as occupying a particular functional role, which removes the ambiguity of the various roles ascribed to them when they are invoked without this framework.

Another reason I'd like to avoid these traditional psychological concepts is that they are nowadays invoked in the context of cognitive psychology, or of philosophical views of the mind influenced by it. This tradition likes to think of the mind as analogous to a computer, with clearly delimited functional roles that could be filled by different states and that are related to one another in clearly delimited ways. One issue with this is that it encourages a distorted view of the role played by causal factors on experience that are not psychological states (e.g., values, interpersonal relationships, cultural and historical forces). To accommodate such factors, we'd need to imagine how they influence which states occupy which functional roles, or which contents they supply to those states. In other words, the tradition of cognitive psychology hands us a vocabulary that limits the ways we can imagine how various forces in the world that obviously shape us figure into our lives.
