Table of contents
2. How can one deceive oneself? - Proposed mechanisms
2.1. To reason or not to reason?
2.2. Biased hypothesis testing
2.3. Motivated perception
2.4. Triggers and mediators
3. The reason behind the reasoning - Proposed motives
3.1. Self-esteem or self-maintenance?
3.2. Self-protection or self-enhancement?
3.3. Any motivation at all or just violation of expectation?
4. Motivated reasoning in economic and political contexts
5. Advantages and disadvantages of self-deception
6. Conclusion - Or "I had a dream"
“I want to tell you a secret. I see dead people. They don't see each other. They only see what they want to see. They don't know they're dead.”
From the movie “The Sixth Sense”
My grandma really loves her daughters. And she has very lovable and capable daughters. My aunt, for example, keeps losing her jobs. But as my grandma states, that is just because she is overqualified. Furthermore, my aunt always falls for chaotic men and has been divorced several times. But, of course, it has never been her fault. Finally, my aunt has some debts, but she usually copes pretty well with money. At least that is what my grandma thinks. To others it may seem that my aunt is not that capable. But my grandma is certainly very capable in one respect: motivated reasoning and self-deception.
The term motivated reasoning refers to a kind of self-regulation that enables people to believe favorable things even when there is strong evidence against those beliefs. Motivated reasoning is closely linked to terms like “wishful thinking” and “denial” and may lead to “unrealistic optimism” or “self-deception”. It occurs especially in situations that threaten one's self-concept or previously held expectations about one's future. Imagine, for example, somebody who is fired from a job although that person believes him- or herself to be an intelligent and capable employee. Or a man who believes he is happily married, but whose wife comes home smelling of someone else's aftershave. Both persons are potential candidates for motivated reasoning, because motivated reasoning enables them to keep their positive beliefs. The fired person – imagine she is a woman – might, for example, conclude that she was fired because her boss was sexist. And the cheated husband might convince himself that the smell of cologne is the result of his wife standing very close to another man on the tram.
Anyone who has seen the movie “The Sixth Sense” - or read the introductory quote - can guess how far motivated reasoning and self-deception can go. Convincing oneself of being alive rather than dead is, of course, a very uncommon topic of self-deception; nevertheless, motivated reasoning is a widespread phenomenon. People have the ability – and often also the opportunity – to see a glass as half full or half empty. Guess which they do! Right. Motivated reasoning therefore occurs not only when beliefs are threatened and people want to retain them, but also when they first acquire them. Likewise, the beliefs of being happily married and of being intelligent may never have been realistic beliefs in the first place, but flattering products of motivation.
But how exactly does motivated reasoning work? How can one deceive oneself? Are people aware of deceiving themselves, and how far can they go in doing so? What kinds of motivation drive them? And, perhaps most importantly, is it useful that people are able to see what they want to see?
2. How can one deceive oneself? – Proposed mechanisms
“People can foresee the future only when it coincides with their own wishes, and the most grossly obvious facts can be ignored when they are unwelcome.”
George Orwell - 1945
Over the last 50 years there has been a broad debate about whether misconstructions of reality are due to cognitive shortcomings or to emotional mechanisms (for a review, see Markus & Zajonc 1985). The controversy has not been fully resolved (e.g. Tetlock & Levi 1982); however, it is widely maintained that cognition can be initiated and directed by emotions and motivations, a process of so-called hot cognition (e.g. Kruglanski 1983; Lodge & Taber 2005; McKay, Langdon & Coltheart 2007; Pyszczynski & Greenberg 1987; Westen, Blagov, Harenski, Kilts & Hamann 2006). Nevertheless, the underlying processes of motivated reasoning and self-deception might be explained by several specific mechanisms, as well as within broader frameworks such as balance theory (Heider 1958/1977), dissonance theory (for a review, see Frey 1986) and attribution theory (e.g. Weiner 2005).
2.1. To reason or not to reason?
People are confronted daily with an overwhelming amount of information and a barrage of events that they somehow have to process cognitively. They have to decide what the information or event means and then evaluate how significant it is for them. But as humans possess only limited cognitive capacity, not all events are processed to the same extent. If information is positive and consistent with prior beliefs, it is processed without much engagement and accepted quite uncritically. But if something seems negative or surprising, people engage much more in processing the information in order to gain insight into the event (Kruglanski 1990; cf. balance theory, e.g. Markus & Zajonc 1985). Thus, the term motivated reasoning refers, first, to the fact that people are motivated to reason more about undesirable and unexpected information than about desirable and expected information.
One might think this is adaptive: when something is negative, there might be a risk, and people will want to prepare for it; when something is surprising, people might think more about it in order to correct their prior beliefs. But as it turns out, often the opposite is true. People engage more in reasoning about negative events because they have the goal of forgetting them (Baumeister & Cairns 1992) and holding on to their beliefs (e.g. Ditto & Lopez 1992; Chaiken, Liberman & Eagly 1989; Janis & Terwilliger 1962; Kunda 1987; Liberman & Chaiken 1992; Markus 1977). Therefore people look at the information skeptically. They hunt for flaws to dismiss its validity, and then they search for plausible alternative interpretations and counter-arguments. Similar to the previously mentioned examples of being fired or being cheated on, such situations occur when people face negative feedback (e.g. Baumeister & Cairns 1992) and negative test results – e.g. on intelligence tests (Wyer & Frey 1983), social-sensitivity tests (Pyszczynski, Greenberg & Holt 1985) or medical tests (Ditto et al. 2003).
The concept of motivated reasoning as described stands in strong contrast to the concept of defensive inattention, or selective exposure, which proposes that people withdraw their attention from negative events and information instead of directing even more attention to them (e.g. Janis & Terwilliger 1962; Brock & Balloun 1967). Two resolutions have been proposed for this contrast. Both hold that people shift between the two self-deceptive strategies; they differ in which strategy comes first. The first solution suggests that people initially engage in motivated reasoning. But if the evidence against their favored beliefs turns out to be very strong, people cannot refute the information, because in doing so they would lose the “illusion of objectivity” (Kunda 1990), which requires that data and conclusion fit together (Heider 1958/1977). They therefore proceed to avoid paying attention to the topic (Ditto & Lopez 1992; Pyszczynski & Greenberg 1987). The second solution proposes the reverse order: people generally tend to avoid negative information in the first place (Baumeister & Newman 1994; Wyer & Frey 1983). But as this is often not possible – because the information is very salient (like being fired) and/or because other people are also aware of it – motivated reasoning is the only escape. Notably, besides these two sequential accounts, avoidance and motivated reasoning can also occur together: although somebody might not be able to selectively avoid exposure to the initial information or event, such as being fired, he or she is able to search selectively for information that confirms the prior belief and dismisses the initial event (see below).
However that may be – inattention or heightened attention – the crucial point is that the trigger for both is “hot” factors like motivation, affect, emotion and desire. This can even be shown in fMRI observations of brain activity (Westen et al. 2006). The reason is that people use their cognitive structures, like schemata (for a review, see Markus & Zajonc 1985), to evaluate information, and cognitive structures are probably never purely cold structures but are always affect-laden (Taber, Lodge & Glathar 2001). When the cheated husband is confronted with his wife's infidelity, the schemata of his wife and his marriage will be activated. And as they are loaded with positive affect due to his prior belief, he will automatically try to evaluate the new information in a way that justifies the positive affect he perceives. But as he also perceives negative affect due to the smell of aftershave, he has to examine the situation more extensively – or he might avoid it (cf. dissonance theory on arousal, e.g. Zanna & Cooper 1974).
Nevertheless, it should be noted that cognitive structures may also have a “cold” influence on the amount of reasoning. If information is consistent with prior beliefs, people process it very quickly, because schemata for this kind of information already exist (Frey 1986; Lodge & Taber 2005; Markus 1977; Markus & Zajonc 1985; Wyer & Frey 1986). Inconsistent information, on the contrary, needs more time to be processed and is more likely to be perceived as dubious (Pyszczynski et al. 1985), as existing schemata are hard to change (Markus & Zajonc 1985). Inattention to negative information might be explained similarly: precisely because there are no schemata or structures that fit the information, it is given little or no attention, because it cannot be categorized (Markus 1977; Taylor & Brown 1986).
2.2. Biased hypothesis testing
So far it has been noted that people process negative information in greater quantity (Ditto & Lopez 1992; Kruglanski 1983, 1990). But beyond that, they might also process it with a different quality (Kunda 1987, 1990; Pyszczynski & Greenberg 1987). Generally, the processing of information can be divided into several steps (Pyszczynski & Greenberg 1987): To begin with, one has to generate a hypothesis that explains the event. Then one has to generate an inference rule and search for the information needed to apply it. Afterwards, the implications of the information are drawn, and one can accept the hypothesis or dismiss it. Or, one can go through the same process with other inference rules to evaluate the hypothesis more accurately.
Up to this point it would seem that people make quite rational judgments. Indeed, it is often stated that lay people behave like “naïve” (Heider 1958/1977) or “intuitive scientists” (Baumeister & Newman 1994). But although the process itself seems quite rational, it may be motivationally biased in several ways so as to give events the meaning one wants them to have. The consequence is that people often behave more like “intuitive lawyers” (Baumeister & Newman 1994).
Generating a hypothesis: The first point is that people usually generate hypotheses that are favorable for them (Mele 1997; Pyszczynski & Greenberg 1987; cf. Wyer & Frey 1983). Referring back to the example of being fired, the woman will more likely generate the hypothesis that her boss was sexist than the hypothesis that she was fired due to a lack of intelligence. Notably, which hypothesis is chosen also depends on the cold feature of availability. Although availability is a non-motivational factor, what is available might in turn be caused by motivation, because all knowledge has its roots in prior information processing (Markus & Zajonc 1985), and, as stated above, people also engage in motivated reasoning when forming beliefs (Kunda 1987). The hypothesis of being fired due to a lack of intelligence is therefore probably not very available to the woman, because she might have dismissed negative feedback or interpreted ambiguous feedback in a favorable way (for a review, see Dunning 2005), and she might have engaged in downward social comparisons (Pyszczynski, Greenberg & LaPrelle 1985). Or, as attribution theory states, she might have attributed success experiences to internal causes and failure experiences to external causes – as she now does with the firing and the sexist boss. And maybe she has even created the possibility of external explanations herself through self-handicapping (e.g. Berglas & Jones 1978). Furthermore, she might have attributed failure events to uncontrollable and unstable causes while attributing success to controllable causes and stable character traits (e.g. Fiske & Neuberg 1987, 1990). And finally, she might have simply defined intelligence so that it fits the skills she has (Dunning, Meyerowitz & Holzberg 1989).
Generating inference rules: Once a favorable hypothesis is chosen, the road to self-deception seems clear of obstacles, because people tend to choose rules that confirm their hypothesis (Pyszczynski & Greenberg 1987; Chaiken et al. 1989).
Searching for information: The process of searching for information is also more like a speedway than a roadblock on the way to a favorable judgment. The primary reason is that people focus on confirming rather than disconfirming information (e.g. Baumeister & Newman 1994; Kunda 1990). First, people do so by choosing a favorable information source – or at least they spend more time with favorable sources (e.g. Pyszczynski, Greenberg & Holt 1985). Second – and even more importantly – people screen the source for confirming information while they avoid disconfirming information (cf. Mele 1997) – at least if it cannot be refuted or outweighed by supporting information (for a review concerning dissonance theory, see Frey 1986; cf. avoidance versus reasoning above). The consequence of this selective information search is that people fall victim to the confirmation bias. The bias may be increased by the fact that people integrate pseudo-confirming information such as mere co-occurrence (Miller & Ross 1975; Dunning 2005; Markus & Zajonc 1985) or confirming information they have actually created themselves through self-fulfilling prophecies (Dunning 2005). Furthermore, it may be increased because confirming information is remembered better (e.g. Ross, McFarland & Fletcher 1981; Kihlstrom et al. 1988).