The Truth Behind Errors of Reasoning. Cognitive Fallacies as a Matter of Conceptual Coherency


Scientific Essay, 2015
21 Pages

Excerpt

Content

Summary

1. Introduction

2. The Metaphorical Foundation of Formal Rules of Derivation

3. Example: Belief Bias

4. Cognitive Representations of Syllogisms

5. Conclusion

6. Literature

Summary

Traditionally, research on cognitive fallacies has pursued a normative approach, mainly aiming to identify the main influences that lead to erroneous response behavior, such as belief bias and confirmation bias (Klayman 1995), both of which have been shown to correlate with strategies of argumentation and motivational factors (White/Brockett/Overstreet 1993; Wolfe/Britt 2008; Oswald/Grosjean 2004).

Many other related studies rely on dual process theory, according to which the human cognitive system is based on two modes of operation, very often simply termed intuition and reasoning (cf. Stanovich 1999; Evans/Over 1996; Evans 2012; Kahneman 2003, 2011). Contrary to folk psychology, a person’s susceptibility to premature conclusions, or the likelihood of being led astray by salient mindsets, does not seem to correlate with a person’s general problem-solving ability (Stanovich/West/Toplak 2013). Although it seems natural to view cognitive fallacies as a deficient form of reasoning or as an effect of misplaced “gut instinct”, these explanations mostly neglect the fact that even persons who fall victim to logical fallacies usually consider their line of reasoning completely consistent with the general rules of deduction.

There is good reason to assume that so-called cognitive fallacies are actually a natural side effect of the attempt of the human mind to create a coherent scenario when the available input is ambiguous enough to allow for the construction of various conceptual metaphors to serve as a guiding mindset for the process of reasoning. Crucial in this process is the capability of the conceptual system to refer to and operate on concepts while postponing or even disregarding the determination of their semantic content. This idea is captured in Fauconnier’s (1994) Access Principle which, in simple terms, claims that an entity in one particular mental space (e.g. reality space) can be referred to in terms of the corresponding entity in a different mental space (e.g. built space) even if both entities differ semantically.

It will be shown that it might subjectively seem necessary to make flexible use of reference strategies based on mental spaces in order to integrate conflicting pieces of information into a coherent scenario. Evidently, the cognitive effort needed to determine and to apply the necessary strategies correlates negatively with the amount of relevant prior knowledge and general intelligence. This would allow for ascribing certain types of cognitive fallacies to very specific mental processes.

In the first part of this analysis, I give a brief overview of well-known reasoning errors and present several feasible approaches that are commonly believed to explain these errors.

The second part sketches the manner in which conceptual metaphors may be involved in the shaping and guiding of formal reasoning processes. The sticking point is the mapping of linguistically presented information onto spatial relations.

In the third part, I use a study conducted by Evans (2004) to show why standard explanations — which commonly argue for a temporary impairment of the reasoning faculty caused by seemingly unrelated factors (e.g. belief bias) — fail to account for the fact that some test subjects evidently value the believability of the presented information more highly than logical coherence.

The fourth part explains the plausibility of logical fallacies using insights from metaphor theory and mental space theory. The main idea is that constructing mental spaces onto which linguistic information can be adequately mapped is a rather complex and costly task. Particularly in those cases where the second alternative calls for a conceptual reframing of the metaphorical basis, it might seem cognitively less expensive to map a TYPE (role) onto a TYPE, or an INDIVIDUAL (value) onto an INDIVIDUAL (value), rather than to switch between the two modes, even if switching is what is formally stipulated.

In any case, I postulate that the construal of the given information is guided by the desire to establish a coherent scenario (making sense). As the capacity to (temporarily) process specific and nonspecific reference to an argument in parallel requires substantial mental resources, individual differences in processing speed and short-term memory matter when it comes to maintaining the individual mapping paradigm under difficult circumstances — i.e. when quantifiers impede the referencing of arguments. From that perspective, cognitive fallacies appear to be instances of unrequested conceptual structures being consistently applied, rather than of a generally poor reasoning ability. This becomes obvious if one takes into account that the appropriate, i.e. logically consistent, line of deduction relies heavily on conceptual metaphors that are not necessarily accounted for by the context in which the problem is presented.

Keywords: metaphor, bias, mental space, cognitive fallacies, false conclusions

1. Introduction

It is a well-known fact that an individual’s ability to draw logically correct conclusions from given information depends not only on the content but also on the type of the assignment. In particular, it has been demonstrated that the correctness of syllogistic conclusions strongly correlates with the extent to which the factual content of the respective assignment is considered subjectively credible (Evans 1983, 2002, 2004; Evans/Newstead/Byrne 1993; Manktelow 1999). Some authors (Stanovich 1999; Evans/Over 1996; Evans 2012; Kahneman 2003, 2011) have pointed out that the human cognitive system has two operating modes competing with each other for resources:

System 1: A fast-working system based on knowledge and experience, functioning via associative learning. In this mode, each problem triggers the reconstruction of an experience-based context in which the problem can then be embedded.

System 2: A slower, abstract-formal system that generally requires higher intelligence and is associated with greater cognitive effort.[1]

Although both systems are optimized for different problem situations, it is not uncommon for formal syllogistic conclusions to be overridden by inferences from reconstructed contexts, so that the results are “distorted”. The classical experiment by Wason (Wason 1968; Wason/Johnson-Laird 1972) illustrates such a cognitive interference. Test subjects were presented with four cards imprinted with symbols:

Fig. 1: Wason Selection Task

illustration not visible in this excerpt

At the same time, the subjects were told the following rule with regard to the imprinted symbols:

If one side of a card shows an A, then the other side shows a 3.

Subsequently, the test subjects are asked to pick the two cards that they consider necessary to turn over in order to test this rule. Although a conditional A → B is logically equivalent to its contrapositive ¬B → ¬A, and the rule can consequently only be tested by turning over the first and the fourth card, just a small proportion of the test participants chose this option, while the majority chose the first and the third card. There are a number of explanations for this erroneous conclusion: Klayman (1995) postulates a tendency to interpret given data in the light of one’s own hypothesis (confirmation bias or myside bias). This phenomenon has been established in different contexts, whereby the intensity of its expression was shaped in particular by argumentation strategy and motivational factors (White/Brockett/Overstreet 1993; Wolfe/Britt 2008; Oswald/Grosjean 2004), but seemed independent of overall cognitive abilities (Stanovich/West/Toplak 2013).
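The normative solution can be checked mechanically. The following sketch is my own illustration, not part of the original study: the text identifies only the first card (A) and the third card (3), so the second letter (D) and the fourth number (7) are assumptions chosen to match the classic setup. The code verifies the contrapositive equivalence and determines which visible faces could possibly conceal a violation of the rule:

```python
from itertools import product

# Hypothetical card faces: A and 3 are named in the text; D and 7
# are assumed stand-ins for the second and fourth card.
LETTERS = ["A", "D"]
NUMBERS = ["3", "7"]

def rule_holds(letter, number):
    # "If one side shows an A, the other side shows a 3": A -> 3.
    return letter != "A" or number == "3"

def must_turn(visible):
    # A card needs to be turned over iff some possible hidden side
    # would make it violate the rule.
    if visible in LETTERS:
        candidates = [(visible, n) for n in NUMBERS]
    else:
        candidates = [(l, visible) for l in LETTERS]
    return any(not rule_holds(l, n) for l, n in candidates)

# A -> B is equivalent to its contrapositive not-B -> not-A:
for a, b in product([True, False], repeat=2):
    assert ((not a) or b) == ((not (not b)) or (not a))

print([face for face in ["A", "D", "3", "7"] if must_turn(face)])
# -> ['A', '7']: the first and the fourth card
```

The "3" card drops out because the rule says nothing about what may appear behind a 3; only the A card (modus ponens) and the 7 card (modus tollens) are informative.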

Based on their own experiments, Evans/Lynch (1973) conclude that the information that is more salient with regard to the structure of the task (A and 3) is preferred in most everyday situations, owing to the lower cognitive effort and greater efficiency of such strategies (matching bias, see also Evans 2003). Evans/Barston/Pollard (1983) documented that a higher textual plausibility of the conclusions in test items led test subjects to accept them even when they were logically invalid (belief bias). Moreover, Evans (2004, p. 142) speculates that test subjects resort to a default strategy taken from everyday parlance and, in analogy to it, incorrectly interpret the conditional as a bi-conditional in classic syllogisms as well:

(1) If he is over 18 years of age, then he is entitled to vote.

However, everyday life definitely allows for a differentiated perspective, and the interpretation of the conditional can vary depending on the context, since a structurally similar sentence such as (2):

(2) If he is 18 years of age, then he is an adult.

results in a bi-conditional interpretation – obviously because in this case a knowledge-based strategy is applied. Evans (ibid.) concludes that such apparent mistakes are not so much a sign of “irrational thinking” as in fact a situation effect: for the vast majority of everyday situations, applying the knowledge-based operating mode is indeed the more efficient approach, and it therefore has to be seen as perfectly rational to use it as the default strategy. This assessment supports the assumption that cognitive mistakes should be analyzed less as the result of the faulty use of a correct problem-solving strategy than as the correct use of an inadequate problem-solving strategy.
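The difference between the two readings of (1) can be made explicit with a truth table. This small sketch (my own illustration, not part of Evans’ analysis) contrasts the material conditional with the bi-conditional reading suggested by everyday parlance; the two diverge in exactly one case:

```python
from itertools import product

# p: "he is over 18 years of age", q: "he is entitled to vote"
print("p      q      if-then iff")
for p, q in product([True, False], repeat=2):
    conditional = (not p) or q   # material conditional: (1) read formally
    biconditional = (p == q)     # bi-conditional: the everyday reading
    print(f"{p!s:6} {q!s:6} {conditional!s:7} {biconditional!s}")

# The readings differ only for p=False, q=True. Treating (1) as a
# bi-conditional therefore licenses the invalid inference "he is not
# over 18, hence he is not entitled to vote" (denying the antecedent).
```

Sentence (2), by contrast, really is bi-conditional on its knowledge-based reading, which is why the everyday strategy succeeds there and fails on (1).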

However, neither positing two operating modes nor the confirmation bias or the belief bias offers a refined explanation for cognitive mistakes of the Wason type, since to a certain degree these hypotheses are nothing but reformulations of the well-known fact that the familiar is more easily processed than the unfamiliar. What is still missing is an adequate analysis of the notion of logical-syllogistic thinking that would determine the source of the additional cognitive effort, or of the lower propensity to engage in it.

2. The Metaphorical Foundation of Formal Rules of Derivation

In their work Where Mathematics Comes From, Lakoff/Núñez demonstrate that numerous mathematical concepts are rooted in human physical experience. The same applies to formal logical operations (Lakoff/Núñez 2000, p. 136f.): formal logical thinking owes its broad functionality to the fact that basic spatial-physical concepts can be projected onto the sphere of interpretations as conceptual metaphors (“terms are vessels”), which allows for translating categorical relationships into spatial relationships (ibid. p. 36).

Tab.1: Categorical and Spatial Relationships According to Lakoff/Núñez 2000

illustration not visible in this excerpt

These relationships can be visualized very easily in the form of Venn diagrams. Figure 2 represents Modus Ponens (x) and Modus Tollens (y) with container outlines, Fig. 3 two forms of hypothetical syllogisms. More complex representations can easily be constructed via recursion.

Fig. 2: Modus Ponens and Modus Tollens

illustration not visible in this excerpt

Fig. 3: Two forms of hypothetical syllogisms

illustration not visible in this excerpt

For the following arguments it is important to keep in mind that the “logically correct” inference is understood as a conclusion that reflects the facts, provided the container schemas are used adequately. In the following section, a few examples of cognitive mistakes will be analyzed in terms of container-schematic representations. The working hypothesis is that for error-prone syllogisms there are several alternative, possibly even conflicting forms of schematization. Should this be confirmed, a second step would have to determine which cognitive strategies could lead to the choice of an inadequate form and what could motivate this choice.
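Since the figures are not visible in this excerpt, the container mapping can be sketched in code instead. This is my own minimal illustration, not Lakoff/Núñez’s formalism: categories become sets, “x is inside container A” becomes membership, “A is inside B” becomes subset inclusion, and the derivation rules fall out as consequences of containment:

```python
def modus_ponens(A, B, x):
    """If container A lies inside container B and x is in A, then x is in B."""
    assert A <= B and x in A, "premises not satisfied"
    return x in B

def modus_tollens(A, B, x):
    """If container A lies inside container B and x is outside B, then x is outside A."""
    assert A <= B and x not in B, "premises not satisfied"
    return x not in A

def hypothetical_syllogism(A, B, C):
    """If A lies inside B and B lies inside C, then A lies inside C."""
    assert A <= B and B <= C, "premises not satisfied"
    return A <= C

# Toy example with assumed categories: birds ⊆ animals ⊆ things
birds = {"sparrow", "penguin"}
animals = birds | {"dog"}
things = animals | {"stone"}

print(modus_ponens(birds, animals, "sparrow"))         # True
print(modus_tollens(birds, animals, "stone"))          # True
print(hypothetical_syllogism(birds, animals, things))  # True
```

The point of the container metaphor is precisely that these formal rules reduce to spatial facts: whatever is inside an inner container is inside the outer one, and whatever is outside the outer container cannot be inside the inner one.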

The belief bias study by Evans/Barston/Pollard (1983) shall serve as the example for this analysis, because other proposals such as the confirmation bias, although at first sight emphasizing other analytical focal points, show very strong structural similarities to the belief bias upon closer investigation[2]. As already mentioned, the belief bias rests on the observation that the experience-based operating mode optimized for everyday situations (system 1) often comes into play even when the problem requires a formal approach. Depending on the form and content of the task, either the general plausibility of the conclusion or a situation-specifically generated presupposition may be involved, which can then be classified as belief bias, matching bias, or confirmation bias, respectively.

3. Example: Belief Bias

In a study by Evans/Barston/Pollard (cited in Evans 2004, p. 137), test subjects were presented with a number of classic syllogisms consisting of two premises and a conclusion. The test subjects were asked to decide whether the given conclusion was valid or not. An example taken from Evans (2004, p. 137):

(3) GIVEN

No millionaires are hard workers

Some rich people are hard workers

DOES IT FOLLOW THAT

Some millionaires are not rich people

YES NO

In order to test to what extent test subjects allow themselves to be influenced by their belief in the real-life correctness of a conclusion when assessing its formal validity – in other words, to what extent test subjects refer to their experience-based operating mode during processing – the items were crossed along the dimensions valid/invalid and believable/unbelievable, resulting in four categories of items:
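Whether the conclusion in (3) actually follows can be settled by a brute-force search for countermodels. The following sketch is my own illustration, not part of the study; the domain bound of 3 individuals is an assumption that suffices for three-term syllogisms. It enumerates all small set-models of the terms millionaires (M), rich people (R), and hard workers (H) and looks for one that makes the premises true and the conclusion false:

```python
from itertools import product

def is_valid(premises, conclusion, n_terms=3, max_domain=3):
    """Return (True, None) if no countermodel with at most `max_domain`
    individuals exists, else (False, countermodel)."""
    for size in range(1, max_domain + 1):
        domain = range(size)
        # Each term denotes an arbitrary subset of the domain,
        # encoded as a bitmask over the individuals.
        for masks in product(range(2 ** size), repeat=n_terms):
            sets = [frozenset(d for d in domain if (m >> d) & 1)
                    for m in masks]
            if all(p(*sets) for p in premises) and not conclusion(*sets):
                return False, sets  # premises true, conclusion false
    return True, None

p1 = lambda M, R, H: not (M & H)   # "No millionaires are hard workers"
p2 = lambda M, R, H: bool(R & H)   # "Some rich people are hard workers"
c  = lambda M, R, H: bool(M - R)   # "Some millionaires are not rich people"

valid, counter = is_valid([p1, p2], c)
print(valid)    # False: the conclusion does not follow
print(counter)  # e.g. no millionaires at all, one rich hard worker
```

For comparison, swapping the conclusion to “Some rich people are not millionaires” (`bool(R - M)`) makes the argument valid under this search, which is the inference that the premises actually license.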

Tab. 2: Proportion of Correct Answers as a Function of the Credibility of the Statement (see Evans 2004, p. 139)

illustration not visible in this excerpt

[...]


[1] This terminology is in line with Kahneman (2003).

[2] It could be demonstrated that the confirmation bias is in particular triggered by or confounded with external factors like motivational factors (Trope & Liberman 1996) and hypothesis-testing strategies (Klayman/Ha 1987).

Excerpt out of 21 pages.

Quote paper: Patrick Kühnel (Author), 2015, The Truth Behind Errors of Reasoning. Cognitive Fallacies as a Matter of Conceptual Coherency, Munich, GRIN Verlag, https://www.grin.com/document/337345. ISBN (eBook) 9783656984870; ISBN (Book) 9783656984887.
