An Analysis of Comprehension Problems based on Discourse Analysis and Relevance Theory

Field of Study: English as a Lingua Franca


Thesis (M.A.), 2010

232 Pages, Grade: 1,5


Excerpt


Content

1. Introduction

2. The development of ELF research

3. Understanding utterances
3.1. Dimensions of understanding
3.2. Code model
3.3. Grice’s theory of conversation
3.4. Relevance Theory
3.4.1. The cognitive and communicative principle of relevance
3.4.2. Explicature and implicature

4. Discourse and coherence
4.1. Text as the linguistic manifestation of discourse
4.2. The necessity of coherence for comprehension

5. Analyzing comprehension problems
5.1. The general nature of comprehension problems
5.2. An interdisciplinary model of comprehension problems
5.3. Gass/Varonis’ model of NNS miscommunication
5.4. Models in ELF research
5.4.1. Misunderstandings in ELF
5.4.2. Strategies for signalling misunderstandings

6. The Midwestern discussion study
6.1. Objectives and methodological approach
6.2. Participant profiles
6.3. Analysis of comprehension problems
6.3.1. Overview
6.3.2. Problems with form
6.3.3. Problems with meaning
6.3.4. Problems with coherence
6.3.5. Negotiation of communication
6.3.6. Influence of external factors
6.3.7. Limits of the approach

7. Conclusion

Appendix

Transcription conventions

Instructions for discussions

Appendix A: Discussion #1

Appendix B: Discussion #2

Appendix C: Discussion #3

Appendix D: Discussion #4

1. Introduction

It is a linguistic reality of today’s world that English has become a global lingua franca. It is the language of the economy, technology, international politics, and the internet. English is used as the language of communication not only in NS-NS or NS-NNS interactions, but also on a large scale in NNS-NNS exchanges.[1] In roughly 80% of communication in English, no native speakers are involved (cf. Gnutzmann 2000). As Smith/Nelson (1985) state: “Native speakers are no longer the sole judges of what is intelligible in English” (p. 1). Only recently have scholars in applied linguistics begun to research the characteristics of the use of ELF (English as a lingua franca) in interactions between non-native speakers of English. For the purpose of ELF research, large corpora such as VOICE (Vienna-Oxford International Corpus of English) have been built up, containing data from naturally occurring ELF exchanges. A corpus of this kind is currently also being compiled at the University of Tübingen: the Tübingen Midwestern Corpus, which contains group discussions on a given economic topic. Most of the discussion participants are international students with different first language backgrounds. Some discussions also include a native speaker of English. The collected data is used by graduate students for conducting research whose results they present in a graduate course. Presentations frequently deal with the issue of comprehension, i.e. comprehension problems and their causes are analysed. For this purpose, students independently conduct retrospective interviews with the discussion participants after the discussions have taken place. Based on my experience from student presentations in graduate courses, there has to date been no uniform and systematic approach to the analysis of comprehension problems in Midwestern discussions.
In this thesis I will design a structured set of questions for a standardized retrospective interview that serves to analyse comprehension problems efficiently. As the investigation of comprehension problems must go beyond an analysis of purely linguistic features of an interaction (syntax, morphology, phonology), the situational circumstances in which the interaction takes place will also be taken into account. The use of a uniform and systematic set of interview questions will facilitate the comparison of research results. These standardized retrospective interview questions will be applied to four group discussions taken from the Midwestern corpus. Relevance Theory and discourse analysis will be applied to the data collected in the retrospective interviews with participants from these discussions. First, chapter 2 provides a general overview of the development of ELF research so far. As a theoretical foundation for the analysis of the interview data, chapters 3 and 4 introduce Sperber/Wilson’s (1995) Relevance Theory, a theory of utterance interpretation, as well as Brown/Yule’s (1983) notion of coherence in discourse, which deals with the textual level of utterance interpretation. Chapter 5 focuses on Gass/Varonis’ model of NNS miscommunication and on studies of comprehension in the field of ELF research. Finally, chapter 6 elaborates on the methodological approach and the empirical study.

2. The development of ELF research

In the wake of the global spread of the English language, the number of non-native speakers has exceeded the number of native speakers. Three out of every four users of English around the world use it as a second or foreign language (cf. Crystal 2003). The Kashmir-born scholar Braj Kachru was the first to bring order into the diversity of the existing World Englishes. In his famous “three circles” model, Kachru (1985) distinguishes between three types of English, categorizing its different uses in three circles: The inner circle contains those countries in which English is spoken as a native language, i.e. the United Kingdom as the actual origin of English, the USA, Canada, Australia, and New Zealand. The outer circle includes those countries in which English is spoken as a second language, existing side by side with strong indigenous languages and assuming an official function, e.g. India, Pakistan, Ghana, Nigeria, and Singapore. The expanding circle encompasses those countries in which English has no official function but is acquired as a foreign language through formal education, e.g. Germany, Japan, Korea, Israel, and Indonesia. With this model, Kachru aims to emancipate the speakers of the non-native varieties of English in the outer circle from the linguistic authoritarianism of those who speak English as a native language in the inner circle. He proposes viewing the many varieties of English in the outer circle as independent varieties with their own normative right and legitimization.

In research, communication in English between people from different first language backgrounds has recently been referred to as English as a lingua franca (ELF). As Seidlhofer (2005) points out, further terms used for the international use of English besides ELF are “English as an international language” (EIL), “English as a global language” (Gnutzmann 1999; Crystal 2003), “World English” (Brutt-Griffler 2002) and “English as a world language” (Mair 2003). Whereas many of these categories include speakers from outer circle as well as inner circle countries, Seidlhofer (2004) emphasizes that the term “English as a lingua franca” relates to the use of English in interactions that solely include non-native speakers from the expanding circle. Seidlhofer considers ELF a new kind of English with its own characteristic linguistic features, referring to Firth’s (1996) definition of ELF as “a ‘contact language’ between persons who share neither a common native tongue nor a common (national) culture, and for whom English is the chosen foreign language of communication” (p. 240). At the same time, she does not deny that speakers with a first or second language background of English are often involved in ELF interaction. However, it is the non-native speakers of English from the expanding circle who stand out as the actual agents when it comes to the global spread and development of English (cf. Brutt-Griffler 1998). In this context, Seidlhofer (2004) envisages ELF users as having both the potential and the legitimization to shape the development of the English language. Hence, just as Kachru (1985) demands the emancipation of the outer circle, Seidlhofer demands the emancipation of the expanding circle, as she sees ELF users not only as norm-dependent, but also as norm-developing.

There has been descriptive research on ELF which, according to Seidlhofer (2005), has led to “a better understanding of ELF, which in turn is a prerequisite for taking informed decisions, especially in language policy and language teaching” (p. 2). So far, research has been conducted at three major levels: phonology, pragmatics, and lexicogrammar.

At the level of phonology, Jenkins (2000, 2002) worked out a syllabus including the phonetic-phonological features of Standard English that are crucial for mutual intelligibility between ELF speakers. She refers to this syllabus as the “Lingua Franca Core” (LFC). For example, the contrast between long and short vowels must be maintained, e.g. in “live” and “leave”. On the other hand, as Jenkins points out, there are certain phonological errors that do not hinder mutual intelligibility, e.g. some substitutions of the dental fricatives /θ/ and /ð/.

At the level of pragmatics, Firth (1996), Meierkord (1996) and Firth/Wagner (1997) identify the following two features of ELF interaction in their empirical research:

(i.) ELF interactants display a cooperative, supportive and consensus-oriented behaviour;
(ii.) ELF interactants attempt to “normalize” potential trouble sources rather than coping with them explicitly (e.g. via repair, reformulation, or other negotiating behaviour).

With regard to the second feature, Firth (1996) and Firth/Wagner (1997) notice that ELF interactants often do not engage in negotiated communication for handling problematic utterances. Instead they stay passive, relying on the solution of their comprehension problem in the course of the subsequent discourse. They consider such behaviour to be governed by a “Let-it-Pass” principle. Their empirical research is based on recorded business telephone calls involving the managers of two Danish international trading companies and their international clients. In contradiction to the finding in (i.), House (1999) observes that the ELF speakers in her study behaved in a self-centred way, engaging in parallel monologues, ignoring questions and abruptly changing topics. In a later study, House (2002) observes that Asian ELF speakers demonstrated solidarity with their interlocutors; however, she argues that this behaviour is superficial and only masks cultural differences.

At the level of lexicogrammar, large-scale ELF corpora are currently being compiled, such as the Vienna-Oxford International Corpus of English (VOICE) and the Corpus of English as a Lingua Franca in Academic Settings (ELFA), both of which contain data drawn from spoken discourse. Whereas the former contains data from sources such as telephone conversations, group discussions and interviews, the latter contains recordings of university activities carried out in English, such as international conferences. Using data from the VOICE corpus, Seidlhofer (2001) remarks that observations about the lexicogrammar of ELF talk suggest that various features which have usually been regarded as learner errors are produced regularly by ELF users from many different first language backgrounds. For example, many ELF speakers drop the third person present tense ending “-s” or mix up the relative pronouns “who” and “which”. Such errors do not hinder comprehensibility.

3. Understanding utterances

3.1. Dimensions of understanding

When is communication successful? Intuitively, one would say that communication between a speaker and a hearer can be regarded as successful whenever the hearer has understood an utterance that the speaker makes. But what exactly does it mean to understand? Milroy’s (1984) negative definition of communicative success provides a first clue: “As soon as there is a mismatch between the speaker’s intention and the hearer’s interpretation, the communicative success is threatened” (p. 8). Hence, understanding must be about successfully recognizing the speaker’s intention. Smith/Nelson (1985) specify three dimensions of understanding (cited in Pickering 1996:2):

1.) Intelligibility, i.e. an utterance is intelligible if the hearer is able to recognize its individual words (level of phonology and feasibility);
2.) Comprehensibility, i.e. an utterance is comprehensible if the hearer is able to understand its propositional meaning or that of its words in the given context (level of semantics); and
3.) Interpretability, i.e. an utterance is interpretable if the hearer is able to understand the speaker’s intention behind it (level of pragmatics)[2].

In chapter 3.4, Relevance Theory will be introduced: a theory of utterance interpretation that emerged from the Gricean theory of conversation and deals with the question of how hearers arrive at the propositional and intended meaning of an utterance. Whereas Relevance Theory is concerned with utterance interpretation from a micro-perspective, there is also a macro-perspective to be found in the linguistic literature: Chapter 4 will elaborate on the textual level of utterance interpretation by introducing Brown/Yule’s (1983) treatment of coherence as the necessary precondition for the successful interpretation of linguistic messages.

3.2. Code model

Until the emergence of Grice’s famous theory of conversation, the code model of communication prevailed in linguistic theory[3]. Its assumption is that communication is a process of encoding and decoding: The sender A encodes his or her message in language and transmits it as a linguistic signal to the receiver B, who then decodes this message, ideally using the same code (Huang 2007:185). A code can be defined as a system of signs that carry meaning; such signs may be words, whose meanings are explained by a dictionary. According to the model, human communication is successful if the sender and the receiver of the message share the same code, correctly apply their knowledge of linguistic rules for encoding and decoding the message, and the transmission of the signal is not hindered by external factors such as noise. Furthermore, the code model presupposes that the intended meaning of an utterance depends solely on the code of the utterance, i.e. on what is explicitly said linguistically. Contextual factors are not relevant for the successful decoding of a message.

The fact that context does not play a role in the code model of communication can be considered its main problem. Sperber/Wilson (1995) therefore argue “that it is descriptively inadequate: comprehension involves more than the decoding of a linguistic signal” (p. 6). This can be illustrated with the following utterance (taken from Huang 2007:185):

Advice given by the government during an outbreak of salmonella in the UK:

“Fried eggs should be cooked properly and if there are frail or elderly people in the house, they should be hard-boiled.”

By decoding alone, we are unable to make sense of what the personal pronoun “they” refers to in this sentence, as there are two possible antecedents, namely “eggs” and “frail or elderly people”. Hence, the code model falls short of accounting for the intended meaning of the utterance above, as it fails to explain contextual effects. Since contextual and real-world knowledge is crucial to the successful interpretation of utterances, the code model is not helpful for understanding how communication works. What is needed is a model that considers the necessity of drawing inferences when interpreting utterances. This is exactly what Paul Grice provided in the second half of the twentieth century: His inferential theory of conversation is a radical break with the code model.

3.3. Grice’s theory of conversation

Grice realized that what a speaker means often goes beyond what the uttered words encode. Ruling out the code model, he developed an inferential model for the analysis of human communication. This model considers communication to be based on inference, i.e. hearers need to infer (and not just decode) what speakers mean. In the fifties, Grice published an essay called “Meaning” in which he first articulated the insight that communication involves speaker-meanings and inference (cf. Grice 1957). His theory of meaning set the general framework within which work in pragmatics has been carried out to this day.

Grice’s theory of meaning provided the foundation for the inferential model. According to this theory, the meaning of an utterance is based on the intention the speaker has towards the audience. Communication then is about conveying and interpreting intentions. As Grice (1989) states:

‘“A meant something by x’ is (roughly) equivalent to ‘A intended the utterance of x to produce some effect in an audience by means of the recognition of this intention.’” (p. 220)

Speaker-meaning of an utterance is subdivided into an explicit level and an implicit level. The explicit level refers to what is said: It is made up of the conventional meaning of the utterance and its truth-conditional propositional content. The implicit level refers to what is implicated: Inference is needed to determine what is expressed implicitly in an utterance, as opposed to what is said linguistically. Therefore, in order to successfully interpret the speaker’s utterance, the hearer must recognize its “implicature”. Grice suggests that implicatures must be seen as something distinct from what is said in an utterance. Implicatures are not evident at first glance, but rather need to be worked out by considering the context in which the utterance was made. Grice distinguishes between two types of implicatures:

An utterance carries a conversational implicature if the implicit meaning of this utterance cannot be derived from the lexical meaning of the utterance (i.e. what is said explicitly), but only from the context in which the utterance is made. Grice (1975) provides the following example:

“You are the cream in my coffee.” (p. 53)

Conversational implicature: You are a wonderful person who makes me happy.

By contrast, conventional implicatures emerge from words that do not contribute to what is said, but to what is meant. Grice (1975) points to the following example, which refers to a man called John Smith:

“He is an Englishman; he is, therefore, brave.” (p. 44)

Conventional implicature: The fact that John Smith is brave results from the fact that he is an Englishman.

Here, the conventional implicature is associated with the sentence connective “therefore”, which yields an additional conveyed meaning. Conventional implicatures are interpreted within the lexical domain of the utterance and do not depend on special contexts, as conversational implicatures do.

As has been shown so far, Grice questioned the explanatory power of the code model by pointing to the distinction between stating and implying in communication. He therefore introduced the term implicature, which stands for the additional implicit meaning of an utterance that the hearer infers from context. It should now be asked how the hearer arrives at the conversational implicature of an utterance.

Grice (1975) developed a theory of conversation that deals with the question of what speakers and hearers do in terms of particular principles governing communication. His theoretical framework consists of two elements; namely, the cooperative principle and the four conversational maxims. Both are linked with one another: The cooperative principle can be considered the general principle that governs the four conversational maxims.

Cooperative Principle (CP)

In conversation participants try to make their contributions suitable to the shared purpose of the “talk exchange” that they are engaged in.

Grice distinguishes between four conversational maxims:

Maxim of Quantity

(1) Make your contribution as informative as is required (for the current purpose of the exchange).
(2) Do not make your contribution more informative than is required.

Maxim of Quality

Supermaxim: Try to make your contribution one that is true.

(1) Do not say what you believe to be false.
(2) Do not say that for which you lack adequate evidence.

Maxim of Relation

Be relevant.

Maxim of Manner

Supermaxim: Be perspicuous.

(1) Avoid obscurity of expression.
(2) Avoid ambiguity.
(3) Be brief (avoid unnecessary prolixity).
(4) Be orderly.

These are all standards that, according to Grice, rational speakers try to comply with in ordinary conversations and that, at the same time, rational hearers expect to be respected. When and how, then, do implicatures arise in communication?

An utterance carries a conversational implicature if its explicit content does not conform to one or more conversational maxims. Moreover, for the implicature to be interpreted, the hearer must assume that the speaker is nevertheless observing the Cooperative Principle. This may be illustrated with the following exchange, in which the maxim of relation is flouted (taken from Yule 1996):

Rick: “Hey, coming to the wild party tonight?”

Tom: “My parents are visiting.”

Rick and Tom are both college students and friends. On the explicit level, Tom’s response is not relevant to Rick’s inquiry, which takes the form of a yes-no question. Nevertheless, Rick will recognize that Tom means no: he firstly assumes that Tom is cooperative, and secondly draws on the contextual world knowledge that when students like Tom have parents visiting, they spend the time with them instead of with friends at parties.
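The two inference steps just described (assume cooperation, then bridge to the implicated answer via world knowledge) can be caricatured in a short sketch. The rule table and function below are invented purely for illustration; neither Grice nor Yule proposes any such formalization:

```python
# Toy sketch of deriving a conversational implicature (illustrative only).
# A literal yes/no answer needs no inference; any other reply is taken to
# flout the maxim of relation, so we assume the speaker is cooperative and
# look for a bridge in a hypothetical, hard-coded world-knowledge table.

WORLD_KNOWLEDGE = {
    # invented rule: visiting parents occupy a student's evening
    "My parents are visiting.": "no",
}

def infer_answer(literal_reply):
    reply = literal_reply.strip().rstrip(".").lower()
    if reply in ("yes", "no"):
        return reply  # explicitly relevant: pure decoding suffices
    # The reply is not literally relevant; assuming the Cooperative
    # Principle still holds, search world knowledge for the implicated answer.
    return WORLD_KNOWLEDGE.get(literal_reply, "unresolved")

print(infer_answer("My parents are visiting."))  # -> no
print(infer_answer("Yes."))                      # -> yes
```

The sketch makes one point visible: the implicated “no” comes from the knowledge table, not from the words of the reply, mirroring the claim that implicatures are inferred from context rather than decoded.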

3.4 Relevance Theory

3.4.1. The cognitive and communicative principle of relevance

Relevance Theory was developed by Dan Sperber and Deirdre Wilson and can be considered both a reaction to and a further development of Grice’s pragmatic theory of communication. It is a theory of pragmatics located in cognitive science. In contrast to the classical code model, which sees communication as a process of encoding and decoding linguistic meaning, it takes an inferential approach to pragmatics: understanding an utterance goes beyond merely decoding the linguistic meaning of its words. Rather, communication is achieved by expressing and recognizing communicative intentions. Hence, in the communicative exchange between the sender A and the receiver B, not only the lexical meaning of the words of A’s input affects B’s interpretation, but also the relevance of this input.

Sperber/Wilson (1981) criticize Grice’s selection of the criteria that make up his conversational maxims. They argue that in many communicative situations these maxims are unnecessary or even misleading (cf. Sperber/Wilson 1981:172ff). Therefore, Sperber/Wilson propose “... that all the other maxims [can be] reduced to a single maxim of relevance which, by itself, makes clearer and more accurate predictions than the combined set of maxims succeeds in doing” (p. 174).

The reasoning in Relevance Theory is as follows: In communication, there is the assumption of relevance. This means that the information that the speaker provides is relevant in the sense that it helps the hearer work out the speaker’s intended meaning, using contextual knowledge[4]. Therefore, what is given literally is relevant for the hearer to work out the intention. If the speaker misjudges the hearer’s background knowledge, his or her information would not be relevant to the hearer.

Sperber/Wilson’s (1995) relevance theory is based on the Cognitive Principle of Relevance:


Human cognition tends to be geared to the maximization of relevance.

This means that people preferentially pay attention to those things that are cognitively worth noticing in their environment. Using an example from everyday life, when we are driving a car, we consciously or subconsciously pay attention to certain things that are relevant, such as traffic lights and the car in front of us. What is relevance? Sperber/Wilson consider relevance a function of two factors, namely (a) cognitive (or contextual) effects and (b) processing efforts:


a. Other things being equal, the greater the positive cognitive effects achieved by processing an input, the greater the relevance of the input to the individual at that time.
b. Other things being equal, the greater the processing effort expended, the lower the relevance of the input to the individual at that time.

Cognitive effects are the positive results in cognition that are gained by processing an utterance or other stimulus. Processing effort relates to the mental resources needed to process an input, including the intensity and the time required. Sperber/Wilson distinguish between three main types of cognitive effects that can result from processing new information (= input), namely those (i) strengthening an existing assumption, (ii) contradicting and cancelling an existing assumption, and (iii) producing a new conclusion inferred from the new information and existing assumptions together. The degree of relevance depends on the effort necessary for processing an input and the cognitive effect that results: The greater the cognitive effects of an input, the more relevant it is; the greater the processing effort, the less relevant it is. A maximum of cognitive effects combined with a minimum of processing effort yields a maximum of relevance.

As an illustration, Huang provides the following scenario (cf. Huang 2007:184ff): Xiaoming is a Chinese student who wants to rent a room from the Smiths, his potential landlords. He asks them whether they keep any cats and could receive one of the following three replies:

a. The Smiths have three cats.
b. Either the Smiths have three cats or the capital of China is not Beijing.
c. The Smiths have three pets.

According to the cognitive principle of relevance as stated above, all three replies are relevant to Xiaoming’s question, but (a.) is more relevant than either (b.) or (c.). Why? On the one hand, (a.) is more relevant than (b.): (a.) and (b.) are logically equivalent and yield the same amount of cognitive effect. The processing effort, however, differs: Whereas (a.) is easy to process, (b.) requires the recipient to compute a second, false disjunct. On the other hand, (a.) is more relevant than (c.), because the cognitive effect that results from processing (a.) is greater than the one that results from processing (c.). The reason is that (a.) entails (c.) and therefore generates, together with the context, all the conclusions that are deducible from (c.), among others.
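Huang’s comparison can be sketched as a toy computation. The numeric “effects” and “effort” scores below are invented purely for illustration; Relevance Theory defines relevance only comparatively and assigns no such numbers:

```python
# Toy illustration of comparative relevance (invented scores, not part of
# the theory): relevance rises with cognitive effects and falls with
# processing effort, other things being equal.

def more_relevant(x, y):
    """True if reply x = (effects, effort) beats reply y: at least the
    same effects for less effort, or more effects for no extra effort."""
    effects_x, effort_x = x
    effects_y, effort_y = y
    return (effects_x >= effects_y and effort_x < effort_y) or \
           (effects_x > effects_y and effort_x <= effort_y)

# (effects, effort) scores chosen to mirror the text:
a = (3, 1)  # "The Smiths have three cats": full answer, easy to process
b = (3, 2)  # logically equivalent to (a) but adds a false disjunct to compute
c = (2, 1)  # "three pets": entailed by (a), so it yields fewer conclusions

assert more_relevant(a, b)  # same effects, less effort
assert more_relevant(a, c)  # more effects, same effort
```

The two assertions restate the argument in the text: (a.) beats (b.) on effort alone and (c.) on effects alone, so (a.) is the most relevant reply under both clauses of the comparative definition.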

Whereas the cognitive principle of relevance deals with cognition in general, the Communicative Principle of Relevance, the second principle of Relevance Theory, deals with how cognition connects to communication. Sperber/Wilson (1986) formulate this principle as part of their model of ostensive-inferential communication:

OSTENSIVE-INFERENTIAL COMMUNICATION

a. The informative intention

The intention to inform an audience of something.

b. The communicative intention

The intention to inform the audience of one’s informative intention.

The communicative principle of relevance is based on Grice’s original theory of meaning, which states that in communication the speaker conveys a certain meaning that must then be inferred by the recipient (see chapter 3.3). Thus communication is about expressing and recognizing intentions. According to the model of ostensive-inferential communication, the communicator has two intentions: first, to inform the recipient about something (informative intention), and second, to make him or her realize this informative intention (communicative intention). The communicative intention may be expressed through any kind of ostensive behaviour, such as sniffing, sighing or groaning. An utterance is seen as an ostensive stimulus; i.e., it seeks to attract the recipient’s attention and at the same time, in accordance with the cognitive principle of relevance, makes the recipient assume that it is relevant enough to be worth processing (cf. Sperber/Wilson 2004:611). This kind of relevance, which the recipient is rationally entitled to expect, is called optimal relevance and is the focus of Sperber/Wilson’s communicative principle of relevance:


Every ostensive stimulus conveys a presumption of its own optimal relevance.

Thus, the communicative principle of relevance is the generalization about what the hearer can expect from ostensively communicated information. The presumption of optimal relevance is specified as follows:

An ostensive stimulus is optimally relevant to an audience iff:

a) It is relevant enough to be worth the audience’s processing effort.
b) It is the most relevant one compatible with the communicator’s abilities and preferences.

According to clause a), a stimulus is optimally relevant, and therefore worth processing, only if it is more relevant than any other input available at the time the stimulus is produced. According to clause b), it is in the communicator’s interest to make his or her stimulus as easy as possible for the audience to understand. Moreover, the phrase “compatible with the communicator’s abilities and preferences” in clause b) suggests that the communicator may be unable or unwilling to provide certain relevant information. Hence the ostensive stimulus is the most relevant one that the communicator is able and willing to produce (cf. Sperber/Wilson 1995:§3.3).

3.4.2. Explicature and implicature

Sperber/Wilson (1986, 1995) criticize that Grice focuses too much on what is implicated and neglects the pragmatic treatment of what is said. Therefore, Sperber/Wilson pay more attention to the explicit content of an utterance by developing the notion of explicature. According to Blakemore (1992), an explicature is “the result of fleshing out the semantic representation of an utterance” (p. 59). Formally speaking, explicating an utterance means obtaining its propositional content by working out the incomplete logical form of the utterance and enriching it with contextual information. The logical form is derived from the linguistic expressions contained in the utterance. There are several processes by which an explicature is worked out, e.g. disambiguation, reference solution, and free enrichment (also see chapter 3.3)[6]:

(1.) Disambiguation

Disambiguation is necessary if the linguistic system permits more than one interpretation of an utterance. By choosing one of the possible interpretations, one arrives at the explicature that completes the incomplete logical form. Example:

Flying planes can be dangerous.

a. The act of flying planes can be dangerous.
b. Planes that are flying can be dangerous.

As can be seen, there are two different explicatures, a.) and b.), to choose from.

(2.) Reference solution

Reference solution takes place when referents are attributed to referential expressions. A referent is recovered by both decoding and inference, based on linguistic and non-linguistic cues (e.g. tone of voice, word order, facial expression). Example:

a. John told Bill that he wanted to date his sister. Preferred interpretation: he = John, his = Bill’s
b. John told Bill that he couldn’t date his sister. Preferred interpretation: he = Bill, his = John’s

(3.) Free enrichment

Free enrichment is a pragmatically based process by which the logical form of an utterance is conceptually enriched in the explicature, i.e. what is said is specified in order to recognize what the speaker must have meant, using world and context knowledge (cf. Carston 1988)[7]. In the following examples, (1.b), (2.b) and (3.b) are the explicatures obtained by free enrichment of the logical forms of the basic utterances in (1.a), (2.a) and (3.a):

1.) a. John has a brain.
b. John has a scientific brain.

2.) a. It’s snowing.
b. It’s snowing in Boston.

3.) a. Everyone wore a new wool cardigan.
b. Everyone at Mary’s party wore a new wool cardigan.

Sperber/Wilson (2004) distinguish between basic explicatures and higher-level explicatures. The former is recovered simply by working out the propositional form of an utterance, e.g. through disambiguation and/or reference resolution. The latter is recovered by identifying the illocutionary force and the propositional attitude of the utterance. An utterance may convey more than one higher-level explicature, as the following example shows (taken from Sperber/Wilson 2004:276):

Peter: “Will you pay back the money by Tuesday?”

Mary: “I will pay it back by then.”

Mary’s reply includes one basic explicature and two higher-level explicatures:

a. Mary will pay back the money by Tuesday. [basic explicature]
b. Mary is promising to pay back the money by Tuesday. [higher-level explicature]
c. Mary believes she will pay back the money by Tuesday. [higher-level explicature]

Whereas the higher-level explicature in (b.) refers to the speech act of promising that Mary performs, (c.) refers to the propositional attitude that Mary expresses, namely the attitude of believing.

Whereas explicatures relate to the explicit content of an utterance and can be derived by both decoding and inference, implicatures relate to the implicit content and can only be recovered by pragmatic inference. For dealing with implicatures within Relevance Theory, Huang (2007) suggests the term r-implicature in order to avoid confusion with Grice’s concept of (conversational) implicature (Huang 2007:195). This term will be used henceforth in the analysis of implicatures within the framework of Relevance Theory.

Sperber/Wilson (2004) distinguish between two kinds of r-implicature: implicated premises and implicated conclusions. An implicated premise is “an appropriate hypothesis about the intended contextual assumptions”, whereas an implicated conclusion is considered “an appropriate hypothesis about the intended contextual implication” (p. 262). This distinction can be illustrated with the following example (taken from Huang 2007:195):

Car salesman: “Are you interested in test-driving a Rolls Royce?”

John: “I’m afraid I’m not interested in test-driving any expensive car.”

The following r-implicatures may be recovered from John’s reply:

a. A Rolls Royce is an expensive car.
b. John isn’t interested in test-driving a Rolls Royce.

(a.) is an implicated premise, as it contains the contextual assumption that a Rolls Royce is an expensive car. (b.) is the implicated conclusion that follows from combining the implicated premise (a.) with the proposition that John is not interested in test-driving any expensive car.[8]

A further distinction is that between strong and weak r-implicatures. Whereas strong implicatures are indispensable for understanding the speaker’s intended meaning, weak implicatures are not, because each of them may be only one of a vast array of equally possible r-implicatures. Sperber/Wilson (2004:265) provide the following example:

Peter: “Did John pay back the money he owed you?”

Mary: “No. He forgot to go to the bank.”

Mary’s reply, “He forgot to go to the bank,” yields the following r-implicatures:

a. John was unable to repay Mary the money he owes because he forgot to go to the financial institution.

b. John may repay Mary the money he owes the next time he goes to the financial institution.

(a.) is a strong r-implicature, as without it Mary’s reply would be irrelevant. (b.) is a weak r-implicature that may additionally be derived from Mary’s reply; it is weak because its derivation is not crucial for recognizing Mary’s intended meaning.

4. Discourse and coherence

4.1. Text as the linguistic manifestation of discourse

What is discourse? Discourse analysis deals with the question of how language is used in a certain context to express intention. Brown/Yule (1983) open the first chapter of their book “Discourse Analysis” with the following statement: “The analysis of discourse is the analysis of language in use” (p. 1). They were among the first to view discourse analysis from a pragmatic perspective, which looks beyond the linguistic clues given in an utterance and considers aspects of what is unsaid or unwritten but still communicated within the discourse. Before that, a structural perspective on discourse was dominant: Harris (1951) was concerned with structural equivalences within a text, describing linguistic forms as static objects. He coined the concept of “discourse” and did not distinguish between “discourse” and “text”. Harris’s structure-focused model was in line with the linguistic theory of structuralism of his time.

Widdowson (2004) criticizes Harris’ (1951) definition of discourse analysis as the study of language patterns above the sentence. According to Widdowson, one can also have discourse below the sentence. He argues that an expression like “TRESPASSERS WILL BE PROSECUTED” written on a sign may not be a text according to the Chafe criterion,[9] but is nevertheless intuitively textual in that it constitutes a complete communicative unit (p. 7). As a further example, Widdowson cites parking signs containing the single letter “P”, which serves the communicative purpose of telling drivers where to park their cars. He concludes that it is not the size of linguistic units but the circumstances in which they are used that determines textuality: he defines text not by its linguistic extent but by its social intent. Just like Brown/Yule (1983), Widdowson takes a pragmatic perspective on discourse analysis. He defines discourse as “the pragmatic process of meaning negotiation”, i.e. discourse is what we make of a text in communication (p. 8). Text is the product of discourse, whereby “the resources of the language code are used to engage with the context of beliefs, values and assumptions that constitute the user’s social and individual reality” (p. 14).

4.2. The necessity of coherence for comprehension

Harris (1951) considered the sentence the largest unit of language. Later, however, linguistics became interested in larger units than single sentences. In their work “Cohesion in English”, Halliday/Hasan (1976) deal with the question of what makes a text a text. They state that sentence and text are two different units: whereas the former is a structural and grammatical unit, the latter is “a unit of language in use,” specifically a semantic unit (p. 1). They consider a text “any passage, spoken or written, of whatever length, that does form a unified whole” (p. 1). Anything that counts as a text must intrinsically have texture, which can be defined as “the property of ‘being a text’” (p. 2). Texture is constituted through cohesion, referring to the relations of meaning that exist within the text. It is also synonymous with the term “discourse structure.” In the following example, taken from Halliday/Hasan (1976:2), the personal pronoun “them” and the noun phrase “six cooking apples” constitute a cohesive tie, as the pronoun semantically refers to the noun phrase:

Wash and core six cooking apples. Put them into a fireproof dish.

Five types of cohesion can be distinguished: reference, substitution, ellipsis, lexical cohesion, and conjunction. Regarding the latter category, explicit markers of conjunction are sentence connectives such as “furthermore” (additive conjunction), “but” (adversative), “consequently” (causal), and “after that” (temporal).
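The conjunction category lends itself to a simple mechanical illustration. The following Python sketch (my own toy, not taken from Halliday/Hasan) tags explicit conjunction markers in a passage with their category; the marker inventory is a tiny invented sample, and the substring matching is deliberately naive:

```python
# Illustrative sketch: tagging explicit conjunction markers with their
# Halliday/Hasan category. The marker list is a small, hypothetical
# sample, not an exhaustive inventory.

CONJUNCTION_TYPES = {
    "furthermore": "additive",
    "moreover": "additive",
    "but": "adversative",
    "however": "adversative",
    "consequently": "causal",
    "therefore": "causal",
    "after that": "temporal",
    "then": "temporal",
}

def tag_conjunctions(text: str) -> list:
    """Return (marker, type) pairs for every known connective found in text."""
    lowered = text.lower()
    return [(marker, ctype)
            for marker, ctype in CONJUNCTION_TYPES.items()
            if marker in lowered]

print(tag_conjunctions("It rained; consequently, the picnic was cancelled."))
# [('consequently', 'causal')]
```

A real cohesion analysis would of course require tokenisation and sense disambiguation of the markers (naive substring matching would, for instance, find “but” inside “attribute”), which this sketch ignores.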

Halliday/Hasan state that cohesive ties are “the only source of texture” (p. 9). Brown/Yule (1983) oppose this view by creating a model of discourse that is centered on the perception of the recipient: “Texts are what hearers and readers treat as texts” (p. 199). In this model, the mere presence of cohesion is not sufficient for something to be called a text. Brown/Yule’s approach is based on the concept of discourse representation. This term refers to someone’s mental representation of what exists in the world. Thus when a speaker or writer produces a piece of discourse, its interpretation by the recipient will be based on his or her individual representation of the world. Moreover, with each new piece of information that is provided in this discourse, the recipient’s mental representation will be changed.

As mentioned above, Brown/Yule argue that a sequence of sentences does not qualify as a text merely because it contains cohesive markers. They provide evidence with the following example:

“I bought a Ford. A car in which President Wilson rode down the Champs Elysees was black. Black English has been widely discussed. The discussions between the presidents ended last week. A week has seven days. Every day I feed my cat. Cats have four legs. The cat is on the mat. Mat has three letters.” (p. 197)

This passage contains several cohesive ties such as “Ford - car” and “black - Black”; hence it seems to have cohesion. Nevertheless, a reader would never call this passage a text, as it lacks one crucial feature: coherence. Coherence refers to the semantic connections within a text (implicit or explicit) that are established by the hearer or reader, depending on his or her socio-cultural knowledge and on the situation in which the text is produced and interpreted. The passage above is not coherent, as its sentences are linked arbitrarily rather than logically; the reader cannot make sense of its overall message. Whereas a collection of sentences can have cohesion without coherence, it can conversely have coherence without cohesion. Brown/Yule illustrate this with the following example taken from Widdowson (1978:29):

A: “That’s the telephone.”

B: “I’m in the bath.”

A: “Ok.”

There are no cohesive links in this communicative exchange between A and B. Nevertheless, we as readers would treat this dialogue as a text, because we still understand the utterances: by saying that he is in the bath, B conveys to A that he will not answer the telephone that A is talking about. Hence there is a semantic relationship between the two utterances. Brown/Yule conclude that cohesive ties are not necessary for creating texture and do not guarantee texture when coherence is missing.

Brown/Yule state that the assumption of coherence is the “normal expectation” in communication. There is an apparently unlimited number of features of context that can be considered in the interpretation of discourse. Brown/Yule, for example, point to Hymes’ (1964) list of features of context, which includes “addressor/addressee”, “topic”, and “setting” (where and when the event takes place). Given this wide range of contextual features, the question arises as to which of them have a bearing on the hearer’s act of interpretation. Brown/Yule propose two principles of interpretation which enable the hearer to arrive at a relevant and reasonable interpretation of an utterance or word within a certain context:

1.) Principle of Local Interpretation

According to this principle, the hearer does not construct a context any larger than he needs to arrive at an interpretation. Brown/Yule provide the following example:

“Similarly if his host says ‘Come early’, having just invited him for eight o’clock, he will interpret ‘early’ with respect to the last-mentioned time, rather than to some previously mentioned time.” (p. 59)

Brown/Yule’s principle of local interpretation is supported by Cruse’s (2000) statement about the process of contextual enrichment. Cruse points out that the hearer searches through possible domains in order to arrive at a satisfying interpretation, roughly in the following order:

(i) the immediately preceding discourse (strictly within short-term memory)
(ii) the immediate situation (currently available to the senses)
(iii) the broader situation
(iv) memory/general (or mutual) knowledge

2.) Principle of Analogy

According to this principle, interpretation is based on the recipient’s individual experience of similar events. Recipients assume that everything will remain as it was before unless they are given specific notice that something has changed. Consider the following sequence:

The baby cried.

The mother picked it up.

Based on experience (and on the principle of local interpretation), the reader will understand the two actions in this sequence as happening at the same time and place. As Brown/Yule point out, it is “the experience of similar events which enables [the hearer and reader] to judge what the purpose of an utterance might be” (p. 61).

According to Brown/Yule, the principle of local interpretation and the principle of analogy “form the basis of the assumption of coherence in our experience of life in general” (Brown/Yule 1983:67). Furthermore, the recipients of an utterance arrive at an understanding of its message by 1.) realizing its communicative function, 2.) activating background knowledge, and 3.) making inferences, as Brown/Yule point out:

(1.) The communicative function of the message

We arrive at the understanding of linguistic messages by recognising the action that is communicated through an utterance and the reason for it that is expressed in this utterance. As Brown/Yule state: “The action, and the reason for it, are to be identified by virtue of their location within a conventional structure of spoken interaction. This conventional structure provides an account of how some utterances which are apparently unconnected in formal terms (lack cohesion) may be interpreted within a particular genre of spoken interaction, say conversation, as forming a coherent sequence.” (Brown/Yule 1983:228). Thus, Widdowson’s example on page 21 above constitutes a coherent discourse as a conventionally structured sequence of actions can be identified (Brown/Yule 1983:228):

A: “That’s the telephone.”

B: “I’m in the bath.”

A: “Ok.”

- A requests B to perform an action
- B states the reason why he cannot comply with the request
- A undertakes to perform the action

In this context, Brown/Yule cite Austin’s (1962) speech act theory as a useful framework for analysis as it has the potential to account for the fact that utterances that are formally unconnected go together to build a coherent sequence. According to Austin, we form an utterance with a certain function in mind. For example, a landowner who has written “TRESPASSERS WILL BE PROSECUTED” on a sign hereby issues a warning. Thus, he performed some act. Austin calls this act an illocutionary act which is performed through the communicative force of the utterance. Utterances can be used for making statements, offers, requests, warnings, predictions and other communicative purposes.

(2.) The recipient’s general socio-cultural knowledge

The interpretation of discourse depends to a large extent on the pre-existing general socio-cultural knowledge we have of the world. A distinction can be drawn between general background knowledge and experiences gained in the past. We also have knowledge of the actual discourse context, i.e. the situation and the previous utterances that influence our understanding of the (upcoming) discourse content. In a given discourse situation, we essentially activate only the limited subset of our knowledge that is required for understanding it. As Brown/Yule put it, “[u]nderstanding discourse is (...) essentially a process of retrieving stored information from memory and relating it to the encountered discourse” (p. 236). Brown/Yule focus on two approaches from artificial intelligence (AI) research that describe how our knowledge is organised in memory:

The first approach is Minsky’s (1975) frame theory, according to which the knowledge in our memory is stored in the form of data structures called frames, which represent fixed stereotyped situations (Brown/Yule 1983:238). Consequently, when one experiences a new situation, one activates from memory a frame that fits the situation. The notion of the frame as stereotypic knowledge in one’s memory can be applied to discourse understanding: the recipient processes and eventually understands new information provided by utterances by relating it to pre-existing stereotypic information, i.e. a certain frame.

The second approach is Schank’s (1972) theory of conceptual dependency, later incorporated into the concept of scripts, according to which our understanding of what we read or hear is very much “expectation-based” (Brown/Yule 1983:242). Riesbeck/Schank illustrate this assumption with the following example (Riesbeck/Schank 1978:252, as cited in Brown/Yule 1983):

John’s car crashed into a guard-rail.

When the ambulance came, it took John to the x.

Reading this example, we have a strong expectation about what, conceptually, will occupy the x-position. Different lexical realisations are conceivable, e.g. hospital, doctor, medical centre, etc. With regard to the analysis of stories, Riesbeck/Schank introduce the concept of the “script”, a general understanding device that incorporates “a standard sequence of events that describes a situation” (Riesbeck/Schank 1978:254, as cited in Brown/Yule 1983:243). For example, when we read a newspaper story about a car accident, we cognitively retrieve an event sequence that is stereotypic of newspaper stories about car accidents and that guides our understanding of the story. Whereas the concept of the frame refers to a stable set of facts, the concept of the script refers to the sequence of facts as envisioned in a certain situation.
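The difference between a frame (a stable set of stereotypic facts) and a script (an expected event sequence) can be pictured with simple data structures. The sketch below is my own Python illustration, not Minsky’s or Schank’s actual formalism; all names and contents are invented:

```python
# Minimal sketch (my own illustration, not Minsky's or Schank's actual
# formalism): a frame as a dictionary of slots with stereotypic default
# fillers, and a script as a fixed sequence of event descriptions. The
# open slot in a script step generates expectations about its filler.

CAR_ACCIDENT_SCRIPT = [
    "vehicle crashes",
    "ambulance arrives",
    "victim is taken to the {destination}",
]

# Stereotypic knowledge: which fillers we expect for each open slot.
ACCIDENT_FRAME = {
    "destination": ["hospital", "medical centre", "doctor"],
}

def expected_fillers(script_step: str, frame: dict) -> list:
    """Return the stereotypic candidates for the open slot in a script step."""
    for slot, fillers in frame.items():
        if "{" + slot + "}" in script_step:
            return fillers
    return []

print(expected_fillers(CAR_ACCIDENT_SCRIPT[2], ACCIDENT_FRAME))
# ['hospital', 'medical centre', 'doctor']
```

The point of the sketch is only the division of labour: the script supplies the expected event sequence (including the x-position of the example above), while the frame supplies the stereotypic fillers for its open slot.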

In opposition to the view that discourse representation is based on stored stereotypic knowledge, Brown/Yule present Johnson-Laird’s (1981) proposal that understanding is achieved through the construction of mental models. He argues that understanding does not take place via the decomposition of word meaning, which, for example, is part of Schank’s (1972) theory of conceptual dependency. Instead, the recipient arrives at understanding by building up a mental model of a particular state of affairs in which the relevant events and entities are represented (Brown/Yule 1983:251). A mental model is thus a representation of the way the world is. Johnson-Laird points out that “in so far as natural language relates to the world, it does so through the mind’s innate ability to construct models of reality” (Johnson-Laird 1981:141, as cited in Brown/Yule 1983:252). Brown/Yule remark that mental models on the one hand allow for “a richer representation” than the models of stereotypic knowledge stored in memory (Brown/Yule 1983:255). At the same time, however, they serve only to understand, not to analyse, discourse, as they lack practical details.

(3.) The inferences that the recipient decides to draw

Coherence is also constituted by making inferences. Brown/Yule describe the notion of inference as “the process which the reader (hearer) must go through to get from the literal meaning of what is written (or said) to what the writer (speaker) intended to convey” (p. 255). By inference, the recipient can arrive at the intended meaning of the sender’s utterance, as in the following example (p. 256):

A is telling B: “It’s really cold in here with that window open.”

Intended meaning: Please close the window.

B must infer that A requests him to close the window. In the example below, inference can be thought of as the missing link between two utterances:

a. Mary got some picnic supplies out of the car.
b. The beer was warm.

Through inference, utterance (a.) and (b.) can be linked with each other, namely by forming the bridging assumption that the “picnic supplies” mentioned in (a.) contain the “beer” mentioned in (b.). Thus, the missing link is the information contained in a sentence such as (c.):

c. The picnic supplies mentioned include some beer.

In reference to the previous chapter on the recipient’s socio-cultural knowledge, Brown/Yule state that sentence (c.) expresses information that the recipient activates as part of his stereotypic knowledge, e.g. his “picnic-frame”.
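The bridging step just described can be mimicked in a few lines. The following Python sketch is an invented illustration, not Brown/Yule’s method: it models the “picnic-frame” as a set of stereotypically associated items, with contents I have made up, and licenses the bridge from “the beer” to “picnic supplies” only if the activated frame contains the referent:

```python
# Hedged sketch of the bridging assumption above (my own illustration,
# not Brown/Yule's method): the "picnic-frame" is modelled as a set of
# stereotypically associated items, and a definite NP may bridge to a
# previously mentioned event only if the activated frame contains it.

PICNIC_FRAME = {"beer", "sandwiches", "blanket", "basket"}  # invented contents

def bridges(definite_referent: str, frame: set) -> bool:
    """A definite NP can bridge to an antecedent if the frame activated
    by that antecedent stereotypically contains the referent."""
    return definite_referent in frame

print(bridges("beer", PICNIC_FRAME))   # True: licenses "The beer was warm."
print(bridges("piano", PICNIC_FRAME))  # False: no bridging link available
```

In other words, sentence (c.) above corresponds here to a successful set membership test against stereotypic frame knowledge; where the test fails, the recipient would have to construct the link by some other, more effortful inference.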

All in all, cohesive links may be clues to the coherence of a text. However, whereas cohesion is a property of the text itself, coherence is assigned to a text solely by the recipient. Brown/Yule demonstrate that cohesive ties are not necessary for the hearer/reader to create a coherent mental representation of what is occurring. Hence cohesion, unlike coherence, is not a necessary criterion for a collection of sentences to be called a text.

To summarize, when we encounter an utterance in a communicative situation, we engage in two types of processes (cf. Kohn 1988:108ff):

1.) “Bottom-up” processing, i.e. we construct a meaning interpretation by decoding the utterance with our linguistic knowledge, processing the linguistic clues given, e.g. words, structures and intonation.
2.) “Top-down” processing, i.e. we use contextual assumptions in order to infer what the speaker of the utterance might have intended to communicate. This kind of inferential processing is guided by pragmatic principles such as cooperativity (cf. Grice 1975), relevance (cf. Sperber/Wilson 1995), and local interpretation and analogy (cf. Brown/Yule 1983).


These two processes go hand in hand.

5. Analyzing comprehension problems

5.1. The general nature of comprehension problems

The analysis of comprehension problems in human communication has attracted the interest of many scholars. Research on this issue is based on the observation that communication is inherently flawed and problematic. Coupland/Giles/Wiemann (1991) put it pointedly when they write that “communication is itself miscommunicative” (p. 3). No matter in which area of life, whether at work or at home, and whether with native or non-native speakers of one’s own language, we constantly encounter comprehension problems that can have different sources. Successful comprehension goes beyond the successful, yet merely superficial, understanding of the linguistic expressions of an utterance, as Gass/Varonis (1991) point out:

“No natural speech utterance is ever made in a linguistic vacuum. Each is enriched and empowered by a social history that considers the relationships of class, status, power, and solidarity, and a linguistic history that includes culturally specific rules of discourse [...], politeness [...], conversational maxims [...], conversational inference [...], and patterns of interpretation [...].” (p. 121)

This passage implies that in human communication multiple levels can be identified on which problems of comprehension potentially can emerge. Comprehension problems may be especially critical when the sender and the recipient come from different language backgrounds and cultures and use a lingua franca for communication, i.e. a language of which neither is a native speaker (see chapter 4.4).

It is thus foreseeable that a proper and effective approach to the analysis of comprehension problems must shed light on a vast array of possible sources of problematic comprehension. Before presenting such sources, however, it is worth examining the terminological inconsistency in how comprehension problems are referred to.

[...]


[1] The abbreviation “NS” refers to “native speaker” and “NNS” to “non-native speaker”.

[2] As Pickering (2006) points out, “[t]his last level of divining a speaker’s intentions is understandably difficult to measure. Levis reports that the term has largely ‘fallen by the wayside’ (2005, p. 254). There remains a clear distinction in the literature, however, between ‘matters of form,’ comprising formal recognition or decoding of words and utterances, and ‘matters of meaning,’ variously described as ‘comprehensibility,’ ‘understanding,’ or ‘communicativity’ (Jenkins, 2000, p. 71)” (p. 2).

[3] The term code model is the label used by Sperber/Wilson (1986).

[4] The notion of relevance in Relevance Theory is different from the notion of relevance in everyday talk. Within the framework of Relevance Theory, we can also make sense of utterances that are irrelevant in the sense of incoherent. We still understand them because we take what literally is conveyed in the Relevance Theory sense.

[5] As Huang (2007) points out, the approach to relevance here is comparative and therefore “provides clear comparisons only in some cases.”

[6] Huang (2007) introduces two more processes of explicating, namely saturation and ad hoc concept construction. These processes are not of further interest for this thesis.

[7] The term free enrichment is borrowed from Recanati (2004).

[8] Huang (2007) points out that this analysis can only be applied to what is treated as a particularized conversational implicature in the Gricean theory (see chapter 3.3), not to a generalized conversational implicature. For example, in a sentence like “Some of John’s friends are vegans”, it is unclear what the implicated premise would be (Huang 2007:195).

[9] As Chafe (1992) states in the Oxford International Encyclopaedia of Linguistics: “The term ‘discourse’ is used in somewhat different ways by different scholars, but underlying the differences is a common concern for language beyond the boundaries of isolated sentences. The term TEXT is used in similar ways. Both terms may refer to a unit of language larger than the sentence: one may speak of a ‘discourse’ or a ‘text’” (Chafe 1992:356 cited in Widdowson 2004:6).


Details

Title
An Analysis of Comprehension Problems based on Discourse Analysis and Relevance Theory
Subtitle
Field of Study: English as a Lingua Franca
College
University of Tubingen
Grade
1,5
Author
Year
2010
Pages
232
Catalog Number
V175623
ISBN (eBook)
9783640967483
ISBN (Book)
9783640967728
File size
65676 KB
Language
English
Keywords
comprehension, English, ELF, discourse analysis, relevance theory, Yule, Brown, Sperber, Wilson, corpus, discourse, code model, Grice, maxims, coherence, misunderstandings, utterances, implicature, explicature, native speakers, non-native speakers, english as a lingua franca, lingua franca, interaction, interactions, mother tongue
Quote paper
Christian Kreß (Author), 2010, An Analysis of Comprehension Problems based on Discourse Analysis and Relevance Theory, Munich, GRIN Verlag, https://www.grin.com/document/175623
