Language vs. Music? Exploring Music’s Links to Language


Term Paper (Advanced seminar), 2011
24 Pages, Grade: 2,0

Excerpt

Contents

1 Introduction

2 Language vs. Music? Exploring Music’s Links to Language
2.1 Comparing the Structure of Language and Music
2.1.1 Structural Units
2.1.2 Rhythm in Language and Music
2.2 Language Processing vs. Music Processing? Comparing the Neural Processing of Language and Music
2.2.1 The Cerebral Hemispheres and their Function in Language Processing
2.2.2 Music Perception
Musical Syntax
Musical Semantics

3 Conclusion

4 Bibliography

1 Introduction

Language and music, both of which can be found in every human society, are the most basic socio-cognitive domains of the human species. At first glance, they share fundamental similarities: both are based on the acoustic modality and involve complex sound sequences. Language, like music, functions as a means of communication and a form of expression. Both systems are organized into hierarchically structured sequences, and written notation systems have been developed for both.

The interest in music-language relations has a long history, of course, and does not originate with modern cognitive science:

The topic has long drawn interest from a wide range of thinkers, including philosophers, biologists, poets, composers, linguists, and musicologists. Over 2,000 years ago, Plato claimed that the power of certain musical modes to uplift the spirit stemmed from their resemblance to the sounds of noble speech (Neubauer, 1986). Much later, Darwin (1871) considered how a form of communication intermediate between modern language and music may have been the origin of our species’ communicative abilities. Many other historical figures have contemplated music-language relations, including Vincenzo Galilei (father of Galileo), Jean-Jacques Rousseau, and Ludwig Wittgenstein. This long line of speculative thinking has continued down to the modern era (e.g., Bernstein, 1976). In the era of cognitive science, however, research into this topic is undergoing a dramatic shift, using new concepts and tools to advance from suggestions and analogies to empirical research.[1]

The production of music and language is a prime example of the human brain’s capacities. But does the brain process music in the same way as it processes language? Are language and music processed in the same hemisphere(s)? Are linguistic and musical irregularities processed by the same brain area(s)? What are the cognitive differences and similarities? And how can the underlying brain activity be measured? These and other complex questions are addressed in this seminar paper. The central interest is to explore and compare some of the structural and cognitive properties of language and music, and the links between them, in order to find out whether music is language-like in certain respects. The central questions are: Does music have something like a grammar or syntax? Is music able to convey meaningful information?

Chapter 2.1 examines the structural units of language and music (2.1.1) as well as rhythm in language and music (2.1.2). Chapter 2.2 compares the neural processing of language and music. Chapter 2.2.1 provides a basis for understanding how language and music are processed in the brain and deals with language lateralization, the brain’s main language areas, and related issues. Chapter 2.2.2 presents models of music perception and examines whether syntax and semantics are concepts that can be applied to music. Throughout, comparative research is used to provide insights into the architecture of both music and language.

2 Language vs. Music? Exploring Music’s Links to Language

Patel argues that “from the standpoint of modern cognitive science, music-language relations have barely begun to be explored.”[2] Nevertheless, the interdisciplinary relations between language and music have attracted growing interest from researchers around the world. Patel offers an explanation for the general interest in this field of research:

Humans are unparalleled in their ability to make sense out of sound. In many other branches of our experience (e.g., visual perception, touch), we can learn much from studying the behavior and brains of other animals because our experience is not that different from theirs. When it comes to language and music, however, our species is unique (…).[3]

“While different cultures have different musical forms, it seems likely that there are some universal (probably biological) connections between language and music.”[4] What is certain is that musical parameters play a very important role in the comprehension of a language:

All formal, suprasegmental elements of language are musical in nature: the melody, rhythm, and dynamics of speech. Although formal, they nevertheless strongly shape the content of an utterance, make it more precise, and shorten speaking time.[5]

Before taking a closer look at some of the suprasegmental characteristics of language and music in chapter 2.1.2, the structural units of both systems are briefly compared.

2.1 Comparing the Structure of Language and Music

2.1.1 Structural Units

Every human infant is born into a world with two distinct sound systems. The first is linguistic and includes the vowels, consonants, and pitch contrasts of the native language. The second is musical and includes the timbres and pitches of the culture’s music.[6]

Language and music are acoustic phenomena: both rely on acoustic patterns that change over time and are modulated, among other things, in pitch. The basis for both systems is a limited set of sounds and signs which, according to established rules, can be combined into a virtually unlimited number of sequences. Every language is based on a limited repertoire of phonemes, the smallest units of sound, and, due to language-specific phonological rules, a limited number of syllables, which allow countless combinations into semantically meaningful units such as morphemes, words, clauses, and phrases. The inventory of rules for combining these units into longer phrases is determined by the language-specific grammar.[7] Our language system is modularly organized into syntax, morphology, phonology, semantics, and so on. The syntax module, for example, produces syntactic strings on the basis of a systematic and recursive rule system. Language follows the “Principle of Compositionality”, also known as “Frege’s Principle”: the meaning of a sentence is determined by the meanings of its parts and the rules used to combine them. The meaning of a word itself, by contrast, is acquired; it is learned together with the word.
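The Principle of Compositionality can be made concrete with a toy sketch. The following is an illustration only, not part of the paper’s argument: the tiny lexicon, the predicates, and the combination rule for “and” are all invented for the example.

```python
# Toy sketch (illustrative assumption, not from the paper): the meaning of a phrase
# is computed from the meanings of its parts plus a rule for combining them
# (Frege's Principle). Predicate meanings are modelled as sets of individuals.

LEXICON = {
    "barks": {"Fido", "Rex"},       # the set of individuals that bark
    "sleeps": {"Rex", "Whiskers"},  # the set of individuals that sleep
}

def meaning_of_and(pred_a: str, pred_b: str) -> set:
    """Combination rule for 'A and B': intersect the meanings of the two parts."""
    return LEXICON[pred_a] & LEXICON[pred_b]

if __name__ == "__main__":
    # "barks and sleeps" is true of exactly those individuals in both sets.
    print(meaning_of_and("barks", "sleeps"))  # {'Rex'}
```

But how is music structured?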

Music, the art of arranging the sounds of voices or instruments, is also based on a limited number of sounds, notes, or tones. Every sound has several perceptual aspects, such as pitch, loudness, length, and timbre. Each of these properties can vary independently of the others, but the human mind is nevertheless able to distinguish “several categories along any of these dimensions”.[8] “Some sort of musical scale is widely used among many different cultures. All divide up the octave.”[9]

In Western European “equal-tempered” music (the basis of most of Western music today), each octave is divided into 12 equal-sized intervals such that each note is approximately 6% higher in frequency than the note below. This ratio is referred to as a “semitone.” (…) The 12 semitones of the octave are the “tonal material” of Western music (…): They provide the raw materials from which different scales are constructed.[10]
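To make the quoted 6% figure concrete, the following minimal sketch (an illustration, not taken from the paper) computes the equal-tempered semitone ratio, 2^(1/12) ≈ 1.0595, and derives the twelve semitone steps of one octave from an assumed reference pitch of A4 = 440 Hz.

```python
# Minimal sketch (assumptions: standard 12-tone equal temperament, A4 = 440 Hz).
# Each semitone multiplies the frequency by 2**(1/12) ≈ 1.0595, i.e. roughly 6%.

SEMITONE_RATIO = 2 ** (1 / 12)

def semitone_frequency(base_hz: float, steps: int) -> float:
    """Frequency of the note `steps` semitones above (or below) `base_hz`."""
    return base_hz * SEMITONE_RATIO ** steps

if __name__ == "__main__":
    a4 = 440.0
    for step in range(13):  # 12 semitone steps span exactly one octave
        print(f"{step:2d} semitones above A4: {semitone_frequency(a4, step):7.2f} Hz")
    # The 12th step lands on 880 Hz, i.e. double the base frequency (the octave).
```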

Several tones sounded at the same time can be combined into intervals and harmonies. Certain intervals appear in the scales of many cultures:

For example, the fifth (which is the most important interval in Western music after the octave) is also important in a wide range of other musical traditions, ranging from large musical cultures such as India and China to the music of small tribes in the islands of Oceania (…).[11]

Musical sequences follow certain principles of harmony. Harmony refers to the simultaneous combination of pitches into chords and to the structural principles that govern chord progressions. Harmonic principles explain, and predict, chord progressions. In Western music, harmonic relations are commonly represented by the circle of fifths, which encodes the harmonic distance between the keys used in a musical sequence and thus indicates which chords or keys are likely to follow. On the circle, a key is harmonically closest to its neighboring keys. Progressions such as D-G-C, for example, are common and perceived as harmonic. Harmonic principles thus trigger musical expectations much as a linguistic grammar does. Hierarchical principles organize a musical sequence as a whole on the basis of harmonic similarity, beat, and other grouping principles. But do harmonic regularities amount to a musical syntax? Can these principles be defined as a “musical grammar”? These questions are taken up again in chapter 2.2.2.
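Before that, the notion of harmonic distance on the circle of fifths can be illustrated with a small sketch. This is a simplified model invented for illustration, not a claim about the paper’s sources: it treats the twelve major keys as points on a circle and counts the steps between them.

```python
# Illustrative sketch (assumption): model the circle of fifths for the 12 major keys
# and measure harmonic distance as the number of steps around the circle.

CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "Db", "Ab", "Eb", "Bb", "F"]

def harmonic_distance(key_a: str, key_b: str) -> int:
    """Steps between two major keys on the circle of fifths (0 = same key)."""
    i, j = CIRCLE_OF_FIFTHS.index(key_a), CIRCLE_OF_FIFTHS.index(key_b)
    diff = abs(i - j)
    return min(diff, len(CIRCLE_OF_FIFTHS) - diff)  # the circle wraps around

if __name__ == "__main__":
    print(harmonic_distance("C", "G"))   # 1 -> neighboring keys, heard as closely related
    print(harmonic_distance("D", "C"))   # 2 -> still close; D-G-C descends by fifths
    print(harmonic_distance("C", "F#"))  # 6 -> maximally distant on the circle
```

In this toy model, a progression such as D-G-C moves one step at a time around the circle, which matches the intuition that such progressions sound harmonically close and expected.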

2.1.2 Rhythm in Language and Music

Rhythm is another important structural element of language and music, as linguistic and musical acoustic signals are rhythmically organized:

Speech and music involve the systematic temporal, accentual, and phrasal patterning of sound. That is, both are rhythmic, and their rhythms show both important similarities and differences. One similarity is grouping structure: In both domains, elements (such as tones and words) are grouped into higher level units such as phrases. A key difference is temporal periodicity, which is widespread in musical rhythm but lacking in speech rhythm.[12]
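The difference in temporal periodicity that Patel describes can be illustrated with a small sketch. The figures below are invented for illustration and are not an analysis from the paper: a perfectly periodic sequence of durations, as in a steady musical beat, has zero variability, whereas invented speech-like syllable durations do not.

```python
# Illustrative sketch (assumption, not from the paper): quantify how regular a
# sequence of time intervals is. Highly periodic sequences (typical of a musical
# beat) have low variability; speech durations are usually far more variable.

def duration_variability(intervals_ms: list[float]) -> float:
    """Coefficient of variation of a list of durations (std. dev. / mean)."""
    mean = sum(intervals_ms) / len(intervals_ms)
    variance = sum((d - mean) ** 2 for d in intervals_ms) / len(intervals_ms)
    return variance ** 0.5 / mean

if __name__ == "__main__":
    metronomic = [500, 500, 500, 500, 500]   # steady beat at 120 bpm
    speech_like = [180, 420, 250, 610, 330]  # invented syllable durations
    print(duration_variability(metronomic))   # 0.0  -> perfectly periodic
    print(duration_variability(speech_like))  # > 0  -> no strict periodicity
```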

Languages can be divided into three groups based on their rhythmic characteristics:

1) Stress-timed languages, such as English and German (stressed syllables occur at roughly regular intervals; unstressed syllables are often shortened or reduced)
2) Syllable-timed languages, such as Spanish and French (each syllable takes up roughly the same amount of time, regardless of stress)
3) Mora-timed languages, such as Japanese (the rhythmic units are moras, phonological units that determine syllable weight)

Infants are highly sensitive to the rhythmic characteristics of language and quickly learn to track the rhythmic-prosodic properties of the language they hear. Studies have also shown that infants perceive the rhythmic structure of music.[13] Patterns of rhythm, stress, intonation, phrasing, and contour most likely drive early learning in both language and music:

Such prosodic information is the first human-produced external sound source available in utero; the filtering properties of the fluid-filled reproductive system leave rhythmic cues intact relative to high-frequency information. Fetuses avail themselves of the incoming rhythmic patterns; (…) this is a process of implicit, nonreinforced learning.[14]

Language and music share a similar coding system: both rely on temporal patterns of duration, stress, and pauses. The succession of time intervals of different durations establishes rhythm and beat. In language, these time patterns do not function as autonomous structural units but depend on other linguistic levels, such as morphology. In music, by contrast, time patterns play an autonomous role: they structure sound sequences and create musical segments. Patel emphasizes further differences between language and music:

To take a few examples, music organizes pitch and rhythm in ways that speech does not, and lacks the specificity of language in terms of semantic meaning. Language grammar is built from categories that are absent in music (such as nouns and verbs), whereas music appears to have much deeper power over our emotions than does ordinary speech. Furthermore, there is a long history in neuropsychology of documenting cases in which brain damage or brain abnormality impairs one domain but spares the other (e.g., amusia and aphasia). Considerations such as these have led to the suggestion that music and language have minimal cognitive overlap (e.g., Marin & Perry, 1999; Peretz, 2006).[15]

[...]


[1] Aniruddh D. Patel (2008): Music, Language, and the Brain. Oxford/New York: Oxford University Press, p. 4. Patel is a senior fellow in Theoretical Neurobiology at the Neurosciences Institute in San Diego; his work draws on research in cognitive science and neuroscience.

[2] Patel (2008): Music, Language, and the Brain, p. 3.

[3] Patel (2008): Music, Language, and the Brain, p. 3.

[4] Bernard J. Baars (2007): “Prosody and melody”, in: Cognition, Brain, and Consciousness: Introduction to Cognitive Neuroscience, edited by Baars & Nicole M. Gage. Amsterdam, Boston, Heidelberg et al.: Elsevier, p. 391.

[5] Stephan Sallat (2008): „Sprache und Musik“ in: Musikalische Fähigkeiten im Fokus von Sprachentwicklung und Sprachentwicklungsstörungen. Wissenschaftliche Schriften im Schulz-Kirchner-Verlag. Reihe 3: Beiträge zur Sprach- und Literaturwissenschaft, Band 118. Idstein: Schulz-Kirchner, p. 5. Original quotation in: Johannes Pahn (ed.) (2000): „Musik in der Sprache – Sprache in der Musik“, in: Sprache und Musik: Beiträge der 71. Jahrestagung der Deutschen Gesellschaft für Sprach- und Stimmheilkunde e.V., Berlin, 12.-13. März 1999. Stuttgart: Steiner, p. 124.

[6] Patel (2008): “Musical Sound Systems” in: Music, Language, and the Brain, p. 9.

[7] Cp. Sallat (2008): „Sprache und Musik“, p. 6.

[8] Cp. Patel (2008): “Musical Sound Systems” in: Music, Language, and the Brain, p. 12.

[9] Baars (2007): “Prosody and melody”, in: Cognition, Brain, and Consciousness: Introduction to Cognitive Neuroscience, p. 335. – There are 12 available pitches per octave in Western music, out of which 7 are usually chosen to make a musical scale (e.g. the diatonic major scale). Cp. Patel (2008): “Musical Sound Systems”, in: Music, Language, and the Brain, p. 17.

[10] Patel (2008): “Musical Sound Systems”, p. 15.

[11] Patel (2008): “Musical Sound Systems”, p. 16.

[12] Patel (2008): “Rhythm”, in: Music, Language, and the Brain, p. 177.

[13] Cp. Sallat (2008): „Sprache und Musik“, p. 8. Sallat refers to a study conducted by Jusczyk & Krumhansl in 1993 (cp. “Pitch and Rhythmic Patterns Affecting Infants’ Sensitivity to Musical Phrase Structure”, in: Journal of Experimental Psychology: Human Perception and Performance, 19 (3), 627-640).

[14] Erin McMullen & Jenny R. Saffran (2004): “Music and Language: A Developmental Comparison”, in: Music Perception, Vol. 21, No. 3. University of Wisconsin-Madison, p. 294.

[15] Patel (2008): Music, Language, and the Brain, p. 4.

