EMBE (Erklärungsmodell zur Bedeutungserzeugung). Different Levels of Concepts, Objects and Actions


Research Paper (postgraduate), 2018
17 Pages

Excerpt

Contents

1. Introduction
1.1 Semantics and meaning
1.1.1 Semantic memory and semantic representations

2. Grounded/embodied and amodal/multimodal cognitive theories

3. Not at all an “utterly mysterious” process
3.1 Different levels of object systems
3.1.1 Elements of the first level
3.1.2 Elements of higher levels
3.2 The relation between first-level signs and second-level signs

4. Object, element, particle, entity, symbol, sign

5. Nonsense or evident?

Bibliography

1. Introduction

The ability not only to experience shapes and colours, to hear sounds, to smell odours, to comprehend movement, to feel haptic sensations and to experience feelings, but also to find meaning and sometimes purpose in perceived “things”, is the core function of what we call ›cognition‹, and numerous scientific theories consider this ability to be inseparably connected with language acquisition. But how does meaning originate in the course of information decoding and generation, and does it really arise exclusively in human organisms, only on the basis of or in association with language? The present paper would like to puzzle out a few parts of this riddle. For this, expansions of perspective are necessary: looking beyond the brain, beyond the (expression-generating and -using) organism, and beyond the human species.

1.1 Semantics and meaning

The terms ›semantics‹ (Michel Bréal published his Essai de Sémantique in 1897) and ›semasiology‹ (first used by C. K. Reisig in the 1820s)[1] were introduced into linguistics during the 19th century (Nöth 2000, 159). The scientific examination of semantic aspects of language and of other signs or sign systems is closely related to the development of semiotic concepts in philosophy and linguistics. Accordingly, semantics as a theory of meaning (Nöth 2000, 158) depends on the underlying concepts of meaning – and there are many different ones, not least because they play a more or less important role in numerous disciplines.

1.1.1 Semantic memory and semantic representations

In 1972 Tulving introduced a distinction between episodic and semantic memory. The former includes personally experienced events and is about what happens in certain places at certain times – about the ›what‹, ›where‹ and ›when‹ (Tulving 2006, 52). It has evolved from semantic memory, and the two share numerous characteristics. Tulving (2006, 54) also states that episodic memory represents a relatively recent, late-developing and early-impaired past-oriented memory system with a higher probability of dysfunction. However, he has to admit that the amount of evidence for an episodic memory (Tulving 2006, 59) remains low.

In the semantic system, general facts – knowledge about people, objects, actions, relations, self, and culture acquired through experience (Binder et al 2009, 2767) – are stored; as with the episodic system, no agreement has yet been reached on what and where it actually is. Although the neural systems that store and retrieve semantic information have been studied intensively, “a consensus regarding their identity has not been reached” (Binder et al 2009, 2767).

In the following, in addition to the discrimination of episodic from semantic aspects of memory, the distinction between representations that process information from the particular sensory channels and semantic (conceptual) representations (Leshinskaya & Caramazza 2015, 27) will be discussed.

2. Grounded/embodied and amodal/multimodal cognitive theories

There are two opposing views within cognitive science on how and on what basis meaning, knowledge and, ultimately, cognition develop. The representatives of grounded, embodied or enactive theories challenge the assumption of traditional theories of cognition,

that cognition is computation on amodal symbols in a modular system, independent of the brain’s modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. (Barsalou 2008, 617)

Ward (2015, 267) notes that currently discussed models of knowledge and memory include numerous, differently weighted positions between the two extremes ›fully grounded‹ versus ›amodal/multimodal‹. For example, both the “distributed-only view” (e.g. Damasio 1989) and the “distributed-plus-hub view” (Patterson et al 2007) assume an interaction of semantic (linguistic) and sensorimotor representations, albeit in distinct ways. Recent publications (e.g. Borghesani 2017) no longer present mutually exclusive but rather partly complementary positions. Barsalou, an important representative of grounded theory, describes this development:

Researchers who once denied that the modalities had anything to do with cognition now acknowledge their potential relevance. The empirical evidence that the modalities have something to do with cognition has become compelling. Nevertheless, most researchers in cognitive psychology and cognitive science are not ready to completely abandon traditional theories. One widely held view is that simulations in the modalities play peripheral roles in cognition, while classic operations on amodal symbols still play the central roles. (Barsalou 2008, 631)

Some cognitive scientists accept the idea of discriminating concrete from abstract concepts as a compromise between the two hypotheses (Gallese & Lakoff 2005; Borghesani 2017, 313); Gallese and Lakoff (2005) are convinced that the “sensory-motor system is required for understanding at least concrete concepts”, and they consider this a serious “difficulty for any traditional theory that claims that concrete concepts are modality-neutral and disembodied” (Gallese & Lakoff 2005, 468). Another point of view is that, to the extent that word meaning and sensory-motor experience “rely on the same neural machinery, it is more likely that word meanings are recapitulations of sensory experiences, at some level of description” (Bedny & Caramazza 2011, 81).

In any case, traditional (amodal) approaches to cognition still assume that knowledge is to be found in a “semantic memory system separate from the brain’s modal systems for perception (e.g., vision, audition), action (e.g., movement, proprioception), and introspection (e.g., mental states, affect)” (Barsalou 2008, 618). By contrast, representatives of grounded theories postulate that knowledge is the result of capturing modality-specific states, e.g. through the support of neighbouring memory systems (and not by means of translation into amodal symbols): when an object is perceived, a set of property detectors is activated – e.g. in the visual system – after which connected neurons in an adjacent “association area” interlink the active properties and save them as memories (Barsalou et al 2003, 85).

Later, in the absence of visual input, these conjunctive neurons partially reactivate the original set of feature detectors to represent [for example a] car visually. Such re-enactments or simulations are never complete and might be biased. Nevertheless they provide the cognitive-level representations that support memory, language and thought. (Barsalou et al 2003, 85)
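
To make this capture-and-re-enactment mechanism concrete, here is a minimal toy sketch in Python – not taken from any cited study, and with all class names, property labels and the recall fraction chosen as illustrative assumptions – of how a conjunctive unit might store co-active property detectors and later partially reactivate them:

class ConjunctiveNeuron:
    # Stores a conjunction of co-active property detectors
    # (stand-ins for neurons in an "association area").
    def __init__(self, active_properties):
        self.stored = frozenset(active_properties)

    def reenact(self, cue, recall_fraction=0.7):
        # Partially reactivate the stored set from a single cue.
        # Re-enactments are "never complete and might be biased"
        # (Barsalou et al 2003, 85); deterministic truncation crudely
        # stands in for such a partial, biased simulation.
        if cue not in self.stored:
            return set()
        n = max(1, int(len(self.stored) * recall_fraction))
        return set(sorted(self.stored)[:n])

# Perceiving a car activates visual property detectors ...
memory = ConjunctiveNeuron({"red", "metallic", "four-wheeled", "moving"})
# ... later, without visual input, a cue triggers a partial simulation.
print(memory.reenact("red"))  # e.g. {'four-wheeled', 'metallic'}

The sketch only illustrates the logic of the quoted passage; nothing in it is meant as a claim about actual neural implementation.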

Both approaches – across the differences regarding theoretical details and gradual positioning – are confronted with difficulties in justifying their points of view. Regarding grounded theories, the most urgent one is the lack of explanations of how abstract concepts emerge. Can this be done by simulation alone? In turn, traditional cognitive theories are notably confronted with the “Symbol Grounding Problem”. Symbols need to be grounded – Harnad (1990) posits that, assuming a complete separation between a central symbolic system and peripheral input/output systems, the formation of semantic knowledge cannot be explained. Some kind of interaction between the input and output systems and the symbolic system appears to be necessary. It is still an open question how this could be implemented (Borghesani 2017, 36). Following this, it is demanded that, regardless of the fact that models operating with amodal symbols are able to mimic cognitive abilities, traditional approaches have to verify how “computation on amodal symbols constitutes the underlying mechanism. Furthermore, amodal symbols must be localized in the brain, and neural principles for processing them explained” (Barsalou 2008, 631f).[2]

3. Not at all an “utterly mysterious” process

Welzer and Markowitsch (2006, 10) point out that – even if the neurosciences do not actually talk about mind, consciousness or memory, but about information processed in the neural networks of the brain – human brains do not only process information in the sense of reaction-inducing sensory stimuli but principally meaningful conceptions. They postulate that the ability to give meaning to a perception is unique to human beings: between the immediate sequence of stimulus and reaction, impulse and action, they presume a process of interpretation which allows an optimized exploitation of the given possibilities of action. But how and where does the “process of interpretation” addressed by Welzer and Markowitsch occur in the brain? In his popular-science book “Reading in the Brain” (2009), the French neuropsychologist Stanislas Dehaene declares that “although researchers have managed to map several of the relevant brain areas, how meaning is actually coded in the cortex remains a frustrating issue. The process that allows our neuronal networks to snap together and ‘make sense’ remains utterly mysterious” (Dehaene 2009, 111). Dehaene refers to the activity of reading and thus to the meaning of written and spoken linguistic signs. However, experimental arrangements show matching neural patterns regardless of whether linguistic signs or images are used as activation stimuli (Fairhall & Caramazza 2013; Binder et al 2009; Devereux et al 2013; Kumar et al 2017).

A much-quoted meta-analysis of 120 studies (Binder et al 2009) focused on PET and fMRI studies in which exclusively spoken or written words were used as stimuli, in order to mark out the system of brain areas that accesses meaning specifically through words. The outcome of this evaluation was a network distributed over almost the entire cortex. The neural systems suitable for the storage and retrieval of so-called semantic knowledge are thus “widespread and occupy a large proportion of the cortex in the human brain” (Binder et al 2009, 2783). Kumar et al (2017, 423) point to the fact that studies using pictorial stimuli reveal a similarly distributed network of brain regions, whose role in the generation of meaning remains largely undefined. Overall, there is little evidence at the neural level that would make either a fundamental distinction between modal, multimodal and semantic processes with respect to their contribution to conceptualization, or the presumption of a simulation mode in combination with an association area[3] in which memory is first created and saved (Barsalou et al 2003, 85), an obvious or even the only feasible explanation. To cite the study of Kumar et al (2017, 429) again: “It may be that the dichotomy that is sometimes assumed between semantic and visual information is at least partly artificial.”

It is commonly accepted that perception develops in stages, and thus hierarchically, as much as in parallel processes (Jeon 2014, 1; Matusz et al 2017). Based on these findings, the following chapters will introduce a model that renounces the assumptions a) of a principal difference in neurobiological procedures between modal and plurimodal perception on the one hand and semantic processes on the other, b) of amodal symbols, and c) that the generation of meaning rests upon simulation.[4]

3.1 Different levels of object systems

Initially, the differentiation between semantic (linguistic) representations and representations from the modalities has to be questioned. It is – of course – not in question that there are partly diverging neural paths (or areas) for the perception of a word, a picture or the corresponding object; a fact which is the basis of selective impairments, first noticed by Warrington (1975). But it is in question why the perception of a word (strings of letters, phonetic features, gestures) is referred to as something other than modal, whereas the perceptions of pictures or “things” are termed ›modal‹. In this context, the necessity and reasonableness of the conceptual differentiation between concept and percept must also be reconsidered. In any case, many scientists no longer understand conceptual knowledge as exclusively language-based (Nelson 2008; Barsalou 2017; Kumar et al 2017). Neural patterns that show great agreement regardless of whether they are triggered by words or by corresponding objects or images of objects (Binder et al 2009, 2767; Fairhall & Caramazza 2013, 10552; Devereux et al 2013; Kumar et al 2017) speak in favour of giving up this distinction.

Indeed, EMBE moves one step beyond this and pleads for a more radical extension of the term ›concept‹ – namely its equation with ›sensation including computation‹ (as far as we know, it will be difficult to prove that there is any sensation without computation): knowledge or meaning is always generated by sensory, visceral, motor, hormonal etc. processes; it takes place within modal sensation as well as across modalities and is not dependent on apperception. Therefore, regarding knowledge or meaning, it is proposed not to focus on a distinction between semantic, perceptual or sensorimotor representations, but between a first-level object system and higher-level symbol systems[5]. Higher-level object systems are, in simple terms, systems starting with the linguistic sign system, whereas the first-level object system is in principle and mainly pre- or extralinguistic; its source is everything perceptible to an organism, together with the specific relations between these percepts (this may – even if it sounds paradoxical at first – include language). In language-enabled homo sapiens the two levels intertwine seamlessly and work together in support of cognitive results.
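
The two-level proposal can be illustrated with a small, purely hypothetical data-structure sketch in Python; the class names and the example percepts are my own assumptions, not part of EMBE’s terminology:

from dataclasses import dataclass

@dataclass
class FirstLevelPercept:
    # Anything perceptible to the organism: visual, auditory, visceral, motor ...
    modality: str
    content: str

@dataclass
class HigherLevelSign:
    # A linguistic sign grounded in interlinked first-level percepts.
    word: str
    grounding: list  # list of FirstLevelPercept

car_sign = HigherLevelSign("car", [
    FirstLevelPercept("visual", "red metallic shape"),
    FirstLevelPercept("auditory", "engine hum"),
    FirstLevelPercept("motor", "steering movements"),
])

# The spoken word itself is also a first-level percept (auditory) – the
# seemingly paradoxical point made above: language can be part of the
# first level's source.
word_as_percept = FirstLevelPercept("auditory", "sound of the word 'car'")
print(car_sign.word, "->", [p.content for p in car_sign.grounding])

The sketch deliberately lets higher-level elements point back into the first level, mirroring the claim that the two levels intertwine seamlessly.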

3.1.1 Elements of the first level

The first-level object system consists entirely of what an organism is capable of perceiving inside and outside itself and of what is interlinked on the basis of temporal and/or spatial proximity. On this view, even organisms without the ability to create new elements (e.g. linguistic signs) generate meaning, insofar as the generation of meaning is understood as the level-internal and hierarchical computing of symbols into more complex and most complex symbols, including not only perceptual, sensory and motor but also hormonal processes of and between organisms as well as between organisms and other entities. Recognizing an object visually is already the result of many processes of meaning generation. This applies analogously to all perceptual modalities. Thinking, too, is based on the tendency of cells[6] to come into contact with one another (or with suitable partner entities) – intra-modally and cross-modally, intra- and extra-organically – and thus to provide the basis for generating clusters with ever more complex properties (meanings).
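
As a toy illustration of such level-internal, hierarchical computing, the following Python sketch fuses percepts into more complex symbols whenever they fall within a narrow time and space window; the thresholds and labels are illustrative assumptions only:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Symbol:
    label: str
    parts: tuple = ()          # sub-symbols this symbol is composed of
    time: float = 0.0          # rough time stamp of the percept
    place: tuple = (0.0, 0.0)  # rough location of the percept

def interlink(a: Symbol, b: Symbol, dt_max=1.0, dx_max=1.0) -> Optional[Symbol]:
    # Fuse two symbols into a more complex one if they are close enough
    # in time and space; otherwise leave them unrelated.
    close_in_time = abs(a.time - b.time) <= dt_max
    close_in_space = all(abs(p - q) <= dx_max for p, q in zip(a.place, b.place))
    if close_in_time and close_in_space:
        return Symbol(f"({a.label}+{b.label})", (a, b),
                      (a.time + b.time) / 2, a.place)
    return None

edge = Symbol("edge-contour", time=0.10, place=(0.0, 0.0))
colour = Symbol("surface-colour", time=0.12, place=(0.0, 0.1))
shape = interlink(edge, colour)             # close together -> fused symbol
far = Symbol("distant-sound", time=9.0, place=(50.0, 50.0))
print(shape.label if shape else "no link")  # (edge-contour+surface-colour)
print(interlink(shape, far))                # too far apart -> None

Because fused symbols are themselves symbols, interlink can be applied again to its own outputs – which is the hierarchical point: clusters of clusters with ever more complex properties.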

It should be pointed out that the first-level sign system is by no means “mentalese” (Margolis & Laurence 2015, 119f; Thomas 2014) or a similar language of thought. The “Language of Thought Hypothesis” assumes that thinking is carried out in a special mental language (Harnad 1990; Zalta et al 2015). By contrast, EMBE considers human language systems as one constituent part, and all non-linguistic objects or object systems (always physical elements, whether mental or non-mental) as the other constituent part of human cognitive performance.

Meaning is the perception of properties, and all perception is always perception of a property. The most fundamental perception possible is: there is something – the sensation of an element that is not (yet) or not (yet) clearly determinable at the highest grades of consciousness (which may be the lowest grade of consciousness). At least two entities are always involved[7] in the most basic perception just described: the perceived, an external or internal physical entity, and the perceiving one, always an internal, likewise physical entity.[8] Neither consciousness in the narrower or wider sense nor a brain or a nervous system is needed for this basal process. The most fundamental property – being perceived – is of course the prerequisite for the acquisition of additional properties (further meanings). By way of example, the – already conscious and explicit – perception ›solid-bodied‹ can, if perceived within a narrow time and space frame, act as a further characteristic of ›green‹, and vice versa. Complexity and richness of meaning increase with more, and more complex, properties perceived. More differentiated sensation systems and higher synchronization between different modalities equate to more complex combinations of properties, up to the emergence of meaning in its everyday sense of having an idea of something and knowing what and wherefore something is. Higher levels of synchronization between the different modalities were facilitated by the development of nervous systems (Keijzer, van Duijn & Lyon 2013) and of nervous systems with a central switch point.
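
A crude co-occurrence sketch in Python (Hebbian in spirit; the window handling and property names are illustrative assumptions) may help to show how one property can come to act as a further characteristic of another:

from collections import Counter, defaultdict
from itertools import combinations

cooccurrence = defaultdict(Counter)

def perceive(properties, within_window=True):
    # Register a set of co-perceived properties; if they fall within a
    # narrow time/space window, strengthen their mutual associations.
    if not within_window:
        return
    for a, b in combinations(sorted(properties), 2):
        cooccurrence[a][b] += 1
        cooccurrence[b][a] += 1

# Repeated experience of green, solid-bodied, rough things ...
for _ in range(5):
    perceive({"green", "solid-bodied", "rough"})

# ... lets each property evoke the others as acquired characteristics.
print(cooccurrence["green"].most_common())  # both partners, 5 times each

Nothing here is meant as a model of neural tissue; it merely restates the paragraph’s claim – properties perceived together within a narrow frame become characteristics of each other – in executable form.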

Human beings, like all other metazoans, are living entities made up of numerous smaller organisms. For the individual or subordinate, interlinked organisms of metazoans, depending on the given hierarchical stage, different elements are perceptible and thus meaningful with respect to a potentially essential reaction. It would be very counterproductive if all these reactions were to take place involving consciousness. Accordingly, EMBE considers conscious perception as imposed by the highest hierarchical stage of the neurological system of an organism (which may be a very low hierarchical stage, measured by human standards), which is able to provide perceptual contents in a form sufficiently coordinated and compact to serve as a basis for decisions that empower the organism in its entirety to act.

[...]


[1] For different denotations of ›semiology‹ and ›semantics‹ see Nöth 2000, 158–160.

[2] Dehaene 2009, 111.

[3] Please note: EMBE does not contest the existence of association areas (on the contrary: probably every area in the brain is an association area) but the existence of simulation modes.

[4] The terms ›object‹, ›symbol‹, ›sign‹, ›entity‹, ›element‹ and ›particle‹ are used synonymously within EMBE.

[5] Higher-level signs can be understood as ›semantic‹ in the sense that they are predominantly based on word networks/word meaning.

[6] According to EMBE, organic cells are types of objects.

[7] This is a shortened account to serve theoretical needs – it should be remembered that the minimum of two elements again consists of objects of subordinate hierarchies.

[8] In many cases, the positions ›perceptible‹ and ›perceptive‹ are of course interchangeable or perspective-dependent.
