Excerpt
Contents
1 INTRODUCTION
2 FUNDAMENTAL ASPECTS ABOUT AI TODAY
2.1 General
2.2 Development of artificial intelligence
2.3 Symbolic and connectionist paradigm
3 ANTHROPOMORPHISM IN ARTIFICIAL INTELLIGENCE
3.1 A brief psychological theory on anthropomorphism
3.2 The use and possible reasons for anthropomorphism in artificial intelligence
3.3 Beyond human mind
4 INTERIM CONCLUSION
5 TURING AND ANTHROPOMORPHISM
5.1 The "imitation game"
5.2 Anthropomorphism used by Turing
5.3 Discussion of the Turing Test
6 PRE-HISTORY OF AI
7 CONCLUSION
1 Introduction
Artificial intelligence (AI) is one of the youngest scientific fields, emerging in the middle of the 20th century with the goal of creating intelligent entities (Russell & Norvig, 2010). Nonetheless, the intellectual roots of the field reach much further back, since the history of humankind - of Homo sapiens - has always been shaped by the goal of understanding what intelligence is. AI is therefore a highly interdisciplinary field of science, drawing on engineering, philosophy, mathematics and logic, psychology and other natural sciences. The high potential for controversy in such an interdisciplinary field already becomes obvious from the fact that there is no common definition of what intelligence is (Boden, 1981). Is intelligence something that goes beyond the natural, materialistic world and is "human-exclusive"? Or can it be reached by technically reproducing the human brain and its cognition?
An increasingly discussed problem that lies in such questions, and even in the term "artificial intelligence" itself, is the issue of anthropomorphizing AI. This often leads to a distorted perception of AI among laymen as well as researchers and professionals, resulting in ethical and epistemological problems (Salles, Evers, & Farisco, 2020). The term "artificial intelligence" can be traced back to John McCarthy, who used it as the name for a conference in Dartmouth (Pallay, 2020) instead of speaking of cybernetics, which was then the more frequently used term for self-regulatory systems (Zimmerli & Wolf, 1994). Since most people perceive intelligence as something intrinsically human, the tendency to anthropomorphize in the field of AI can thus already be found at the very beginning of the discipline.
But even before the term "artificial intelligence" was used for the discipline, Alan Turing published a seminal work on the question "can machines think?", which can be seen as one of the most influential works in the history of AI, anticipating several developments years in advance (Turing, 1950; Muggleton, 2014). Nevertheless, Turing has been criticized for anthropomorphizing in his descriptions of intelligent machines and their possibilities (Proudfoot, 1999).
This essay discusses the question of whether or not Alan Turing's concept of the "imitation game" enhanced, and still enhances, the tendency toward anthropomorphism in the field of AI - and if so, to what extent. To this end, the essay attempts a critical explanation of the concept of anthropomorphism in the field of AI from a historical point of view. First, a brief summary of general aspects of AI will be given, with a focus on the development of AI since Alan Turing's work on intelligent machinery. Afterwards, the concept of anthropomorphism will be briefly described, including a psychological point of view. Based on that, examples of anthropomorphism in AI will be analyzed within the context of the previously described development of the discipline, focusing specifically on those aspects and concepts in the development of AI that have particularly fostered anthropomorphisms.
In this context, Alan Turing's "imitation game" will be discussed, focusing on the question of whether the way the test is designed reinforces anthropomorphism in AI and what impact it had on the further development of the discipline. Finally, a brief account of the pre-history of AI will offer a possible explanation of the historical basis on which Turing designed his test and why it has possibly been misinterpreted in some cases.
In this essay, therefore, the concept of anthropomorphism itself is the subject of study, not its consequences.
2 Fundamental aspects about AI today
2.1 General
Before focusing on anthropomorphism in AI, it is crucial to mention some basic aspects of the field in order to better understand the state of the art and how artificial intelligence works today. The first problem that needs to be addressed is the lack of a definition of what intelligence in general, and artificial intelligence in particular, is. Most books and AI researchers characterize AI as the science of making machines perform tasks that could previously only be carried out by human beings (Bolander, 2019; Boden, 1981). When using this characterization, one should be very critical of the human-centric view of intelligence that it carries. Russell and Norvig therefore separate the development of artificial intelligence into a human-centered and a rational strand (Russell & Norvig, 2010). It is important to mention at this point that AI is not the science of computers but rather of computer programs (Boden, 1981).
Currently, AI programs are highly specialized: most programs are only capable of solving single tasks and tend to fail when addressing multi-dimensional problems. Nonetheless, AI often outperforms humans on the problems it is capable of solving (Bolander, 2019). But what kind of problems are these? Some AI researchers reduce this question to a simple rule of thumb: the easier a problem is for humans, the harder it is for AI programs to solve. More precisely, this means that a problem must be clearly definable. Human and artificial intelligence are therefore complements at the moment (Bolander, 2019). At this point it is important to mention the concepts of strong and weak AI. Strong AI is based on the assumption that machines can have a mind of their own, while weak AI holds that machines can only simulate real intelligence (Kaplan, 2017). Recent trends in AI focus on developing artificial general intelligence through deep neural networks, a technique that imitates the basic principle of neurons and can thereby simulate a learning process known as machine learning (Bolander, 2019). This approach to creating AI is called the connectionist approach. Another approach, which was the main focus at the beginning of the development of AI, is the symbolic or cognitive approach (Zimmerli & Wolf, 1994; Bolander, 2019). How these two approaches developed over the history of AI will be described in the following section, in order to show what influence these different paradigms have on anthropomorphism in AI.
2.2 Development of artificial intelligence
Zimmerli and Wolf split the history of artificial intelligence into a discussion history and a pre-history. They regard Turing's paper "Computing Machinery and Intelligence" as the beginning of the philosophical discussion on AI (Zimmerli & Wolf, 1994). The development of the field since then will be described in this part of the essay. As previously mentioned, the field of AI can be split into two basic paradigms: cognitivism or symbolic AI, and connectionism or sub-symbolic AI. In his paper, Turing himself already described the different possible directions along which AI could be developed. Even though the basis for the connectionist paradigm was laid very early in the development of AI with the Hebbian theory and the Rosenblatt perceptron, research initially focused on the symbolic paradigm (Zimmerli & Wolf, 1994). As mentioned before, the AI era began with the Dartmouth conference. At this early stage, research focused on learning about cognitive processes and using the processing methods of digital computers to make assumptions about what human thinking is and how it works (Zimmerli & Wolf, 1994). The basic assumption of this approach is that thinking is the manipulation of symbols; complex patterns of thought and action can therefore be reduced to simple mechanisms. It follows that the method of this paradigm is rather top-down, developing static programs out of observed cognitive rules that can be used to solve different problems. One of the most important approaches of this time was pursued by Newell and Simon, who developed the General Problem Solver. For the development of the program, people's problem-solving strategies were observed and formalized with the goal of finding general patterns that could be used to solve different tasks with one program (Zimmerli & Wolf, 1994). Based on that work, Newell and Simon formulated the physical symbol system hypothesis: "a physical symbol system has the necessary and sufficient means for general intelligent action [...] by 'general intelligent action' we wish to indicate the same scope of intelligence as we see in human action" (Newell & Simon, 1976). This means that symbol manipulation is the necessary basis for all kinds of intelligent thinking and, as a result, machines can reach human-level intelligence, because a symbol system is a sufficient means to reach intelligence (Pallay, 2020).
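To make this symbolic, top-down style concrete, the following is a minimal sketch of rule-based symbol manipulation. The kinship domain, the fact encoding and all names are illustrative assumptions made for this essay, not an example taken from Newell and Simon's work.

```python
# A hand-written rule operating on explicit symbols: the behavior comes
# entirely from the rule the programmer encoded, not from data.

# Facts as (parent, child) symbol pairs - an assumed toy knowledge base.
parents = {("alice", "bob"), ("bob", "carol")}

def derive_grandparents(parents):
    """Apply one fixed rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)."""
    return {(x, z) for (x, y) in parents for (y2, z) in parents if y == y2}

print(derive_grandparents(parents))  # {('alice', 'carol')}
```

The rigidity of such programs - every generalization must be anticipated and encoded by hand - points toward exactly the limitation that the General Problem Solver ran into, as described next.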
Further development of the General Problem Solver, however, pushed against the limits of this approach, because it was not possible to show that the program was really capable of solving different kinds of problems (Zimmerli & Wolf, 1994).
That is why, in the 1980s, a paradigm shift from the symbolic to the connectionist paradigm took place. The Rosenblatt perceptron, whose development had stagnated since the 1960s, was developed further, and multi-layer neural networks became the preferred way of programming AI. The novelty of this approach compared to the symbolic one is that the algorithms allow machine learning based on statistics and reinforcement (Zimmerli & Wolf, 1994; Russell & Norvig, 2010).
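The contrast with the symbolic sketch above can be illustrated by the classic perceptron learning rule: instead of a rule being encoded by hand, the program adjusts numeric weights from labeled examples. This is a minimal sketch of the Rosenblatt-style update; the logical-AND task, the learning rate and all names are illustrative assumptions, not material from the essay's sources.

```python
# A single artificial neuron that learns its weights from examples
# instead of having its behavior written down as explicit rules.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Rosenblatt-style learning: nudge the weights whenever the output is wrong."""
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Step activation over the weighted sum of the inputs.
            output = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = target - output
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Logical AND: a linearly separable toy problem a single perceptron can learn.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
print(train_perceptron(samples, labels))  # learned weights and bias, roughly ([0.2, 0.1], -0.2)
```

The learned parameters are statistical artifacts rather than readable rules, which is why this paradigm trades the explainability of symbolic programs for flexibility, as section 2.3 discusses.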
Both paradigms were foreseen by Alan Turing, who in the last chapter of his paper "Computing Machinery and Intelligence" describes different strategies for achieving learning machines: through programming, which is rather top-down and can be compared to the symbolic approach, and through a learning child machine, which can be assigned to the connectionist paradigm (Turing, 1950; Muggleton, 2014).
By analyzing the historical development of AI, we can see the principal assumptions on which it is based. This makes it easier to understand why anthropomorphism has always accompanied the research field and how deeply it is connected to the field's structure of thinking.
2.3 Symbolic and connectionist paradigm
In this chapter, the present state of the differences between connectionism and the symbolic approach will be outlined in more detail. Bolander describes the difference as follows: connectionism mimics neural structures and processes (for example, neurons), while symbolic AI follows an abstract model of human problem solving. The top-down symbolic approach is meant to simulate the highest level of human cognition and has the advantage of predictability and explainability, while it is generally limited to solving single problems with low flexibility. Connectionist AI is based on machine learning from experience; this learning is not fully predictable because it is based on statistics. As both approaches complement each other in their strengths and weaknesses, current AI research is searching for a coupled approach (Bolander, 2019). This third possibility, a coupled approach to creating artificial intelligence, was also already proposed by Alan Turing, with a program using logic, probabilities and learning (Muggleton, 2014).
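How such a coupling might look in miniature is sketched below: a learned, statistical score is combined with an explicit, hand-written rule. The message-filtering framing, the feature values and all names are illustrative assumptions for this essay, not a reconstruction of Turing's or Bolander's proposals.

```python
# Connectionist part: a stand-in for a trained model, reduced here to a
# weighted sum over numeric features (the weights would come from training).
def learned_score(features, weights):
    return sum(w * f for w, f in zip(weights, features))

# Symbolic part: an explicit rule layered on top of the statistical score,
# giving the predictable, explainable behavior the symbolic paradigm offers.
def decide(score, sender_known):
    if sender_known:  # hand-written knowledge overrides the learned estimate
        return "inbox"
    return "spam" if score > 0.5 else "inbox"

score = learned_score([1.0, 0.0], [0.7, 0.2])  # assumed features and weights
print(decide(score, sender_known=False))  # -> spam (score 0.7 exceeds 0.5)
```

The division of labor mirrors the complementarity described above: the learned component supplies flexibility, while the symbolic layer keeps the final decision inspectable.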
It becomes clear that even decades after their formulation, Alan Turing's ideas have had a big influence on the development paths of AI (Muggleton, 2014). But does this also apply to the discipline's tendency toward anthropomorphism? To analyze this question further, the next chapter presents how anthropomorphism is used in AI research and gives some theoretical explanations of why it is used.
3 Anthropomorphism in Artificial Intelligence
3.1 A brief psychological theory on anthropomorphism
According to the Oxford dictionary, anthropomorphism is "the practice of treating gods, animals or objects as if they had human qualities" (Oxford, 2020). In an article on anthropomorphism, Epley and colleagues trace the likelihood of anthropomorphism back to three psychological determinants: (1) the accessibility and applicability of anthropocentric knowledge, (2) the motivation to understand and explain the behavior of other agents, and (3) the desire for social contact and affiliation (Epley, Waytz, & Cacioppo, 2007). All three determinants can arguably be observed within the field of AI, especially in the public sphere. This is a result of intended anthropomorphism but also of poor scientific communication (Salles, Evers, & Farisco, 2020). This essay will focus on professionals' and AI researchers' tendency to use anthropomorphisms, since this is where the structural embeddedness of the tendency lies, which is then partly reflected by public actors. In the case of developers and researchers, the tendency to anthropomorphize can be traced back to the psychological determinant of the motivation to understand and explain behavior (Salles, Evers, & Farisco, 2020).
[...]
Quote paper: Tim Mauch (Author), 2020, Artificial Intelligence and Anthropomorphism. Does Alan Turings Imitation Game Enhance Anthropomorphism in AI Research?, Munich, GRIN Verlag, https://www.grin.com/document/1006412