Future of Work. How Artificial Intelligence will impact the Future Workplace

Perceptions of SMEs and a Roadmap to successful Implementation

Master's Thesis, 2019

111 Pages, Grade: 1.1


Table of Contents

II. List of Illustrations

III. List of Tables

IV. List of Figures

V. List of Abbreviations

1 Introduction
1.1 Artificial Intelligence - Why Now?
1.2 Objective and Research Questions
1.3 Structure and Methodological Approach

2 Drivers and Megatrends of the Fourth Industrial Revolution and Future Workplaces

3 Artificial Intelligence
3.1 Definition Approaches of Artificial Intelligence
3.2 History of Artificial Intelligence
3.3 Artificial Narrow, General and Super Intelligence
3.4 AI Capabilities and its Sub-Technologies
3.4.1 Capabilities of Artificial Intelligence
3.4.2 AI Sub-Technologies
3.5 Excursus: Machine Learning, Neural Networks and Deep Learning
3.5.1 AI, ML, NN and DL in Context
3.5.2 Machine Learning - Origins and Definition
3.5.3 Machine Learning Types
3.5.4 Neural Networks and Deep Learning
3.6 AI Capabilities - Business Application Domains
3.7 Use Cases of Artificial Intelligence

4 Artificial Intelligence and Future of Work
4.1 Employment and Technology: From History to Today
4.2 Perceptions of Artificial Intelligence
4.3 Augmentation vs. Automation
4.4 Employee Tasks
4.5 Jobs
4.5.1 Stable, new and redundant Jobs
4.5.2 Classification of Jobs - Five Ways of Stepping
4.5.3 New Jobs due to AI: The Missing Middle
4.6 Employee Skills
4.6.1 General Skill Changes
4.6.2 Skills related to AI
4.6.3 How to overcome Skill Shortages
4.7 Pros and Cons of new Technologies and Labour Market Effects

5 Empirical Research - Perceptions of SMEs in Saarland
5.1 Methodology and Method
5.2 Sampling Method
5.3 Purpose, Design and Structure of the Questionnaire
5.4 Hypotheses
5.5 Survey Results
5.5.1 General Survey and Interviewee Information
5.5.2 Sample Characteristics
5.5.3 Hypothesis 1
5.5.4 Hypothesis 2
5.5.5 Hypothesis 3
5.5.6 Hypothesis 4
5.5.7 Hypothesis 5
5.5.8 Hypothesis 6
5.6 Summary of Findings

6 Recommendations for Companies
6.1 Technology-Roadmap
6.2 People-Roadmap

7 Conclusion and Outlook
7.1 Conclusion
7.2 Outlook for Future Research

List of References

Appendix A: AI Knowledge Map

Appendix B: Application of Five Ways of Stepping Framework

Appendix C: Overview of Interview Partners

Appendix D: Exemplary Fields of AI Application (Questionnaire)

Appendix E: Questionnaire

Appendix F: Individual Ranking of Skills

Appendix G: Artificial Intelligence Implementation Canvas

Appendix H: Employee Persona


This master's thesis gives an overview of the topic of Artificial Intelligence (AI) and analyses its effects on jobs and the related employee tasks and skills. The empirical research examines the perceptions of AI in SMEs in Saarland, focusing on their actual use of AI and their opinion on how AI is changing the workplace.

Qualitative interviews with SMEs in Saarland were compared to existing surveys concerning AI. In total, 20 interviews were conducted, and the data from the surveys was analysed with an explanatory methodology as a confirmatory research approach.

The results of the empirical research show a lack of knowledge both in the organizational implementation of AI and in the evaluation of existing job profiles impacted by AI. A two-pillar roadmap, consisting of Technology and People, is developed by taking existing evaluation models into account and by developing a dedicated AI Implementation Canvas.

This thesis is a contribution to the currently rarely discussed topic of AI in Saarland.

II. List of Illustrations

Illustration 1: Artificial Intelligence - Why now?

Illustration 2: AI Practitioners in Germany - Comparison of Federal States

Illustration 3: Future of Work Megatrends

Illustration 4: The Turing Test

Illustration 5: Major Milestones in AI Development

Illustration 6: Stages of Artificial Intelligence

Illustration 7: AI Capabilities

Illustration 8: Overview of AI Sub-Technologies

Illustration 9: AI, ML, NNs, DL

Illustration 10: Traditional Programs compared to ML

Illustration 11: Overview of ML Types

Illustration 12: Single NN vs. Deep Learning NN

Illustration 13: Three Eras of Automation

Illustration 14: Automation vs. Augmentation

Illustration 15: Task Categorization

Illustration 16: Five Ways of Stepping by Davenport and Kirby

Illustration 17: The Missing Middle - Human + Machine

Illustration 18: Overview of Job Profiles - Trainers, Explainers, Sustainers

Illustration 19: Amplification, Interaction, Embodiment

Illustration 20: "Fusion Skills" according to Daugherty and Wilson

Illustration 21: Potential Actions for the Future Workforce

Illustration 22: Summary of Own Empirical Research

Illustration 23: Artificial Intelligence Implementation Roadmap - Overview

Illustration 24: Technology-Roadmap - Overview

Illustration 25: AI Implementation Canvas - Overview

Illustration 26: People-Roadmap - Overview

III. List of Tables

Table 1: Categorization of AI Definitions

Table 2: Application Examples of Supervised Learning

Table 3: Learning Methods with Respective Models

Table 4: Examples of Stable, New and Redundant Jobs across Industries

Table 5: Ranking of Skills according to WEF Survey 2018

Table 6: Working Hour Changes across Skills according to McKinsey Survey 2018

Table 7: Advantages and Disadvantages of AI in the Labour Market

Table 8: Hypotheses of Qualitative Research with respective Survey Questions

Table 9: General Information of Interviewees

Table 10: Industries of Interviewees

Table 11: Clustering of most threatened Jobs according to empirical Research

Table 12: What Skills will People need in the Future to be successful in their Careers?

Table 13: Ranking of Skills - Now and Future

Table 14: Comparison of Skill Perceptions to initial Survey

Table 15: Five-Level Autonomy Classification of Jobs

IV. List of Figures

Figure 1: Primary AI Benefits for Companies

Figure 2: Ratio of Human-Machine Working Hours, 2018 vs. 2022 (projected)

Figure 3: Change of Jobs - 2018 vs. 2022 (projected)

Figure 4: Job Positions of Interviewees

Figure 5: Interviewees' Knowledge of Artificial Intelligence

Figure 6: AI Investments

Figure 7: Perceptions of AI

Figure 8: Opinions about Artificial Super Intelligence (ASI)

Figure 9: Comparison of Own Research with MIT Research on Super Intelligence

Figure 10: Anxiety of losing Job due to AI

Figure 11: Comparison of Anxiety to lose Job due to AI with initial Survey

Figure 12: Empirical Research concerning the Change in Type of Work

Figure 13: Comparison of Surveys concerning the Effect of AI on Employees

Figure 14: Survey Comparison - "How comfortable would you be working with/among robots?"

Figure 15: Comparison of Own Research to Initial Survey: Human-Machine Interaction in 2030

V. List of Abbreviations

Figure not included in this excerpt

1 Introduction

1.1 Artificial Intelligence - Why Now?

Artificial Intelligence (AI) is not a new topic and has been researched since the 1940s. However, due to technological improvements like increased computing power at reduced costs and the emergence of Big Data, it is now the most promising technology driving digitalization (see Reiss 2019, Web and see Ried 2017, Web, p. 7).

Illustration 1: Artificial Intelligence - Why now?

Figure not included in this excerpt

Source: See Ried 2017, Web, p. 7.

One could argue that AI offers benefits by increasing productivity and improving efficiency. On the other hand, employees are afraid of being replaced by intelligent systems. What are AI's current capabilities? Should workers fear being replaced by AI?

The large majority believes that short-term replacement will take place in routine-based and manual work activities. According to current perceptions, tasks that require creativity and empathy still require human labour and are therefore not replaceable by machines.

AIVA Technologies has proven the opposite. The Luxembourg-based start-up is one of the leading AI music composition companies with its Deep Learning (DL) system "Aiva", i.e. Artificial Intelligence Virtual Artist. Using a reinforcement learning approach, the deep Neural Network (NN) reads databases of classical music by composers like Bach or Mozart and captures their musical concepts. Based on these scores, Aiva learns its own musical concepts. According to the development team, Turing tests were conducted with professionals, and none were able to tell the difference between Aiva and human composers (see Kaleagasi 2017, Web).

This is one of various AI examples already established on the market. What was considered unimaginable a few years ago has now been developed. What does that mean for the future workforce? To what extent are companies aware of AI's expansion into creative, human-centred labour?

AI Practices in Saarland - an unknown Research Field

In cooperation with the Federal Ministry for Economic Affairs and Energy in Germany, the Federal Ministry of Education and Research set up an AI practitioners map of Germany. As seen below, at the beginning of this thesis Saarland was one of four federal states without any listed AI practitioners; by June 2019, one AI practitioner had been added to the database. Data for this map has been made available by members of "Plattform Lernende Systeme", thus data might be missing.

Illustration 2: AI Practitioners in Germany - Comparison of Federal States

Figure not included in this excerpt

Source: Own illustration, see Lernende Systeme 2018, Web.

This raises the questions of whether companies in Saarland really use AI systems that little and how they perceive this technology - the starting point for carrying out the empirical research.

1.2 Objective and Research Questions

This thesis' objective is to examine AI from both a technical and a business perspective, to identify AI's impact on the future workplace in terms of skills, jobs and tasks, and to gather opinions on these topics from the viewpoint of SMEs in Saarland.

The following research questions are at the core of this master's thesis:

- How will AI impact the future workplace?
- Is there a reason for the limited application of AI in Saarland, as seen in Illustration 2?
- How do SMEs in Saarland perceive AI from a practical point of view, and how prepared are their employees?

1.3 Structure and Methodological Approach

Subdivided into seven chapters, the first and second give an introduction by showing the topic's relevance and further explaining the goal, the methodological approach and the respective research questions. Chapter three explains AI from a technical and a business-related perspective - starting with the technical one, a deep dive into the most promising sub-technology, Machine Learning (ML), is given. To shift the perspective from a technical to a more business-related lens, an AI clustering by Davenport, President's Distinguished Professor of Information Technology and Management at Babson College, is used to illustrate AI use cases. The fourth chapter starts with a historical consideration of the impact of computerization on jobs and a differentiation between automation and augmentation. It then combines AI with the future of work by looking at the perceptions of organizations and breaking down the impacts of AI on tasks, jobs and skills.

The thesis' empirical part is covered by chapter five, presenting the results of the qualitative survey conducted with SMEs in Saarland. Semi-structured interviews were conducted with a confirmatory approach, and their results were compared to existing surveys. Chapter five ends with a summary of findings that leads directly into chapter six.

The penultimate chapter gives recommendations in the form of a self-created roadmap on how to successfully approach AI in an organization, considering both the technology and the people perspective. Chapter seven summarizes the most essential findings and gives an outlook for the future workplace.

2 Drivers and Megatrends of the Fourth Industrial Revolution and Future Workplaces

Before understanding Artificial Intelligence and its impact on future workplaces, current drivers of the fourth industrial revolution and general future of work megatrends are briefly outlined.

Drivers of the Fourth Industrial Revolution

According to Leonhard, a world-renowned futurist and author, three movements have shaped the Fourth Industrial Revolution within the last decade. The key terms are exponential, combinatorial and recursive (see Leonhard 2017, p. 4). All three drivers are either pushed by AI or intelligent systems have benefited from them.

Exponential growth was first elaborated in 1965 by Gordon Moore, co-founder of Intel. Known as Moore's Law, he stated that computer processor power doubles every 24 months while costs are cut in half (see IEEE Spectrum 2015, Web and see Moore 1965, p. 115). This doubling effect is still valid and has enabled increasing computer efficiency at an unforeseen pace. To give an example, according to Bokor, professor at UC Berkeley, the processing power of Apple's iPhone 6 is already roughly one million times higher than that of a 1975 IBM computer, which required an entire room for its hardware (see Cheng 2015, Web). Exponential growth has reached a high development status in many science and technology fields, thus having significant societal and economic impact. The impact is positive in terms of abundant processor power, which is highly beneficial for further AI development that was previously hindered by lacking computer capacity. It is questionable in terms of whether humans can keep up with such rapid changes: while people tend to think in linear or at most gradual patterns, enormous cognitive challenges will be faced when working with new technologies (see S L 2015, Web). The exponential growth of computer processors is the essential basis for AI. It is predicted that AI will grow twice as fast as any other technology and outpace Moore's Law (see McKendrick 2018, Web).
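As a back-of-the-envelope illustration of this doubling effect (a generic sketch, not part of the thesis), the growth factor implied by Moore's Law over a given number of years can be computed as 2 raised to the number of elapsed doubling periods:

```python
def moores_law_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Growth factor implied by a fixed doubling period (Moore's Law)."""
    return 2 ** (years / doubling_period_years)

# Forty years of doubling every two years yields 2**20, which is
# roughly the million-fold gap cited between a 1975 room-sized
# computer and a modern smartphone.
print(moores_law_factor(40))  # 1048576.0
```

The function name and parameters are illustrative; the point is simply that a constant doubling period compounds into a factor of about one million over four decades.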

Concerning the combinatorial driver, technological developments have been pushed by increased cooperation between different subject areas. Instead of the former separation of activities, collaboration and interlocked knowledge transfers are the new norm (see Leonhard 2017, pp. 8-10). Combining forces like AI and healthcare, for example, medical imaging is supported by ML algorithms, especially in radiology. Due to self-learning algorithms, AI identifies disease findings faster and more precisely, thus reducing human misinterpretations caused by fatigue or external distractions (see Mostaghni & Ross 2018, pp. 24-25).

Recursive concerns the reapplication of rules to an existing product. In recursive programming, a routine calls itself within a computer program. Such programs can be seen today in AI and ML - recursive robot systems are able to reprogram themselves, search for updates and manage their own power supply (see Leonhard 2018, p. 9).

Future of Work - Megatrends

The following illustration gives an overview of the megatrends influencing the future of work; they are briefly explained below.
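As an aside, the principle of a routine "calling itself" described for the recursive driver can be made concrete with a minimal recursive function (a generic Python sketch, not taken from the thesis):

```python
def factorial(n: int) -> int:
    """Computes n! by calling itself until the base case is reached."""
    if n <= 1:                    # base case: stop the self-calls
        return 1
    return n * factorial(n - 1)   # recursive call on a smaller problem

print(factorial(5))  # 120
```

Recursion in this simple sense is, of course, far removed from robots reprogramming themselves, but it illustrates the underlying programming principle of self-reference.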

Illustration 3: Future of Work Megatrends

Figure not included in this excerpt

Source: Own illustration, see PwC 2018b, Web, p. 7 and see Prising 2016, Web.

Concerning demographic shifts, the major influences are an ageing population, declining birth rates and increased life expectancy. According to the United Nations Population Fund, total world fertility has fallen from 4.5 children in the 1970s to 2.5 children in 2015. Similarly, global life expectancy has increased from 64.6 years in the early 1990s to 70.8 years in 2017 (see UNFPA 2017, Web). Leading to workforce reduction, this will pose challenges to rethink talent sourcing processes - there are already initiatives to overcome this issue by getting more underrepresented groups like women, young, disabled, migrated and older people back to work (see Prising 2016, Web).

Rapid urbanization will change the distribution of job supply. It is expected that by 2050, the world's urban population will increase by an extra 2.5 billion, leading to a higher supply of potential employees in cities (see UN Department of Economic and Social Affairs 2018, Web).

Shifts in global economic power mean that developing countries are catching up with their increasing working-age populations and improved education systems. Apart from that, middle-skilled jobs are more threatened by automation, resulting in middle-class erosion and increasing social unrest in developed countries (see PwC 2018b, Web, p. 7).

Individual trends change the labour market as well. Formerly, it was common for employees never to change their employer. Due to the polarization of employment opportunities, this will change drastically (see International Labour Organization 2018, p. 3). Especially millennials (born between 1981 and 1996) expect a multitude of job experiences and switch directions during their working life. Their mindset is different - they do not want to be successful in one job; they rather want to be 'employable', i.e. to develop skills that satisfy them and help them climb the career ladder at any company. Increasing job switches create challenges for labour markets - companies ask themselves to what extent they should train their employees when long-term employment is not guaranteed. Governmental institutions and policymakers have to rethink current social benefit plans (see Prising 2016, Web). Compared to earlier generations, younger people stay longer in education, creating the challenge of offering them attractive retention programmes (see European Commission 2018, p. 29). Work-life balance is another topic that is becoming more important for younger generations. They do not see work as the essence of their life. For them, flexible working times, free evenings and a healthy mixture of working and living are essential (see International Monetary Fund 2018, p. 81).

Structural job design changes influence the overall organizational setup and employee mindset. Lifelong learning and more agile corporate structures are desired, leading to less hierarchical company structures and more collaboration between formerly separated departments. Due to the so-called "Gig Economy", freelancers and temporary workers are becoming the norm (see McKinsey Global Institute 2018, Web, p. IX).

One of the most discussed topics for the future of work is technological breakthroughs due to AI. Compared to previous industrial revolutions, an unexpected pace of change is forecast, leading to a rapid transformation in job quantity and quality (see PwC 2018b, Web, p. 7).

This thesis focuses on the megatrend of technological breakthroughs in AI and how it changes the job landscape, as it is pushed drastically by the exponential, recursive and combinatorial drivers.

3 Artificial Intelligence

It is challenging to describe AI with all its different facets in a single chapter - the following sections give definition approaches, historical developments, capabilities, sub-technologies and a deep dive into NNs with DL algorithms. The technical AI knowledge is then transferred to a business perspective by showing practical use cases of organizational AI implementations.

3.1 Definition Approaches of Artificial Intelligence

Human beings call themselves homo sapiens, Latin for "wise man". Over millennia, humans have tried to understand the way they think, i.e. how they understand, perceive and predict, and which brain processes lie behind this. AI goes one step further; the goal here is not only to understand how humans think, but also to replicate it in intelligent systems (see Russell & Norvig 2010, p. 1).

AI is a relevant research area in science and engineering; the first experiments were carried out after World War II, and the name AI was first coined at the Dartmouth Conference in 1956 - seen as the birth hour of AI - by computer scientist John McCarthy (see Russell & Norvig 2010, p. 1 and see Rossi 2016, Web, p. 1 and see Bruun & Duka 2018, p. 1).

AI, also called cognitive technologies or intelligent systems, covers a vast number of topics, from the general to the specific, resulting in a diverse and universal field. Trying to define AI can be as difficult as defining general terms like "digitalization" - depending on the viewpoint, different explanations exist, creating an unclear picture of the true meaning behind the topic. For AI, this is called the "AI effect" - a marked lack of clarity about what the term covers and when a system is actually intelligent. Therefore, some experts even argue that current AI technologies do not possess "real" intelligence. Another definition issue is more philosophical, since the term "intelligence" itself has not yet been clearly defined (see Warner 2007, p. 21). Another reason for denying intelligence in AI is the fast pace of change - what was very innovative and novel five years ago is now considered normal (see World Economic Forum 2018a, Web, p. 10 and see Kaplan & Haenlein 2019, p. 17).

The leading textbook "Artificial Intelligence: A Modern Approach" by Russell and Norvig defines AI along two dimensions, i.e. thinking and acting, differentiating between human and rational approaches - the table below shows an overview of definitions.

Table 1: Categorization of AI Definitions

Figure not included in this excerpt

Source: See Russell & Norvig 2010, p. 2 and Bellman 1978, p. 3 and Charniak & McDermott 1985, p. 27 and Kurzweil 1990, p. 15 and Poole & Mackworth & Goebel 1998, p. 146.

Concerning the four definition approaches of AI, the ones on top, i.e. Thinking Humanly and Thinking Rationally, focus on the research fields of reasoning and thinking processes. The bottom ones, i.e. Acting Humanly and Acting Rationally, concentrate on specific behaviour. Reading the table from left to right, the former puts an emphasis on faithful and successful human performance, whereas the latter deals with an ideal, i.e. rational, concept of intelligent behaviour (see Russell & Norvig 2010, p. 2).

All four areas are still being explored. The human approach is mainly part of the empirical sciences, involving hypotheses and experimental confirmation; the rational one is a combination of mathematics and engineering. The acting-humanly approach currently makes up most of AI, consisting of e.g. Natural Language Processing (NLP), ML, physical robots and rule-based expert systems (see Russell & Norvig 2010, pp. 2-3).

These definitions have something in common - intelligence is about being able to think, learn and solve complex problems. This is the starting point for defining AI.

3.2 History of Artificial Intelligence

Even though first considerations about AI were already made in the 19th century1, the 1950s are seen as the birth time of AI, with two specific occurrences: Alan Turing's well-known Turing Test from his famous essay "Computing Machinery and Intelligence" in 1950, and the "Summer Research Project on Artificial Intelligence", which took place at Dartmouth College in Hanover (New Hampshire) in 1956 (see Buxmann & Schmidt 2019, p. 3 and see Mainzer 2019, p. 10).

The Turing Test's goal was to find an operational definition of intelligence (see Russell & Norvig 2010, p. 2 and see Turing 1950, p. 433). The test was executed with a test person, Alice, who sat in a room with two computer terminals. One terminal was connected to the machine, the other to a person, Bob. Alice typed questions into both terminals. After five minutes, she had to decide which answers came from Bob and which came from the computer. The machine would have passed the test if it had misled Alice in 30 percent of the cases. To date, no machine has ever succeeded in this test - showing that there are still limits for intelligent programs (see Ertel 2008, p. 4).

Illustration 4: The Turing Test

Figure not included in this excerpt

Source: Own illustration, see Warwick & Shah 2016, p. 992.

The term AI was first introduced at the six-week Dartmouth conference in 1956, which was organized by John McCarthy, inventor of the LISP programming language. Other prominent participants included AI researcher Marvin Minsky (1927-2016), information theoretician Claude Shannon (1916-2001), cognitive psychologist Allen Newell (1927-1992) and Nobel laureate in economics Herbert Simon (1916-2001). During this conference, the participants shared the opinion that it is possible to create an intelligence beyond the human brain (see Buxmann & Schmidt 2019, p. 3 and see Ertel 2008, p. 8). An AI upswing began after the Dartmouth Conference. Computers reached a higher storage capacity and became cheaper. The greatest progress was achieved in artificial NNs - demonstrators such as Joseph Weizenbaum's ELIZA program showed potential in human-machine communication via NLP, and ELIZA was basically a forerunner of the chatbots that are ordinary today.

These initial successes led to great enthusiasm, but also to misjudgements and exaggerations. Marvin Minsky declared in 1970: "From three to eight years we will have a machine with the general intelligence of an average human being" (Buxmann & Schmidt 2019, p. 4). As early as 1957, Herbert Simon predicted that within the next ten years a computer would beat the world chess champion (see Newell & Simon 1958, p. 5).

These predictions did not come true. The main reason the expectations were not fulfilled was the lack of computing power that would have been required for successful execution. Therefore, the period from 1965 to about 1975 is often referred to as the AI winter (see Bibel 2014, p. 99).

In the 1980s, the development of so-called expert systems was pushed forward. Edward Feigenbaum, a former computer science professor at Stanford University, is seen as the father of these systems. The principle of expert systems is essentially based on defining rules and developing a knowledge base for a thematically clearly delimited problem. The MYCIN system, which was used to support diagnostic and therapeutic decisions for blood infections and meningitis, became particularly well known (see Shortliffe et al. 1975, p. 317). Ultimately, however, these systems were unable to establish themselves despite the advance praise they received, as their rules were too rigid and the systems were only able to learn to a limited extent (see Buxmann & Schmidt 2019, p. 6).

In the 1990s, a new AI approach to intelligent systems by Marvin Minsky took simulation-based analyses as its basis. During this time, developments in robotics and the first successful approaches to artificial NNs took place. Due to computer capacities increasing in parallel, AI experienced a real upswing, culminating in 1997, when IBM's program "Deep Blue" won against chess world champion Garry Kasparov (see Hecker et al. 2017, Web, p. 5).

In the early 2000s, digitization triggered a new wave of AI - the mobile Internet, social media and improved computing capabilities offered the possibility to evaluate and interpret large amounts of historical and current data, using patterns to generate predictions for recommendations, warnings or decisions (see Hecker et al. 2017, Web, p. 6).

The following events are milestones of AI successes in recent years.

- 2011: IBM's program "Watson" wins the Jeopardy quiz against two human players (see Hecker et al. 2017, Web, p. 5).
- 2013: Google's subsidiary "DeepMind Technologies" wins Atari games, outscoring humans in 23 out of 49 games in total (see Grüner 2016, Web and see NBC News 2015, Web).
- 2015: Microsoft undercuts the human error rate of 5.1 percent with 4.94 percent in image recognition. A couple of days later, Google reported an even lower error rate of 4.9 percent (see Scharre 2019, p. 130).
- 2016: Google's "AlphaGo" wins against South Korean Go champion Lee Se-dol (see Hecker et al. 2017, Web, p. 5).
- 2017: After 20 days of poker, Carnegie Mellon University defeats four of the world's best poker experts with its AI "Libratus" in the poker variant no-limit Texas Hold'em (see IEEE Spectrum 2017, Web).
- 2018: Google's "AlphaZero" takes first steps toward AGI by self-learning, without human intervention and without any data input (see Cassel 2018, Web).

Except for the last milestone, these examples have to be considered Artificial Narrow Intelligence (ANI), whose solutions are limited to certain tasks and do not imitate human intelligence. The goal is to reach Artificial General Intelligence (AGI) within the next decades, which will have more cognitive features and can be adapted to several areas of activity (see Hecker et al. 2017, Web, p. 5).

In 2018, for the first time, a program - AlphaZero - came close to the state of an Artificial General Intelligence. To better understand the differences between ANI, AGI and Artificial Super Intelligence, the following section explains them in more detail.

3.3 Artificial Narrow, General and Super Intelligence

AI research has been conducted since the 1950s - nevertheless, its greatest potential has not been reached yet. The evolution of AI can be separated into the stages of Artificial Narrow/Weak Intelligence (ANI), Artificial General/Strong Intelligence (AGI) and Artificial Super Intelligence (ASI).

The evolutionary stages of the respective intelligence levels are illustrated below. Authors use different terminologies to describe them, which are stated in the illustration as well; apart from that, it gives implications, respective examples and the current state of research.

Illustration 6: Stages of Artificial Intelligence

Figure not included in this excerpt

Source: Own illustration, see Kaplan & Haenlein 2019, p. 16 and see Wisskirchen et al. 2017, p. 8 and see PwC 2017, Web, p. 2 and see PwC 2018a, Web, p. 8 and see Ryabtseva n.d., Web, p. 2 and see Strelkova 2017, Web, p. 1.

Artificial Narrow Intelligence (ANI)

The first generation of AI is ANI, also known as assisted intelligence or weak intelligence. It is already implemented in daily applications and solves specific tasks. This state of intelligence cannot yet be compared to human intelligence, since it is limited to a specific programmed task and thus only simulates intelligence. Examples of ANI are face recognition in pictures on Facebook, speech recognition systems like Apple's Siri, and navigation systems like Google Maps (see Kaplan & Haenlein 2019, p. 19 and see Corea 2018, Web and see PwC 2018a, Web, p. 8).

ANI is about developing algorithms for specific, delimited problems, taking advantage of large computer processing power that can evaluate data faster than human beings can (see Buxmann & Schmidt 2019, pp. 6-7 and see Goertzel 2010, p. 19 and see Pennachin & Goertzel 2007, p. 1).

Artificial General Intelligence (AGI)

The second generation is AGI, also called augmented, strong or human-level AI. It is a more sophisticated technology that can be compared to human intelligence; it is able to fulfil intellectual tasks and is not limited to a specific area of responsibility - such systems would be able to perform tasks in areas different from the one they were initially designed for. AGI thinks abstractly, solves problems, captures complex ideas and learns from previous experiences. It is able to reason in the same way as humans and can reprogram itself. This has not yet been implemented successfully in organizations; once it is, work for humankind will change significantly, since AGI will have the ability to reflect on its own targets and choose whether to modify them or not (see Kaplan & Haenlein 2019, p. 16 and see Perez et al. 2016, Web, p. 6).

Such a system does not yet exist - one close approach to building one has been made by DeepMind, a ML company that was acquired by Google for half a billion dollars in 2014. Based on the game "Go", known as the world's most difficult and complex game, they programmed a system called "AlphaGo" that was able to beat the world's best player, Lee Se-dol. Their new software, "AlphaZero", is already able to learn by itself, without human intervention and without data (see Cassel 2018, Web).

ML is boosting the research of this general intelligence with its NNs approach and DL algorithms. This area will be further explained in chapter 3.5.

In November 2018, futurist and author Martin Ford surveyed 23 AI experts about when the first AGI applications would reach the market - the responses varied widely. Google's director of engineering, Ray Kurzweil, expects that by 2029 there will be a 50:50 chance of AGI being built. Rodney Brooks, co-founder of iRobot, expects it in the year 2200. The average estimate of all experts asked was 2099 - 80 years from now (see Vincent 2018, Web and see Ford 2018, p. 528).

Artificial Super Intelligence (ASI)

ASI, also called autonomous or above human-level AI, is the final stage - this technology would outperform people in all disciplines and have its own self-awareness and consciousness. According to the Oxford philosopher Nick Bostrom, super intelligence is "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills" (Müller & Bostrom 2014, p. 1 and see Bostrom 2014, p. 33). This super-humanity would be able to solve problems that can no longer be understood by humans. The impacts of such a technology are not yet conceivable, and it remains questionable when and whether this stage will ever be reached (see Corea 2018, Web). A corresponding scenario, predicted since the 1950s and in which AI plays a particular role, is known as the technological singularity: machines rapidly improve themselves by means of AI and thus accelerate technical progress to an extent that can no longer be foreseen (see Kurzweil 2005, pp. 24-25).

Up to the present state of research, only ANI can be implemented into an organization.

3.4 AI Capabilities and its Sub-Technologies

3.4.1 Capabilities of Artificial Intelligence

Even if the current state of research deals solely with ANI, business leaders can benefit from knowing its capabilities. It is already known that AI supports humans by automating tasks, leading to productivity gains and cost savings. To understand what AI is actually capable of, the illustration below gives an overview.

Source: Own illustration, see Russell & Norvig 2010, pp. xiii-xvi and see Van Duin & Bakhshi 2017, Web and see Deloitte 2017a, Web, pp. 4-6 and 49 and see Schatsky & Muraskin & Gurumurthy 2015, Web, pp. 118-120 and see Corea 2018, Web and see Perez et al. 2016, Web, pp. 2 and 16 and see World Economic Forum 2018b, Web, p. 11.

Learning: AI is able to learn from data based on historical patterns, expert input and feedback loops. It understands data inputs like texts, voices and pictures and can even draw conclusions based on these inputs. The machine is able to gain an understanding of words thanks to big data and contextual information (see Deloitte 2017a, Web, p. 5).

Communicating: Cognitive technologies can communicate with humans through digital media. It is even possible to let them take over front-office tasks and have direct customer contact. Well-known examples are chatbots installed on websites to give customer support for a service or product (see Schatsky & Muraskin & Gurumurthy 2015, Web, p. 118).

Reasoning: Intelligent machines are able to reason and draw inferences from situations. In terms of decision-making, they generate rules from data and apply specific profiles against those rules, creating a context-aware system. By identifying interdependencies and associations between data, the machine can offer deep insights and support the human decision-making process, combining large numbers of variables and data points with great accuracy (see Perez et al. 2016, Web, p. 2).

Assessing and Solving: AI can also recognize irregularities in data and generate rules for customization by considering specific profiles and applying general data to optimize outcomes. This makes it possible to analyse and solve complex issues.

Predicting: Determining the future probabilities of events is a great strength of intelligent systems - by working with historical data and taking several variables into account, their forecasts are highly valuable. A typical example of prediction is Netflix recommendations, which are customized to the viewer's past preferences (see Deloitte 2017a, Web, p. 6).

Sensing: Thanks to improved processor efficiency and working memory, AI research could advance further; with this hardware, AI is now capable of processing massive amounts of structured and unstructured data that are constantly changing (see World Economic Forum 2018b, Web, p. 11).

3.4.2 AI Sub-Technologies

AI covers a very broad and complex range of technologies, making it difficult to find a single topic-overview illustration (a first attempt at placing all sub-fields in one illustration can be found in Appendix A). Today's key AI technologies are described in this section (see Russell & Norvig 2010, pp. 2-3 and see Davenport 2018, p. 11).

Illustration 8: Overview of AI Sub-Technologies

[Illustration not included in this excerpt]

Source: Own illustration, see Russell & Norvig 2010, pp. 2-3 and see Davenport 2018, p. 11.

Machine Learning is a core technology for many approaches in AI. It automatically fits models to data and is able to learn through training. Complex forms of ML are Neural Networks (NNs) and Deep Learning (DL), which are explained in more detail in section 3.5. NNs use weighted artificial "neurons" to relate inputs to final outputs. DL is a NN with many layers, modelled on brain structures.

Physical Robots are already widely present in factories and warehouses, taking over physical and repetitive activities. Combined with AI, robots achieve a more collaborative approach with humans and become more intelligent through AI tasks implemented in their operating systems (see Davenport 2018, p. 11).

Robotic Process Automation (RPA) is a technology branch whose affiliation with the term AI is controversial, as it mainly performs structured digital tasks the way a human would. It generally mimics human interactions in information systems and is a starting point when thinking about introducing AI into a business organization. In this thesis, RPA is regarded as part of AI technologies (see Scheer & Feld 2017, p. 3).

Computer Vision enables a technology to actually "see" objects - good examples are Facebook's image recognition technology and Apple's Face ID, which recognizes the owner's face and unlocks the smartphone upon recognition.

Rule-based Expert Systems are the simplest form of AI, based on logical if-then rules. They were the dominant AI technology in the 1980s and are still widely used in organizations. Experts and knowledge engineers formulate a series of rules to automate tasks (see Grosan & Abraham 2011, p. 149).
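The if-then principle of such systems can be sketched in a few lines; the loan-screening scenario, the thresholds and the fallback to a human expert are all illustrative assumptions, not taken from any real expert system.

```python
# Minimal sketch of a rule-based expert system: hypothetical loan-screening
# if-then rules, evaluated in order until one condition fires.
RULES = [
    (lambda a: a["income"] < 20000, "reject: income too low"),
    (lambda a: a["debt_ratio"] > 0.5, "reject: debt ratio too high"),
    (lambda a: a["years_employed"] >= 2, "approve"),
]

def evaluate(applicant):
    """Apply the first matching if-then rule; fall back to a human expert."""
    for condition, verdict in RULES:
        if condition(applicant):
            return verdict
    return "refer to human expert"

print(evaluate({"income": 45000, "debt_ratio": 0.3, "years_employed": 5}))
# -> approve
```

Because every rule is explicit, the system's decisions are fully traceable, which is exactly what the deep learning models discussed later give up.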

Natural Language Processing involves the understanding of human language, including applications like speech recognition, text analysis or real-time translation (see Russell & Norvig 2010, p. 2).

Behind all these sub-technologies, ML plays a fundamental role. Especially with its NNs and DL, it has gained more and more recognition in recent years. According to Erik Brynjolfsson and Andrew McAfee, researchers at MIT, it is the most important basic technology of our time (see Brynjolfsson & McAfee 2018, Web). Davenport supports this view, calling it "(...) the largest activity in AI, and the most sophisticated (...)" (Tata Consultancy Services 2017, p. 83).

3.5 Excursus: Machine Learning, Neural Networks and Deep Learning

3.5.1 AI, ML, NN and DL in Context

ML techniques are the most widely applicable learning functions in the area of AI. In recent years, AI research and development has put a strong focus on them. The sub-field can be clustered into a variety of approaches - the most prominent and promising is the use of NNs with DL structures. The illustration below shows the relations between these terms.

Illustration 9: AI, ML, NNs, DL

[Illustration not included in this excerpt]

Source: Own illustration, see Wittpahl 2019, p. 10 and see Capgemini 2017, Web and see Lauterbach & Bonime-Blanc 2018, p. 22.

In the context of these terms, AI is a general term, whereas ML is a specific sub-field of research. NNs use a specific modelling approach for ML, whereas DL is a class of algorithms for specific kinds of NNs. The next sections deal with ML and offer a deep dive into NNs and DL algorithms.

3.5.2 Machine Learning - Origins and Definition

How is it feasible to let a computer program learn from its experiences and use them to fulfil a task even better in the future? This is the fundamental question of ML (see Wittpahl 2019, p. 24 and see Mitchell 2010, p. 3).

ML is not a new research field - the first insights were gathered around the 1940s. At that time, however, the preconditions for further elaboration were not in place. The following developments and achievements have generated new momentum that has made ML one of the most promising areas (see Buxmann & Schmidt 2019, pp. 7-8):

- (Big) Data: Data is important for training algorithms. Nowadays, previously unimaginable amounts of data are available and increasing exponentially.
- Increased computing power and the resulting memory capacity, combined with decreasing acquisition costs, have made the field of ML more attractive to work on.
- Due to the rise of the Internet in combination with open innovation, several freely accessible toolkits for ML are available online as open source tools.

Defining ML

ML is the first subtype of AI that uses self-learning computer algorithms that improve automatically through experience (see Ertel 2008, p. 183 and see Kumar 2018, Web). The fundamental concept was formulated in 1997 by Tom M. Mitchell, former Chair of the ML Department at CMU: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E" (Mitchell 2010, p. 2).

Put simply, this concept is about the ability of a machine or piece of software to learn certain tasks, trained on the basis of experience (data). It is an approach to achieving AI that can learn from experience, formulate predictions and find correlations in existing data sets (see Buxmann & Schmidt 2019, p. 8 and see Murphy 2012, p. xxviii).
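Mitchell's T/E/P scheme can be made concrete with a toy perceptron, a minimal sketch in which the task T (classifying points), the experience E (repeated passes over labelled data) and the performance measure P (accuracy) are all illustrative assumptions:

```python
import random

random.seed(1)

# Task T: classify 2-D points as above/below the line x + y = 1.
# Experience E: labelled training points, seen over repeated passes.
# Performance P: fraction of points classified correctly.
points = [(random.random(), random.random()) for _ in range(200)]
data = [(x, y, 1 if x + y > 1 else 0) for x, y in points]

w = [0.0, 0.0, 0.0]  # weights for x, y and a bias term

def predict(x, y):
    return 1 if w[0] * x + w[1] * y + w[2] > 0 else 0

def measure_P():
    return sum(predict(x, y) == label for x, y, label in data) / len(data)

before = measure_P()
for _ in range(20):               # accumulate experience E
    for x, y, label in data:      # classic perceptron update rule
        err = label - predict(x, y)
        w[0] += 0.1 * err * x
        w[1] += 0.1 * err * y
        w[2] += 0.1 * err
after = measure_P()
print(before, after)  # performance P improves with experience E
```

Running this shows exactly Mitchell's criterion: performance at T, measured by P, improves with E.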

The basic concept of ML is grounded in Data Mining, i.e. extracting and gaining knowledge from data and making it available to people in a visual, more understandable way. Data Mining is widely used in Marketing and Customer Relationship Management, as these functions hold huge masses of data and want to improve the customer experience by analysing customers' desires, habits and their journey to a product or service (see Ertel 2008, pp. 183-184).

The difference between traditional programs and ML is illustrated below. In contrast to static traditional programs, rules are adapted to the learnt content through experience by means of feedback loops (see Wittpahl 2019, p. 24).

Therefore, software developers no longer have to codify all their knowledge, which amounts to a massive paradigm change. As a result, the Polanyi Paradox (1966) can be overcome, i.e. "We know more than we can tell" (Polanyi 1966, p. 4). It is often not possible for humans to formulate all their facts and ideas, and the same holds for developers when they want to codify an algorithm (see Buxmann & Schmidt 2019, p. 9). ML helps overcome this paradox by developing its own insights without human aid.

Illustration 10: Traditional Programs compared to ML

[Illustration not included in this excerpt]

Source: Own illustration, see Wittpahl 2019, p. 25.

This subtype of AI enables the computer to learn by itself, based on algorithms that teach themselves by being fed large amounts of data. Thanks to this capability to process vast quantities of data, it gives humans interesting and important insights (see Mehendale & Sherin 2018, p. 18).
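The contrast between a codified rule and a learned one can be sketched as follows; the fever-screening scenario, the 38.0 degree cutoff and the midpoint learning rule are all illustrative assumptions:

```python
# Traditional program: the rule is fixed, hand-coded by the developer.
def traditional_is_fever(temp):
    return temp >= 38.0

# ML version: the rule (a threshold) is derived from labelled examples,
# here simply as the midpoint between the two class means.
def learn_threshold(examples):
    fever = [t for t, label in examples if label]
    normal = [t for t, label in examples if not label]
    return (sum(fever) / len(fever) + sum(normal) / len(normal)) / 2

examples = [(36.5, False), (36.9, False), (37.1, False),
            (38.2, True), (38.9, True), (39.4, True)]
threshold = learn_threshold(examples)

def learned_is_fever(temp):
    return temp >= threshold  # rule learned from experience, not codified

print(round(threshold, 2))  # -> 37.83
```

With different training data the learned rule would shift automatically, whereas the traditional program would have to be rewritten by hand - the feedback-loop idea from the text in miniature.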

3.5.3 Machine Learning Types

ML can be divided into three learning types, i.e. supervised learning, unsupervised learning and reinforcement learning (see Wittpahl 2019, p. 9 and see Marsland 2014, p. 6 and see Russell & Norvig 2010, pp. 694-695).

Illustration 11 gives a rough overview and is further explained in the following paragraphs.

Illustration 11: Overview of ML Types

Source: Own illustration, see Wang & Chaovalitwongse & Babuska 2012, p. 729.

Supervised learning ("learning with a teacher")

In the case of supervised learning, algorithms are trained with labelled data, i.e. the training data is specified and the desired interpretation is given to the system upfront. The system should learn, by means of the training data, how to map input variables to output variables (see Ertel 2008, p. 281 and see Wittpahl 2019, p. 9). Afterwards, the algorithm is tested with a test data set.

The overall goal is to find general rules that connect known input data with the desired output, so that these rules can subsequently be applied to new data. The computer has then learnt an algorithm that can be used for future predictions on unknown inputs (see Wittpahl 2019, pp. 25-26). Examples of supervised learning are shown in the table below.

Table 2: Application Examples of Supervised Learning

[Table not included in this excerpt]

Source: Own Table, see Buxmann & Schmidt 2019, p. 13 and see Brynjolfsson & McAfee 2018, Web.

Taking a more concrete example of image labelling, an algorithm is trained with several thousand cat pictures. Labelled means in this context that the algorithm is told that each picture shows a cat. After training, the quality of the trained model is validated using a test data set with pictures of cats and other animals, to see whether the system can identify the cats. The actual learning process is therefore based on the training data set, while the trained model is evaluated with the test data (see Buxmann & Schmidt 2019, p. 9 and see Marsland 2014, p. 6 and see Russell & Norvig 2010, p. 694).
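The train/validate split above can be sketched with a toy nearest-centroid classifier; the two-dimensional "image features", the labels and the tiny data sets are made-up stand-ins for the cat-picture example:

```python
import math

# Labelled training data fits the model; a separate test set validates it.
train = [((1.0, 1.2), "cat"), ((0.8, 1.0), "cat"), ((1.1, 0.9), "cat"),
         ((3.0, 3.2), "dog"), ((2.8, 3.0), "dog"), ((3.1, 2.9), "dog")]
test = [((0.9, 1.1), "cat"), ((3.0, 3.1), "dog")]

def fit(examples):
    """Training step: compute one centroid (mean point) per label."""
    sums = {}
    for (x, y), label in examples:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {lbl: (sx / n, sy / n) for lbl, (sx, sy, n) in sums.items()}

def predict(model, point):
    """Assign the label whose centroid is closest to the point."""
    return min(model, key=lambda lbl: math.dist(model[lbl], point))

model = fit(train)
accuracy = sum(predict(model, p) == lbl for p, lbl in test) / len(test)
print(accuracy)  # measured on held-out test data, not the training set
```

The key point mirrors the text: learning happens only on the training set, while the held-out test set measures whether the learned mapping generalizes.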

In supervised learning, the possible procedures are regression or classification methods. The former is based on linear interrelations between two variables and can therefore predict unknown continuous values. The latter works with several values that are distinguished from each other as classes; in the subsequent prediction, individual values are assigned to a specific class (see Wittpahl 2019, p. 26).

Unsupervised learning ("learning without a teacher")

In contrast to supervised learning, in unsupervised learning an algorithm is fed with unlabelled data sets and is not told what can be interpreted from them. The algorithm has to find its own independent categorizations. Without known classifications and labels for the input data, the output is not known in advance; the computer therefore cannot be trained in the same way, but has to recognize patterns by itself and interpret them.

The most used procedure is clustering, which is similar to the classification explained above. The major difference is that in unsupervised learning the clustered classes are built by the algorithm itself (see Wittpahl 2019, pp. 26-29).
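A minimal clustering sketch, here k-means with k = 2 on made-up one-dimensional brightness values; the data, the choice of k and the deterministic initialization are all illustrative assumptions:

```python
# Unlabelled brightness values (standing in for "dark vs light cat" photos).
# No labels are given; the algorithm forms the two groups itself.
values = [0.05, 0.10, 0.12, 0.85, 0.90, 0.95]

def kmeans_1d(data, iterations=10):
    c0, c1 = min(data), max(data)  # deterministic initial centroids
    for _ in range(iterations):
        # assign each value to its nearest centroid
        a = [v for v in data if abs(v - c0) <= abs(v - c1)]
        b = [v for v in data if abs(v - c0) > abs(v - c1)]
        # move each centroid to the mean of its cluster
        c0, c1 = sum(a) / len(a), sum(b) / len(b)
    return sorted(a), sorted(b)

dark, light = kmeans_1d(values)
print(dark, light)
```

As in the cat-photo discussion, the algorithm was never told what the groups mean; it simply discovered that the values split into a "dark" and a "light" cluster.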

Taking again the example of thousands of cat pictures, the machine would not be told what can be seen on them. The algorithm has to find clusters by itself. The cat photos are not necessarily categorized by animal species; depending on the data, clusters by colour (black, brown or white cats) could be a potential alternative (see Buxmann & Schmidt 2019, p. 9 and see Saul & Roweis 2003, p. 120).

Reinforcement learning ("learning by strengthening")

This learning type is expected to become the most important technology for automation and robotics (see Kober & Bagnell & Peters 2013, p. 1).

Reinforcement learning does not use training data. Instead, an optimal strategy is to be learned for a given problem; the approach is based on maximizing an incentive or reward function. The algorithm receives feedback at specific points in time, in the form of either a reward or a punishment (see Buxmann & Schmidt 2019, pp. 8-9).

The computer thus learns directly from experience by interacting with its environment, receiving rewards when results go in the right direction. The overall goal is to let the computer act like a trained animal, memorizing the consequences of its actions, i.e. rewards or punishments, in order to maximize its reward (see Wittpahl 2019, p. 29).

Microsoft uses reinforcement learning of this kind to select more appealing headlines for articles on the msn.com website. The number of clicks on an article serves as the reward signal; the result is known as a "click-baiting" system (see Buxmann & Schmidt 2019, p. 10 and see Brynjolfsson & McAfee 2018, Web).
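The headline example can be sketched as an epsilon-greedy bandit; the two "headlines", their hidden click probabilities and the exploration rate are illustrative assumptions, not Microsoft's actual system:

```python
import random

random.seed(42)

CLICK_PROB = [0.05, 0.20]  # hidden click rates for headlines A and B
estimates = [0.0, 0.0]     # the agent's learned reward estimates
counts = [0, 0]

def choose(epsilon=0.1):
    if random.random() < epsilon:      # explore occasionally
        return random.randrange(2)
    return max(range(2), key=lambda i: estimates[i])  # exploit best estimate

for _ in range(5000):
    arm = choose()
    reward = 1 if random.random() < CLICK_PROB[arm] else 0  # click = reward
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(counts)  # the better headline ends up shown far more often
```

No training data is used: the agent learns purely from reward feedback, gradually shifting towards the headline that earns more clicks, exactly the reward-maximizing behaviour described in the text.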

The table below gives an overview of the learning types with specific learning tasks and the related models that can be used for working with ML techniques.

Table 3: Learning Methods with Respective Models

[Table not included in this excerpt]

Source: Own table, see Döbel et al. 2018, p. 10.

The model of artificial NNs is listed under the learning type "Various" in this table, as it can be used for supervised, unsupervised or reinforcement ML. AI has received an immense push from research and development advances in ML and from the improved performance of the DL algorithms used in NNs (see Buxmann & Schmidt 2019, p. 8 and see Hecker et al. 2017, Web, p. 6). The next section covers NNs and their DL procedures.

3.5.4 Neural Networks and Deep Learning

Methods other than NNs attempt to recreate cognitive processes with the aid of logic or probabilistic reasoning, using mathematics or programming languages. NNs take a different route: rather than merely imitating brain activity, the approach is about modelling complex networks, simulating and building them in hardware, and even comparing them to human brain activity (see Ertel 2008, p. 241). NNs with DL simulate a human brain, with the particularity of autonomous learning. Even if the actual resemblance is small, NNs can be seen as a bionics branch of AI (see Buxmann & Schmidt 2019, p. 11 and see Scherk & Pöchhacker-Tröscher & Wagner 2017, Web, p. 16). Bionics is concerned with decoding the inventions of living nature and implementing them innovatively in technology (see Ertel 2008, p. 241).

Humans have 10 to 100 billion nerve cells, which make it possible to adapt to various environmental conditions and learn new skills. NNs are based on nodal points, i.e. neurons, that are connected via differently weighted links. Layers are modelled and arranged one upon the other, similar to the human brain (see Ertel 2008, p. 241 and see Scherk & Pöchhacker-Tröscher & Wagner 2017, Web, p. 17).

The first big step for NNs took place in 1943 with McCulloch and Pitts' article "A logical calculus of the ideas immanent in nervous activity", which proposed a mathematical model with neurons as the basic switching elements of the brain (see Ertel 2008, p. 241 and see McCulloch & Pitts 1943, p. 115).

Today, NNs are used in several business functions. The unique thing about NNs with DL is that information is saved and processed on different layers. A total breakdown of the intelligent system is therefore avoided even when several neurons stop working: NNs are robust constructions because the network distributes data and insights across several layers (see Ertel 2008, p. 277).

A general NN consists of a single hidden layer; DL establishes multiple hidden layers of neurons, which is more closely related to human brain structures (see Nielsen 2018, Web and Scherk & Pöchhacker-Tröscher & Wagner 2017, Web, p. 17). Thanks to multi-layered networks, intelligent systems are able to find interdependencies and solutions that earlier algorithms could not (see Buxmann & Schmidt 2019, p. 12 and see Krizhevsky & Sutskever & Hinton 2012, p. 1).

Illustration 12: Single NN vs. Deep Learning NN

[Illustration not included in this excerpt]

Source: Own illustration, see Buxmann & Schmidt 2019, p. 14 and see Sterne 2017, p. 86.

The illustration above shows the difference between a general NN and a DL NN. The flow illustrated is called a feedforward network; more complex structures exist as well, e.g. recurrent networks that move back and forth between layers if needed (see Wittpahl 2019, p. 31).

In a NN, three data types can be identified (see Goodfellow & Bengio & Courville 2016, p. 6 and see Rey & Wender 2018, p. 13):

- Input units are the data sets used as the starting position.
- Output units are the resulting insights issued after passing through the inner layers.
- Hidden units belong to the inner layers of a NN, between input and output. These can be arranged in several layers one behind the other. This is the area where the "learning" takes place: the network learns through changes in the weighting between the nodes.

Input units are weighted beforehand, and the interpreted information is forwarded from layer to layer as weighted neurons. The input of a neuron on the next layer therefore depends on the output of the former layer (see Buxmann & Schmidt 2019, pp. 13-14). With every processing step, the values from the former neuronal level are forwarded to the next stage, so a higher level receives more values and insights from former neurons (see Wittpahl 2019, p. 31).

The input a neuron receives from preceding neurons depends on the output of the sending neurons and the corresponding weighting. If o_i denotes the activity level of a sending neuron i and w_ij the weight of the connection from neuron i to neuron j, the input net_j received by neuron j can be expressed with the following formula (see Buxmann & Schmidt 2019, pp. 13-14):

net_j = Σ_i o_i · w_ij
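The weighted-sum flow described above can be sketched as a tiny forward pass; the 2-3-1 layer layout, the weight values and the sigmoid activation are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(outputs, weights):
    """Each receiving neuron j gets net_j = sum_i o_i * w[i][j], then an
    activation; weights[i][j] connects sending neuron i to neuron j."""
    n_out = len(weights[0])
    return [sigmoid(sum(o * w[j] for o, w in zip(outputs, weights)))
            for j in range(n_out)]

inputs = [0.5, -1.0]                  # input units
w_hidden = [[0.4, -0.6, 0.2],         # 2 input neurons -> 3 hidden neurons
            [0.1, 0.8, -0.3]]
w_out = [[0.7], [-0.5], [0.9]]        # 3 hidden neurons -> 1 output neuron

hidden = layer(inputs, w_hidden)      # hidden units
output = layer(hidden, w_out)         # output unit
print(output)
```

Each layer's output becomes the next layer's input, so - as the text notes - what a neuron receives depends entirely on the outputs and weights of the layer before it.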

Neurons lying between the input and output units are called hidden neurons (see Wittpahl 2019, p. 31). These inner layers give rise to an issue known as the black box problem: hidden layers do not reveal how or why they arrived at a specific result. As a consequence, humans cannot localize and comprehend the output of DL networks; this is easier in models like decision trees or rule-based expert systems (see Ertel 2008, p. 277). Since the derivation of a given algorithmic result cannot be traced in DL systems, applying NNs in sensitive business functions is a great challenge (see Wittpahl 2019, p. 17). One example is the use of DL structures in recruiting processes. When AI is used in the first interview phase, with applicants sending videos answering standardized questions, NNs are already able to sort out applicants based on specific criteria. Because of the black box issue, humans cannot trace which specific path the system followed and whether it accidentally learned a discriminating approach, e.g. favouring men over women (see Buxmann & Schmidt 2019, p. 17).

3.6 AI Capabilities - Business Application Domains

Looking at the sub-technologies of AI, business leaders who are not experts in this field can easily become confused and overwhelmed by the different implementation approaches. Davenport therefore introduced a clustering of AI into three specific business application domains, reducing the need to view the topic through a technology lens. The three domains separate AI into process automation, direct human-machine communication and the amplification of humans, respectively called cognitive automation, cognitive engagement and cognitive insights (see Davenport 2018, p. 41 and see Deloitte 2017b, Web, p. 3 and see Davenport & Ronanki 2018, Web).

Cognitive Automation focuses on developing deep domain-specific expertise in a field of work and automating related tasks that mainly consist of structured and repetitive processes. AI technologies like NLP, ML and RPA form the basis for automation. A typical example is processing high volumes of data in rule-based work with NLP algorithms.

Cognitive Engagement uses cognitive agents that engage with people, be they internal or external stakeholders. By creating value from data, a company can, for example, offer personalized products or services to customers and increase personally fitted engagement with NLP and ML. Typical examples include chatbots answering internal HR-related questions from employees, or voice recognition systems like Apple's Siri that respond to voice commands. This is particularly helpful for call centres, where dissatisfaction on both the customer and the agent side coincides with a communication trend towards social messaging platforms. A new kind of AI communication paradigm has arisen in the form of chatbots: an individual, bidirectional, real-time experience in which customers communicate with companies as with a friend, known as the "brand as a friend" concept (see Marketing Resultant GmbH 2018, Web, pp. 4-5 and see Gentsch 2018, pp. 86-87). With NLP applications, AI can support cross-language communication, reducing frustrating language barriers by offering direct translation from customer to agent and vice versa. By means of ML, AI learns from the connected human agents which answers fit which questions, by tracking the behaviour of the human agent (see Kirkpatrick 2017, pp. 18-19).

Chatbots can either work completely autonomously or in cooperation with a human being. Three models are available to choose from (see Gentsch 2019, p. 145):

- Delegation: The machine takes over concrete processes for humans. Humans begin dialogues with customers and hand them over to the bot.
- Escalation: The human takes over a process from the bot. Humans get involved as soon as the bot's responses are not satisfactory.
- Autonomous dialogue guidance: Users are guided through the entire dialogue by AI bots.
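The three operating models listed above can be sketched as a simple routing policy; the confidence score, the threshold of 0.8 and the routing logic are illustrative assumptions, not a real chatbot framework:

```python
# Route each message to "bot" or "human" according to the operating model.
def route(message, model, bot_confidence):
    if model == "autonomous":
        return "bot"                   # AI guides the whole dialogue
    if model == "delegation":
        # human opens the dialogue, then hands routine steps to the bot
        return "human" if message["is_opening"] else "bot"
    if model == "escalation":
        # bot answers first; a human steps in when it is not confident
        return "bot" if bot_confidence >= 0.8 else "human"
    raise ValueError(f"unknown model: {model}")

print(route({"is_opening": False}, "escalation", 0.4))  # -> human
```

The choice of model is essentially a question of who holds the dialogue by default and under which condition it is handed over.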

Cognitive Insights is about creating deep insights by using structured and unstructured data and uncovering hidden relationships and concepts within large data masses. With the help of ML, algorithms can deliver both present and future insights and make predictions that take several external factors into account. One example is customer propensity modelling, which can predict what customers are likely to buy at the end of the purchasing process. Cognitive Insights are particularly interesting and helpful for an organization's marketing activities. AI already offers a broad range of ways to improve overall marketing processes; nevertheless, it is still in an early adoption stage (see Canella 2018, Web, p. 2). Current risks and limitations in implementing AI in marketing are the significant upfront investment costs, the reliance on large, high-quality data sets, and the missing time and knowledge for implementing AI systems in a company (see Thiraviyam 2018, Web, pp. 5-6). Another issue with the use of AI in marketing is customer perception - customers feel watched when companies know, for example, their shopping habits (see eMarketer 2017, Web, p. 13).

3.7 Use Cases of Artificial Intelligence

To provide a better understanding of implementing AI in an organization, this section deals with practical examples, split up into the three business application domains described previously. Each use case is described by explaining the initial challenge, the solution and the results of the AI implementation.

Cognitive Automation

Cognitive automation is strongly linked to RPA - according to a Deloitte survey in 2017, most of the automated tasks are back-office administrative and financial activities (see Davenport 2018, p. 41 and Deloitte 2017b, p. 6). Another business unit that can greatly benefit from RPA is customer service, especially call centres. When a company does not want to pursue the cognitive engagement route because customers still wish to talk to a real person, cognitive automation can improve the communication process. Dissatisfaction among both call centre agents and customers mainly arises from tedious and sluggish processes, missing documents and nontransparent information (see KRYON N.Y. Given, Web, p. 2). KRYON, an RPA implementation company, has already carried out several projects helping companies in the telecommunications, insurance and banking sectors to improve processes by means of RPA (see KRYON N.Y. Given, Web, p. 4).

- Challenge: Prior to using RPA, agents at a KRYON client, a major insurance company, needed to gather Know Your Customer (KYC) information from multiple systems. This led to delayed customer transactions and time-consuming calls, causing dissatisfaction. The insurance company needed to improve customer service at its call centre.
- Solution: With the help of RPA, human agents can now request relevant information via KRYON robots from their desktop. The RPA system gathers all necessary information from the systems involved and sends it to the customer service representative in a well-structured message.
- Results: Through the RPA implementation, the client was able to cut average call times by 70 percent and reduce average handling times from ten minutes to three minutes per call. Customer waiting times were reduced from two minutes to 40 seconds, a reduction of 67 percent. Human errors were eliminated entirely, and operating expenses declined by 20 percent.


1 In 1843, Lady Ada Lovelace, collaborator and companion of the British mathematician and engineer C. Babbage, stated: "Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent." (see Collier & MacLachlan 1998, p. 70 and Fuegi & Francis 2003, p. 16)

University of Applied Sciences Saarbrücken

Quote paper: Aline Hamm (Author), 2019, Future of Work. How Artificial Intelligence will impact the Future Workplace, Munich, GRIN Verlag, https://www.grin.com/document/506762

