Hardwiring Robot Empathy through Generation of Artificial Pain

Conceptualizing Empathy into Adaptive Self-Awareness Framework for Robot


Doctoral Thesis / Dissertation, 2017

221 Pages


Excerpt


Table of Contents

List of Figures

List of Tables

1 Introduction
1.1 Overview of the Study Background
1.2 Current Issues
1.3 Description of Proposed Approach
1.4 Brief Description of Experiments
1.5 Contributions and Significance
1.6 Future Development
1.7 Structure of the Book

2 Robot Planning and Robot Cognition
2.1 Motion Planning
2.1.1 Stimulus-based Planning
2.1.2 Reasoning-based Planning
2.2 Robot Cognition
2.2.1 Discussion on Theories of Mind
2.2.2 Self-Awareness
2.2.3 Empathy with the Experience of Pain
2.2.4 Robot Empathy

3 Perceptions, Artificial Pain and the Generation of Robot Empathy
3.1 Perceptions
3.1.1 Proprioception and Exteroception
3.2 Faulty Joint Setting Region and Artificial Pain
3.2.1 Proprioceptive Pain (PP)
3.2.2 Inflammatory Pain (IP)
3.2.3 Sensory Malfunction Pain (SMP)
3.3 Pain Level Assignment
3.4 Synthetic Pain Activation in Robots
3.4.1 Simplified Pain Detection (SPD)
3.4.2 Pain Matrix (PM)
3.5 Generation of Robot Empathy
3.5.1 Empathy Analysis

4 Adaptive Self-Awareness Framework for Robots
4.1 Overview of Adaptive Self-Awareness Framework for Robots
4.1.1 Consciousness Direction
4.1.2 Synthetic Pain Description
4.1.3 Robot Mind
4.1.4 Database
4.1.5 Atomic Actions
4.2 Reasoning Mechanism
4.2.1 Pattern Data Acquisition
4.2.2 Causal Reasoning

5 Integration and Implementation
5.1 Hardware Description
5.2 Experiment
5.2.1 Non-empathic Experiment
5.2.2 Empathic Experiment
5.3 Pre-defined Values

6 Results, Analysis and Discussion
6.1 Experiment Overview
6.2 Non-empathy based Experiments
6.2.1 SPD-based Model
6.2.2 Pain Matrix-based Model
6.3 Empathy-based Experiments
6.3.1 SPD Model
6.3.2 Pain Matrix Model

7 Conclusion and Future Work
7.1 Outcomes
7.1.1 Discussion Prompts
7.1.2 Framework Performance
7.1.3 Synthetic Pain Activation
7.1.4 Robot Empathy with Synthetic Pain
7.2 Future Works
7.2.1 Framework Development
7.2.2 Application Domain

References

Appendix A Terminology

Appendix B Documentation

B.1 Dimensions

B.2 Links

B.3 Joints and Motors

Appendix C Experiment Results Appendix

C.1 Non-Empathy Appendix

C.1.1 SPD-based Appendix

C.1.2 Pain Matrix-based Appendix

Acknowledgements

I would like to acknowledge and thank Professor Mary-Anne Williams for her great dedication, support and supervision. This work is part of the manifestation of the research collaboration commitment between the UTS Magic Lab and the Electrical Engineering Department, UNHAS, namely the Indonesia Australia Social Cognitive Robotics (IASCR) collaboration.

Abstract

The application and use of robots in various areas of human life have been growing since the advent of robotics, and as a result, an increasing number of collaboration tasks are taking place. During a collaboration, humans and robots typically interact through a physical medium, and it is likely that as more interactions occur, the possibility for humans to experience pain will increase. It is therefore of primary importance that robots should be capable of understanding the human concept of pain and of reacting to that understanding. However, studies reveal that the concept of human pain is strongly related to the complex structure of the human nervous system and the concept of Mind, which includes concepts of Self-Awareness and Consciousness. Thus, developing an appropriate concept of pain for robots must incorporate the concepts of Self-Awareness and Consciousness.

Our approach is firstly to acquire an appropriate concept of self-awareness as the basis for a robot framework. Secondly, it is to develop an internal capability for the framework to represent the internal state of the mechanism by inferring information captured through internal and external perceptions. Thirdly, to conceptualise an artificially created pain classification in the form of synthetic pain which mimics the human concept of pain. Fourthly, to demonstrate the implementation of synthetic pain activation on top of the robot framework, using a reasoning approach in relation to past, current and future predicted conditions. Lastly, our aim is to develop and demonstrate an empathy function as a counter action to the kinds of synthetic pain being generated.

The framework allows robots to develop "self-consciousness" by focusing attention on two primary levels of self, namely subjective and objective. Once implemented, we report the results and provide insights from novel experiments designed to measure whether a robot is capable of shifting its "self-consciousness" using information obtained from exteroceptive and proprioceptive sensory perceptions. We consider whether the framework can support reasoning skills that allow the robot to predict and generate an accurate "pain" acknowledgement, and at the same time, develop appropriate counter responses.

Our experiments are designed to evaluate synthetic pain classification, and the results show that the robot is aware of its internal state through the ability to predict its joint motion and produce appropriate artificial pain generation. The robot is also capable of alerting humans when a task will generate artificial pain, and if this fails, the robot can take preventive actions through joint stiffness adjustment. In addition, an experiment scenario also includes the projection of another robot as an object of observation into an observer robot. The main condition to be met for this scenario is that the two robots must share a similar shoulder structure. The results suggest that the observer robot is capable of reacting to any detected synthetic pain occurring in the other robot, which is captured through visual perception. We find that integrating this awareness conceptualisation into a robot architecture will enhance the robot's performance, and at the same time, develop a self-awareness capability which is highly advantageous in human-robot interaction.

Building on this implementation and proof-of-concept work, future research will extend the pain acknowledgement and responses by integrating sensor data across more than one sensor using more sophisticated sensory mechanisms. In addition, the reasoning will be developed further by utilising and comparing the performance with different learning approaches and different collaboration tasks. The evaluation concept also needs to be extended to incorporate human-centred experiments. A major possible application of the proposal to be put forward in this book is in the area of assistive care robots, particularly robots which are used for the purpose of shoulder therapy.

List of Figures

3.1 Synthetic Pain Activation PP and IP

3.2 Synthetic Pain Activation SMP

3.3 Pain Region Assignment

3.4 Pain Matrix Diagram

4.1 Adaptive Robot Self-Awareness Framework (ASAF)

4.2 Robot Awareness Region and CDV

4.3 Robot Mind Structure

4.4 Robot Mind Reasoning Process

5.1 NAO Humanoid Robot (Aldebaran, 2006)

5.2 Non Empathic Experiment

5.3 Initial Pose for Robot Experiments

5.4 Geometrical Transformation

6.1 Offline without Human Interaction Trial 1

6.2 Offline without Human Interaction Trial 2

6.3 Offline without Human Interaction Trial 3

6.4 Offline without Human Interaction Trial 4

6.5 Offline without Human Interaction Trial 5

6.6 Offline with Human Interaction Trial 1

6.7 Offline with Human Interaction Trial 2

6.8 Offline with Human Interaction Trial 3

6.9 Offline with Human Interaction Trial 4

6.10 Offline with Human Interaction Trial 5

6.11 Online without Human Interaction Trial 1

6.12 Online without Human Interaction Trial 2

6.13 Online without Human Interaction Trial 3

6.14 Online without Human Interaction Trial 4

6.15 Online without Human Interaction Trial 5

6.16 Online with Human Interaction Trial 1

6.17 Online with Human Interaction Trial 2

6.18 Online with Human Interaction Trial 3

6.19 Online with Human Interaction Trial 4

6.20 Online with Human Interaction Trial 5

6.21 Prediction Data SPD-based Model Trial 1

6.22 Prediction Data SPD-based Model Trial 2

6.23 Prediction Data SPD-based Model Trial 3

6.24 Prediction Data SPD-based Model Trial 4

6.25 Prediction Data SPD-based Model Trial 5

6.26 Observer Data

6.27 Region Mapping of Joint Data - Upward Experiment

6.28 Region Mapping of Joint Data - Downward Experiment

List of Tables

2.1 Hierarchical Model of Consciousness and Behaviour

2.2 Modalities of Somatosensory Systems (Source: Byrne and Dafny, 1997)

3.1 Artificial Pain for Robots

3.2 SPD Recommendation

3.3 Pain Matrix Functionality

4.1 Elements of the Database

5.1 Pre-Defined Values in the Database

5.2 Awareness State

5.3 Synthetic Pain Experiment

6.1 Experiment Overview

6.2 Offline Pre-Recorded without Physical Interaction Trial 1 to Trial 3

6.3 Offline Pre-Recorded without Physical Interaction Trial 4 and Trial 5

6.4 Offline Pre-Recorded with Physical Interaction Trial 1 to Trial 3

6.5 Offline Pre-Recorded with Physical Interaction Trial 4 and Trial 5

6.6 Online without Physical Interaction Trial 1 to Trial 3

6.7 Online without Physical Interaction Trial 4 and Trial 5

6.8 Online with Physical Interaction Trial 1 to Trial 3

6.9 Online with Physical Interaction Trial 4 and Trial 5

6.10 Offline without Physical Interaction - Interval Time

6.11 Prediction Error - Offline No Interaction

6.12 Interval Joint Data and Time Offline with Physical Interaction Trial 1 to Trial 3

6.13 Interval Joint Data and Time Offline with Physical Interaction Trial 4 and Trial 5

6.14 Prediction Error - Offline Physical Interaction Trial 1

6.15 Prediction Error - Offline Physical Interaction Trial 2

6.16 Prediction Error - Offline Physical Interaction Trial 3

6.17 Prediction Error - Offline Physical Interaction Trial 4

6.18 Prediction Error - Offline Physical Interaction Trial 5

6.19 Prediction Error - Online without Physical Interaction

6.20 Prediction Error - Online without Physical Interaction Trial 1

6.21 Prediction Error - Online without Physical Interaction Trial 2

6.22 Prediction Error - Online without Physical Interaction Trial 3

6.23 Prediction Error - Online without Physical Interaction Trial 4

6.24 Prediction Error - Online without Physical Interaction Trial 5

6.25 Prediction Error - Online with Physical Interaction Trial 1

6.26 Prediction Error - Online with Physical Interaction Trial 2

6.27 Prediction Error - Online with Physical Interaction Trial 3

6.28 Prediction Error - Online with Physical Interaction Trial 4

6.29 Prediction Error - Online with Physical Interaction Trial 5

6.30 State of Awareness

6.31 Internal States after Reasoning Process

6.32 Joint Data and Prediction Data SPD-based Model Trial 1

6.33 Prediction Error SPD-based Model Trial 1

6.34 SPD Initial State Trial 1

6.35 SPD Pain Activation Trial 1

6.36 Robot Mind Recommendation Trial 1

6.37 Joint Data and Prediction Data SPD-based Model Trial 2

6.38 Prediction Error SPD-based Model Trial 2

6.39 SPD Initial State Trial 2

6.40 SPD Pain Activation Trial 2

6.41 Robot Mind Recommendation Trial 2

6.42 Joint Data and Prediction Data SPD-based Model Trial 3

6.43 Prediction Error SPD-based Model Trial 3

6.44 SPD Initial State Trial 3

6.45 SPD Pain Activation Trial 3

6.46 Robot Mind Recommendation Trial 3

6.47 Joint Data and Prediction Data SPD-based Model Trial 4

6.48 Prediction Error SPD-based Model Trial 4

6.49 SPD Initial State Trial 4

6.50 SPD Pain Activation Trial 4

6.51 Robot Mind Recommendation Trial 4

6.52 Joint Data and Prediction Data SPD-based Model Trial 5

6.53 Prediction Error SPD-based Model Trial 5

6.54 SPD Initial State Trial 5

6.55 SPD Pain Activation Trial 5

6.56 Robot Mind Recommendation Trial 5

6.57 SPD Pain Activation - Average

6.58 Robot Mind Recommendations

6.59 Upward Hand Movement Direction

6.60 Downward Hand Movement Direction

6.61 Upward Hand Movement Prediction

6.62 Belief State During Non-Empathy Experiment Using Pain Matrix Model

6.63 Pain Activation During Non-Empathy Experiment Using Pain Matrix Model

6.64 Pain Matrix Output During Non-Empathy Experiment

6.65 Goals - Intentions During Non-Empathy Experiment Using Pain Matrix Model

6.66 Faulty Joint Regions

6.67 Observer Data with SPD Model in Empathy Experiments

6.68 Belief State of the Observer in SPD Model

6.69 Observer and Mediator Data During Upward Experiment

6.70 Observer and Mediator Data During Downward Experiment

6.71 SPD Recommendations - Upward Experiment

6.72 SPD Recommendations - Downward Experiment

6.73 Goals and Intentions - Upward Experiment

6.74 Goals and Intentions - Downward Experiment

6.75 Observer Data with Pain Matrix Model

6.76 Belief State During Upward Experiment

6.77 Belief State During Downward Experiment

6.78 Belief State Recommendation During Upward Experiment

6.79 Belief State Recommendation During Downward Experiment

6.80 Pain Matrix Activation with Current Data - Upward Experiment

6.81 Pain Matrix Activation with Prediction Data - Upward Experiment

6.82 Goals and Intentions of Observer During Upward Experiment

6.83 Goals and Intentions of Observer During Downward Experiment

B.1 Body Dimensions

B.2 Link and Axis Definitions

B.3 Head Definition

B.4 Arm Definition

B.5 Leg Definition

B.6 Head Joints

B.7 Left Arm Joints

B.8 Right Arm Joints

B.9 Pelvis Joints

B.10 Left Leg Joints

B.11 Right Leg Joints

B.12 Motors and Speed Ratio

B.13 Head and Arms

B.14 Hands and Legs

B.15 Camera Resolution

B.16 Camera Position

B.17 Joint Sensor and Processor

B.18 Microphone and Loudspeaker

C.1 Experiment Overview-Appendix

C.2 Offline without Human Interaction Trial 1 with Prediction Data

C.3 Offline without Human Interaction Trial 2 with Prediction Data

C.4 Offline without Human Interaction Trial 3 with Prediction Data

C.5 Offline without Human Interaction Trial 4 with Prediction Data

C.6 Offline without Human Interaction Trial 5 with Prediction Data

C.7 Offline with Human Interaction Trial 1 with Prediction Data

C.8 Offline with Human Interaction Trial 2 with Prediction Data

C.9 Offline with Human Interaction Trial 3 with Prediction Data

C.10 Offline with Human Interaction Trial 4 with Prediction Data

C.11 Offline with Human Interaction Trial 4 with Prediction Data

C.12 Online without Human Interaction Trial 1 with Prediction Data

C.13 Online without Human Interaction Trial 2 with Prediction Data

C.14 Online without Human Interaction Trial 3 with Prediction Data

C.15 Online without Human Interaction Trial 4 with Prediction Data

C.16 Online without Human Interaction Trial 5 with Prediction Data

C.17 Online with Human Interaction Trial 1 with Prediction Data

C.18 Online with Human Interaction Trial 2 with Prediction Data

C.19 Online with Human Interaction Trial 3 with Prediction Data

C.20 Online with Human Interaction Trial 4 with Prediction Data

C.21 Online with Human Interaction Trial 5 with Prediction Data

C.22 Pain Matrix Without Human Interaction Appendix

C.23 Pain Matrix Without Human Interaction Incoming Belief Appendix

C.24 Pain Matrix Without Human Interaction SPD Recommendation

C.25 Pain Matrix Without Human Interaction SPD Goals

Chapter 1 Introduction

This chapter presents an overview of the background to the study, followed by the currently identified issues in the field of human-robot interaction and related fields. The chapter then provides a brief introduction to the proposed means of addressing these issues, together with the experimental setup, followed by the analysis and outcomes of the findings. The significance and contribution of the work are given, together with a short description of future related work, followed by the overall structure of the book.

1.1 Overview of the Study Background

As the number of robot applications in various areas of human life increases, it is inevitable that more collaborative tasks will take place. During an interaction, humans and robots commonly utilise a physical medium to engage, and the more physical the interaction is, the greater the possibility that robots will cause humans to experience pain. This possibility may arise from human fatigue, robot failure, the working environment or other contingencies that may contribute to accidents. For instance, take the scenario in which robots and humans work together to lift a heavy cinder block. Humans may experience fatigue due to constraints placed on certain body muscles, and over time, this muscle constraint may extend beyond its limit. An overloaded muscle degrades in strength and in time sustains damage to internal tissue, leading to the experience of pain. Humans occasionally communicate this internal state verbally or through facial expression. It is of primary importance for robots to consider these sophisticated social cues, capture them and translate them into useful information. Robots can then provide appropriate counter-responses that will prevent humans from experiencing an increase in the severity of pain. Furthermore, robots may play a significant role in anticipating and preventing work accidents from happening.

Having the capability to acknowledge pain and develop appropriate counter-responses to the pain experienced by the human peer will improve the success of the collaboration. Failure to acknowledge this important human social cue may cause the quality of the interaction to deteriorate and negatively affect the acceptance of future robot applications in the human environment.

1.2 Current Issues

Literature studies show that there are a considerable number of works that have investigated the emergence of robot cognition and have proposed concepts of the creation of conscious robots. However, there are very few studies that acknowledge pain, and those studies only use the terminology to refer to robot hardware failure without a real conceptualisation of pain. The studies do not correlate the importance of evolving a concept of pain within the robot framework with developing reactions in response to the identified pain. At lower levels of perception, robots rely only on their proprioceptive and exteroceptive sensors, which are limited to building their external and internal representations. Not all robots have uniform sensory and body mechanisms, which consequently affects the quality of pain information retrieval and processing. In contrast, humans have a rich and complex sensory system which allows robust pain recognition and the generation of empathic responses. Studies reveal that concepts of self-awareness, pain identification and empathy with pain are strongly attached to the cognitive aspect of humans, who have vast and complex nerve mechanisms (Goubert et al., 2005; Hsu et al., 2010; Lamm et al., 2011; Steen and Haugli, 2001).

These factors present huge challenges to the notion of developing robots with social skills that can recognise human pain and develop empathic responses. Thus, it is of key importance to develop an appropriate concept of self and pain to incorporate in a robot’s framework that will allow the development of human pain recognition.

1.3 Description of Proposed Approach

There are five main objectives of this work. The first is to develop an appropriate concept of self-awareness as the basis of a robot framework. The proposed robot self-awareness framework is implemented as robot cognition, which focuses attention on the two primary levels of self, namely subjectivity and objectivity, derived from the human concept of self proposed by Lewis (1991). It should be pointed out that robot cognition in this work refers to the change in the focus of attention between these levels, and does not necessarily refer to ‘human consciousness’. The second is to develop the internal state of the mechanism over time by inferring information captured through internal and external perceptions. The construction of the internal process is based on current and future predicted states of the robot that are captured through the robot’s proprioceptive perception. When an interaction takes place, the information captured by the robot’s exteroceptive perception is also used to determine the internal state. The third is to conceptualise artificial pain for the robot through a set of synthetic pain categories, mimicking the human conceptualisation of pain. Fault detection provides the stimulus function and defines classified magnitude values which constitute the generation of artificial pain, which is recorded in a dictionary of synthetic pain. The fourth is to demonstrate the generation of synthetic pain through a reasoning process of the robot’s internal state with respect to the current and predicted robot information captured from proprioceptive perception and the aim of the overall task. The final objective is to develop an appropriate counter-response, mimicking the empathy function, to the generated synthetic pain experienced by the robot.
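To make the third and fourth objectives more concrete, the sketch below shows one possible shape for such a dictionary of synthetic pain, using the three pain classes named in Chapter 3 (PP, IP and SMP); the fault regions, magnitude levels and classification rule are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch only: a minimal "dictionary of synthetic pain" keyed by
# pain class, with hypothetical joint-angle fault regions and magnitude levels.
# The class names follow Chapter 3 (PP, IP, SMP); all numbers are invented.

SYNTHETIC_PAIN = {
    "PP": {   # Proprioceptive Pain: joint driven near its mechanical limit
        "fault_region": (1.8, 2.1),   # radians, hypothetical limit zone
        "levels": {"low": 0.3, "medium": 0.6, "high": 1.0},
    },
    "IP": {   # Inflammatory Pain: sustained load on an already-stressed joint
        "fault_region": (1.5, 1.8),
        "levels": {"low": 0.4, "medium": 0.7, "high": 1.0},
    },
    "SMP": {  # Sensory Malfunction Pain: implausible or missing sensor readings
        "fault_region": None,          # triggered by sensor inconsistency, not position
        "levels": {"low": 0.5, "high": 1.0},
    },
}

def classify_pain(joint_angle, sensor_ok=True):
    """Return (pain_class, level_name) for a single joint reading, or None."""
    if not sensor_ok:
        return "SMP", "high"
    for pain_class in ("IP", "PP"):
        region = SYNTHETIC_PAIN[pain_class]["fault_region"]
        if region and region[0] <= joint_angle <= region[1]:
            return pain_class, "medium"
    return None

print(classify_pain(1.9))                 # -> ('PP', 'medium')
print(classify_pain(0.4, sensor_ok=False))  # -> ('SMP', 'high')
```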

To briefly describe how the robot mind functions: the framework develops a planning scheme by reasoning about the correlation of the robot’s current internal states with the robot’s belief, desire and intention framework. The robot framework determines the type of synthetic pain to be generated and experienced by the robot. Whenever the pain intensity increases, the framework switches its attention to the subjective level, giving priority to the generation of empathy responses to the synthetic pain and disregarding the objective level of the task. In other words, the robot framework manifests the concept of self by actively monitoring its internal states and external world, while awareness is implemented by shifting the focus of attention to either the subjective or the objective level. At the same time, the reasoning process analyses the information captured by the robot’s perceptions with respect to the dictionary of synthetic pain embedded in the framework.
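As an illustration of this attention-shifting behaviour, the following minimal sketch shows one way the switch between the objective (task) level and the subjective (self) level could be driven by pain intensity; the threshold value and function names are assumptions made for the example, not the framework's actual interface.

```python
# Illustrative sketch of attention switching between objective and subjective
# levels of self, driven by synthetic pain intensity. All names and the
# threshold value are hypothetical.

PAIN_THRESHOLD = 0.6  # intensity above which attention shifts to the subjective level

def robot_mind_step(beliefs, task_intention, pain_intensity):
    """One reasoning cycle: choose which level of self receives attention."""
    if pain_intensity > PAIN_THRESHOLD:
        # Subjective level: suspend the task and generate an empathy-like
        # counter-response (e.g. alert the human, adjust joint stiffness).
        return {"focus": "subjective",
                "action": "generate_counter_response",
                "suspended": task_intention}
    # Objective level: continue pursuing the collaborative task.
    return {"focus": "objective", "action": task_intention}

# Example cycle: low pain keeps attention on the task, high pain shifts it.
print(robot_mind_step({"joint_ok": True}, "push_hand_upward", 0.2))
print(robot_mind_step({"joint_ok": False}, "push_hand_upward", 0.9))
```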

Embedding this ability into the robot’s mechanism will enhance the robot’s understanding of pain, which will be a useful stepping stone in developing the robot’s social skills for recognising human pain. This ability will allow robots to work robustly to understand human expressions during collaborative tasks, particularly when the interaction might lead to painful experiences. This framework will equip the robot with the ability to reconfigure its focus of attention during collaboration, while actively monitoring the condition of its internal state. At the same time, the robot will be capable of generating appropriate synthetic pain and generating associated empathic responses. These empathic responses are designed to prevent robots from suffering catastrophic hardware failure, which is equivalent to an increase in the intensity of the pain level.

1.4 Brief Description of Experiments

Two types of experiment are designed to demonstrate the performance of the robot framework. The first involves one robot and a human partner interacting with each other in a hand pushing task which produces a sequence of arm joint motion data. This type has two scenarios, namely offline and online. In the offline scenario, the experiment is carried out in two stages: the first stage is dedicated to recording the arm joint motion data, which are stored in a database; in the second stage, the data are taken from the database and fed into the robot's mind (i.e., as a simulation in the robot's mind). In the online scenario, the data are obtained directly from the hand pushing task and fed to the robot’s mind for further processing.
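The difference between the two scenarios is essentially where the joint data come from. The sketch below, with hypothetical function names and a plain SQLite file standing in for the database, contrasts the offline path (record, then replay from the database) with the online path (stream the readings directly into the robot's mind).

```python
# Illustrative sketch of the offline and online data paths. The sensor and
# reasoning functions are placeholders, not the thesis implementation.
import sqlite3

def record_offline_trial(read_joint_angle, n_samples, db_path="trials.db"):
    """Stage 1 (offline): record arm joint motion data into a database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS joints (step INTEGER, angle REAL)")
    for step in range(n_samples):
        conn.execute("INSERT INTO joints VALUES (?, ?)", (step, read_joint_angle()))
    conn.commit()
    conn.close()

def replay_offline_trial(process_in_robot_mind, db_path="trials.db"):
    """Stage 2 (offline): feed stored data into the robot's mind as a simulation."""
    conn = sqlite3.connect(db_path)
    for step, angle in conn.execute("SELECT step, angle FROM joints ORDER BY step"):
        process_in_robot_mind(angle)
    conn.close()

def run_online_trial(read_joint_angle, process_in_robot_mind, n_samples):
    """Online: stream joint data from the hand-pushing task directly to the mind."""
    for _ in range(n_samples):
        process_in_robot_mind(read_joint_angle())

if __name__ == "__main__":
    import random
    record_offline_trial(lambda: random.uniform(-2.0, 2.0), n_samples=5)
    replay_offline_trial(print)   # a real system would call the reasoning module here
```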

The second type of experiment involves two robots and a human partner. An observer robot is assigned a task to observe another robot, acting as a mediator robot, which is involved in an interaction with the human partner. There are two stages in this experiment: stage one serves as an initiation or calibration stage, and stage two is the interaction stage. The initiation stage sets the awareness region of the mind of the observer robot and the joint restriction regions for both robots that should be avoided. These joint restriction regions contain robot joint position values which correspond to the faulty joint settings. This stage is also dedicated to calibrating the camera position of the observer robot towards the right arm position of the mediator robot. A red circular shape attached to the back of the right hand of the mediator robot is used as a marker throughout the experiments. The second stage comprises two experiments, robot self-reflection and robot empathy. During the self-reflection experiment, both robots are equipped with an awareness framework, with the exception that the mediator robot does not have an activated consciousness direction function. The final experiment applies the same settings, with the addition of the activation of counter-response actions that simulate the function of the empathy response.
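For the calibration stage, the red circular marker on the mediator's hand must be located in the observer's camera image. A common way to do this is sketched below with OpenCV; this is an assumed pipeline chosen for illustration, not the vision processing actually used in the experiments.

```python
# Illustrative sketch of detecting a red circular marker in the observer's
# camera image using OpenCV (4.x signatures). An assumed pipeline, not the
# one used in the thesis experiments.
import cv2
import numpy as np

def find_red_marker(bgr_image):
    """Return (x, y, radius) of the largest red blob, or None if not found."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue bands.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return int(x), int(y), int(radius)

# Example with a synthetic frame containing a red disc.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.circle(frame, (160, 120), 20, (0, 0, 255), -1)   # BGR red
print(find_red_marker(frame))                        # approximately (160, 120, 20)
```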

1.5 Contributions and Significance

This study makes at least four contributions:

1. The conceptualisation of robot self-awareness by shifting the focus of attention between two levels of self, namely subjective and objective.
2. A dictionary of artificial robot pain containing a set of synthetic pain class categories.
3. The integration of high-level reasoning skills within the internal state framework of the robot.
4. The derivation of a novel concept of empathy responses towards synthetic pain for a robot, which is essential for engaging in collaborative tasks with humans.

The significance of the study lies chiefly in its contribution to the creation of cognitive robots and to the future coexistence of humans and robots, through:

1. Proposing a concept of robot self-awareness, by utilising a high-level reasoning-based framework.
2. Promoting the importance of self-development within the robot's internal state representation.
3. Promoting better acceptance of robots in human-friendly environments, particularly in collaborative tasks.

1.6 Future Development

Four aspects of development will be addressed in respect of the current achievements. The first is the utilisation of a wider variety of sensors, which provides more complex information for the framework to handle, together with the implementation of machine learning approaches to increase the framework's reasoning capability. The second addresses the awareness regions of the framework and other kinds of synthetic pain, which have not previously been explored. The third highlights the proof-of-concept, with a focus on human-centred experiments which serve as task performance assessment. The assessment sets a predefined scenario of human-robot interaction, and human volunteers are involved in assessing the robot's performance. The last aspect is to look into possible real-world implementation in healthcare services.

1.7 Structure of the Book

The structure of the book is as follows: Chapter 2 presents a review of the literature that forms the foundation of the work, divided into two main categories. Literature in the first category discusses motion planning for robots, which focuses on lower level planning and higher level planning. Studies in the second category deal with the metaphysical aspect of the robot, which centres on human cognition, covering the concept of mind, self-awareness, pain and empathy, and the development of the robot empathy concept.

The conceptual foundation of the proposal, which discusses the elements of perception, artificial pain and empathic response, is presented in Chapter 3. The description of perception is divided according to the origin of the sensory data, followed by the artificial pain proposal for robots. This chapter also presents how pain levels can be designed, along with the activation procedures and mathematical representation, regardless of whether a simplified method or a more complex approach is used. The concept of robot empathy generation is presented and includes details of how this approach can be implemented, and the mathematical analysis.

Chapter 4 discusses the Adaptive Self-Awareness Framework for Robots, together with several key elements of the framework. The discussion covers a wide range of aspects of each element, including the mathematical representations of retrieved perception data which are arranged into pattern data sequences.

A practical implementation as a proof of concept is highlighted in Chapter 5, which focuses on a description of the robot hardware and the experimental settings. A humanoid robot is used as the experimental platform and a human-robot interaction as the medium for assessing the technical performance of the robot system.

Chapter 6 provides the outcomes of the experiments conducted in the previous chapter, followed by analysis and discussion of the results. All data are obtained from the module in the framework which is responsible for retaining all incoming data from the sensory mechanisms, pre-recorded synthetic pain values, processed data and output of the robot mind analyses.

Chapter 7 highlights the fundamental achievements of the experiments. It also previews future work, which might include such aspects as more sophisticated data integration from different sensors and possible future implementation in assistive care robots for aiding people with disability.

Chapter 2 Robot Planning and Robot Cognition

This chapter discusses two aspects of robot development covered in the literature: robot planning, particularly motion planning, and robot cognition, and presents a thorough discussion of the cognitive element of the robot.

2.1 Motion Planning

The discussion of robot motion planning falls into two major categories, stimulus-based planning and reasoning-based planning. Stimulus-based planning concerns planning approaches that originate from the stimulus generated at the low level of robot hardware, while reasoning-based planning focuses on the higher level of data processing.

2.1.1 Stimulus-based Planning

Stimulus-based planning centres on fault detection in robot hardware, which utilises robot proprioceptive and exteroceptive sensors to detect and localise a fault when it occurs. Early studies reported in Elliott Fahlman (1974), Firby (1987) and Koditschek (1992) promote the importance of incorporating a failure recovery detection system into robot planning mechanisms. Firby (1987) proposed the very first planner for a robot, embedded in the reactive action package. The proposal does not give an adequate representation of the robot’s internal state; rather, the planner centres more on the stimuli from the robot’s environment on a reactive basis. Further study on failure recovery planning is reported in Tosunoglu (1995); however, this work proposes a planning scheme that relies only on the stimuli received from a fault-tolerant architecture, which is still a reaction-based approach. A modest advance was then proposed by Paredis and Khosla (1995). The authors developed a manipulator trajectory plan for the global detection of kinematic fault tolerance which is capable of avoiding violations of secondary kinematic requirements. The planning algorithm is designed to eliminate unfavourable joint positions. However, it is a pre-defined plan and does not include the current state of the manipulator. Ralph and Pai (1997) proposed fault tolerant motion planning utilising the least constraints approach, which measures motion performance based on given faults obtained from sensor readings. The proposal is processed when a fault is detected, and the longevity measure constructs a recovery action based on feasible configurations. Soika (1997) further examined the feasibility of sensor failure, which may impair a robot’s ability to accurately develop a world model of the environment.

In terms of multi-robot cooperation, addressing the issues mentioned above is extremely important. If the internal robot states are not monitored and are disregarded in the process of adjusting robot actions for a given task, replanning when faults occur will result in time delays. This situation will eventually raise issues which may deter robot coordination. The multi-robot cooperation scheme in Alami et al. (1998) failed to consider this problem. According to Kaminka and Tambe (1998), any failure in multi-agent cooperation will cause a complex explosion of the state space. Planning and coordination will be severely affected by countless possibilities of failure. Studies conducted in Hashimoto et al. (2001) and Jung-Min (2003) focus on the reactive level; the former authors address fault detection and identification, while the latter stresses the need for recovery action after a locked joint failure occurs. Another work reported in Hummel et al. (2006) also focuses on building robot planning on vision sensors, to develop a world model of the robot environment. Fagiolini et al. (2007), in multi-agent systems-based studies, proposed a decentralised intrusion approach to identify possible robot misbehaviour by using local information obtained from each robot, and reacted to this information by proposing a new shared cooperation protocol. The physical aspect of human-robot interaction is very important as it concerns safety procedures. A review by De Santis et al. (2008) mentions that safety is a predominant factor that should be considered in building physical human-robot interaction. Monitoring possible hardware failure is made achievable by the ability of the planning process to integrate the proprioceptive state of robots during interactions. By having updated information, robots are able to accurately configure and adjust their actions in given tasks, and at the same time, to communicate adjustment actions to their human counterparts. Hence, both parties are aware of the progress of the interaction. A study by Scheutz and Kramer (2007) proposed a robust architecture for human-robot interaction. This study signifies the importance of detecting hardware failure and immediately generating post-recovery actions. Probabilistic reasoning about robot capabilities was proposed in Jain et al. (2009). The proposal targeted the capability to anticipate possible failures and generate a set of plausible actions which would have a greater chance of success. Ehrenfeld and Butz (2012) discussed sensor management in the sensor fusion area in relation to fusion detection. Their paper focuses on detecting sensor failure that is due to hardware problems or changes within the environment. A recent study reported by Yi et al. (2012) proposes a geometric planner which focuses on detecting failure and replanning online. The planner's functionality is still reaction-based failure detection.

2.1.2 Reasoning-based Planning

Reasoning-based planning is higher level planning. In this sub-section, we discuss the internal state representation of robots and artificial intelligence planning in general.

Internal State Representation Framework

In higher level planning, robots are considered to be agents, and to represent an agent's internal state requires rationality. One of the most well-recognised approaches to representing an agent’s internal state is the Belief (B), Desire (D) and Intention (I) framework. Georgeff et al. (1999) refer to Belief as the agent’s knowledge which contains information about the world, Desire sets the goals that the agent wants to achieve, and Intention represents a set of executable actions. According to Rao and Georgeff (1991), the Belief-Desire-Intention (BDI) architecture has been developed since 1987 through the work of Bratman (1987), Bratman et al. (1988) and Georgeff and Pell (1989). The latter's paper presents the formalised theory of BDI semantics by utilising the Computation Tree Logic form proposed by Emerson and Srinivasan (1988). However, this earlier development of intelligence has received criticism, as reported in Kowalski and Sadri (1996), which quotes the argument by Brooks (1991) that an agent needs to react to the changes within that agent’s environment. Kowalski and Sadri (1996) proposed a unification approach which incorporates elements of rationality and reactivity into the agent architecture. Busetta et al. (1999) proposed JACK, an intelligent agent framework based on the BDI model, which integrates reactive behaviours such as failure management into its modular-based mechanism. Braubach et al. (2005) claimed that the available BDI platforms tend only to abstract the goal without explicit representation. The authors point out several key points that are not well addressed in BDI architecture planning, namely the explicit mapping of a goal from analysis and design to the implementation stage. The important feature of the proposal is the creation of a context which determines whether a goal action is to be adopted or suspended. In the same year, Padgham and Lambrix (2005) formalised the BDI framework with the ability to influence the intentions element of the agent. This extension of the BDI theoretical framework has been implemented in the updated version of the JACK framework. Another development platform, named JASON, presented in Bordini and Hübner (2006), utilises an extended version of an agent-oriented logic programming language inspired by the BDI architecture. The paper provides an overview of several features of JASON, one of which is failure handling. However, it does not involve the semantics implementation of a failure recovery system. Still within the same BDI agent framework, Sudeikat et al. (2007) highlighted the validation criterion for BDI-based agents and proposed an evaluation mechanism for asserting the internal action of an agent and the communication of events between the involved agents. The assertion of the internal action of an agent relies only on agent performance. Gottifredi et al. (2008, 2010) reported an implementation of BDI architecture on a robot soccer platform. The authors addressed the importance of a failure recovery capability integrated into their BDI-based high-level mobile robot control system to tackle adverse situations. Error recovery planning was further investigated by Zhao and Son (2008), who proposed an extended BDI framework. This framework was developed to mitigate improper corrective actions proposed by humans as a result of inconsistency in human cognitive functions when increased automation introduces complexity into tracking activity.
An intelligent agent should have learning capabilities, and this is not addressed in the BDI paradigm. Singh et al. (2010) conducted what is regarded as the earliest study to address this issue, introducing decision tree-based learning into the BDI framework. This proposal targeted plan selection, which is influenced by the success probability of executed experiences. Any failure is recorded and used to shape the confidence level of the agent within its plan selection. A further study in Singh et al. (2011) integrates dynamic aspects of the environment into the plan-selection learning of a BDI agent. The study demonstrates the implementation of the proposed dynamic confidence measure in plan-selection learning on an embedded battery system control mechanism which monitors changes in battery performance. A recent study carried out by Thangarajah et al. (2011) focuses on the behaviour analysis of the BDI-based framework. This analysis considers the execution, suspension and abortion of goal behaviour, which had been addressed in the earlier study reported in Braubach et al. (2005). Cossentino et al. (2012) developed a notation which covers the whole process cycle from analysis to implementation by utilising the Jason interpreter for agent model development. The proposed notation does not address issues of failure recovery; rather, it focuses on the meta-level of agent modelling.
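To make the BDI terminology concrete, a minimal deliberation loop in the BDI style might look like the sketch below; it is a generic illustration of the Belief-Desire-Intention cycle, not the JACK or JASON APIs, and all names in it are invented.

```python
# Minimal, generic BDI-style deliberation loop (illustrative only; not the
# JACK or JASON platform APIs).

def bdi_step(beliefs, desires, plan_library, percept):
    """One Belief-Desire-Intention cycle: revise beliefs, pick a goal, select a plan."""
    beliefs.update(percept)                      # belief revision from new percepts
    achievable = [d for d in desires if d not in beliefs.get("achieved", set())]
    if not achievable:
        return beliefs, None                     # nothing left to intend
    goal = achievable[0]                         # simplistic option selection
    # Intention = first applicable plan whose context condition holds.
    for context, actions in plan_library.get(goal, []):
        if context(beliefs):
            return beliefs, actions
    return beliefs, None                         # no applicable plan (plan failure)

# Tiny example: one desire, one plan guarded by a context condition.
beliefs = {"battery": 0.9, "achieved": set()}
desires = ["deliver_block"]
plans = {"deliver_block": [(lambda b: b["battery"] > 0.2,
                            ["pick_up_block", "walk_to_target", "put_down_block"])]}
beliefs, intention = bdi_step(beliefs, desires, plans, {"human_present": True})
print(intention)  # -> the three-action plan, adopted as the current intention
```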

Artificial Intelligence (AI) Planning

According to McDermott (1992), robot planning consists of three major elements, namely automatic robot plan generation, the debugging process and planning optimisation. The author points out that constraints play an important role by actively acting as violation monitoring agents during execution. Planning transformation and learning are also crucial elements to include in robot planning. Two of the earliest studies conducted on AI-based task planning, which have become the best-known methods, are reported in Fikes and Nilsson (1972) and Erol et al. (1994). Fikes and Nilsson (1972) proposed the Stanford Research Institute Problem Solver (STRIPS), and the study reported in Erol et al. (1994) classifies several different works as the Hierarchical Task Network (HTN) approach, which is decomposition-based. STRIPS develops its plan linearly with respect to the distance of the current world model from the target. The drawback of this method is that state space explosions occur as more complicated tasks are involved, which is counter-productive. Sacerdoti (1975) argued that regardless of the linearity of execution, the plan itself by nature has a non-linear aspect. The author instead proposed the Nets of Action Hierarchies (NOAH), which is categorised within the family of HTN-based approaches. The development of a plan in NOAH keeps repeating in the simulation phase in order to generate a more detailed plan, and is followed by a criticising or reassessment phase through processes of reordering or eliminating redundant operations. This work is an advancement of the work on the HACKER model, developed by Sussman (1973), which replaces destructive criticism with constructive criticism to remove the constraints on plan development. Another comparison made by Erol et al. (1996) points out that STRIPS-based planners maximise the search of action sequences to produce a world state that satisfies the required conditions. As a result, actions are considered as a set of state transition mappings. HTN planners, in contrast, consider actions as primitive tasks and optimise the network task through task decomposition and conflict resolution. The HTN-style planner NONLIN introduced by Tate (1977) incorporates a task formalism that allows descriptive details to be added during node linking and expansion. In contrast to NOAH, the NONLIN planner has the ability to perform backtracking operations.
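For readers unfamiliar with the STRIPS representation, the sketch below shows the usual encoding of actions as precondition, add and delete lists over a set of propositions, with a brute-force breadth-first search standing in for the original planner's strategy; the tiny domain is invented for the example.

```python
# Illustrative STRIPS-style representation: states are frozensets of propositions,
# actions have precondition / add / delete lists. The breadth-first search is a
# stand-in for a real planner; the tiny domain is invented for the example.
from collections import deque

ACTIONS = {
    "pick_up":      {"pre": {"hand_empty", "block_on_table"},
                     "add": {"holding_block"},
                     "del": {"hand_empty", "block_on_table"}},
    "put_on_shelf": {"pre": {"holding_block"},
                     "add": {"block_on_shelf", "hand_empty"},
                     "del": {"holding_block"}},
}

def plan(initial, goal):
    """Breadth-first forward search over STRIPS states; returns an action list."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, a in ACTIONS.items():
            if a["pre"] <= state:
                nxt = frozenset((state - a["del"]) | a["add"])
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

print(plan({"hand_empty", "block_on_table"}, {"block_on_shelf"}))
# -> ['pick_up', 'put_on_shelf']
```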

Recent advances in AI planning have been directed towards the utilisation of propositional methods (Weld, 1999), which generalise classical AI planning into three descriptions:

1. Descriptions of initial states
2. Descriptions of goals
3. Descriptions of possible available actions - domain theory

One major AI planning achievement was the proposal by Blum and Furst (1997) of the two-phase GRAPHPLAN planning algorithm, a planning method for STRIPS-like domains. GRAPHPLAN approaches a planning problem by alternating graph expansion and solution extraction. During solution extraction, it performs a backtracking search on the graph until it finds a solution to the problem; otherwise, the cycle of expanding the existing graph is repeated. An extension to this planner, IPP, was proposed by Koehler et al. (1997), with three main features which differ from the original GRAPHPLAN approach.

1. The input takes the form of a pair of sets;
2. The selection procedure for actions takes into consideration that an action can obtain the same goal atom even under different effect conditions;
3. The resolution of conflicts occurs as a result of conditional effects.

In a similar STRIPS-based domain, Long and Fox (1999) developed a GRAPHPLAN-style planner, STAN, which performs a number of preprocessing analyses on the domain before executing the planning process. The approach firstly observes the pre- and post-conditions of actions and represents those actions in bit-vector form. Logical operators are applied to these bit vectors in order to check mutual exclusion between pairs of actions which directly interact. Similarly, mutual exclusion (mutex relations) is implemented between facts. A two-layer graph construction (the spike) is used to best exploit the bit vectors, which is useful for avoiding unnecessary copying of data and for allowing a clear separation of layer-dependent information about a node. The spike construction allows mutex relations to be recorded for efficient mutex testing in indirect interactions. Secondly, there is no advantage in explicitly constructing the graph beyond the stage at which the fixed point is reached. Overall, the plan graph maintains a wave front which keeps track of all of the goal sets remaining to be considered during the search.
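The bit-vector idea can be illustrated in a few lines: if each fact is assigned a bit position, then a direct interaction between two actions (one deleting what the other needs or adds) reduces to bitwise tests. The sketch below uses invented facts and actions and is only meant to show the mechanics, not STAN's implementation.

```python
# Illustrative sketch of bit-vector encoding of action conditions, in the spirit
# of STAN's preprocessing: each fact gets one bit, and direct interference
# between two actions reduces to bitwise tests. Facts and actions are invented.

FACTS = ["arm_free", "holding", "at_shelf"]
BIT = {f: 1 << i for i, f in enumerate(FACTS)}

def to_bits(facts):
    vec = 0
    for f in facts:
        vec |= BIT[f]
    return vec

class Action:
    def __init__(self, pre, add, dele):
        self.pre, self.add, self.dele = to_bits(pre), to_bits(add), to_bits(dele)

def interferes(a, b):
    """True if one action deletes a precondition or an add effect of the other."""
    return bool(a.dele & (b.pre | b.add)) or bool(b.dele & (a.pre | a.add))

grasp = Action(pre=["arm_free"], add=["holding"], dele=["arm_free"])
release = Action(pre=["holding"], add=["arm_free"], dele=["holding"])
print(interferes(grasp, release))  # True: each deletes the other's precondition
```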

A study reported in Kautz and Selman (1992) proposes a SAT-based planner (SATPLAN), which treats planning as satisfiability. The planner was further developed into the BLACKBOX planner, which is a unification of SATPLAN and GRAPHPLAN (Kautz and Selman, 1999). The BLACKBOX planner solves a planning problem by translating the plan graph into SAT and applying a general SAT solver to boost performance. A report in Silva et al. (2000) further develops the GRAPHPLAN style by translating the plan graph obtained in the first phase of Graphplan into an acyclic Petri net. Kautz and Selman (2006) later developed the SATPLAN04 planner, which shares a unified framework with the old version of SATPLAN. SATPLAN04 requires several stages when solving planning problems, which can be described as follows (a toy illustration of the clause-generation step is sketched after the list):

- Generating a planning graph in GRAPHPLAN style;
- Generating a set of clauses which are derived from constraints implied by the graph, where each specific instance of an action or fact at a point in time is a proposition;
- Finding a satisfying truth assignment for the formula by utilising a general SAT solver;
- Extending the graph if there is no satisfactory solution or a time-out is reached; otherwise, translating the solution of the SAT problem into a solution to the original planning problem;
- Post-processing to remove unnecessary actions.
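As a toy illustration of the clause-generation step, each "action or fact at time t" can be mapped to a propositional variable and the implications emitted as clauses; the sketch below encodes a single implication (an action implies its precondition at the same step) and checks it with a brute-force truth-assignment search in place of a real SAT solver.

```python
# Illustrative sketch: propositions are (name, time) pairs mapped to integer
# variables, clauses are lists of signed integers (DIMACS-style), and a
# brute-force search stands in for a general SAT solver. A toy example only,
# not the SATPLAN04 encoding itself.
from itertools import product

var_ids = {}
def var(name, t):
    """Map a fact/action instance at time t to a propositional variable id."""
    return var_ids.setdefault((name, t), len(var_ids) + 1)

# Clause: action pick_up at t=0 implies its precondition hand_empty at t=0,
# i.e. (not pick_up@0) OR (hand_empty@0); plus a unit clause choosing the action.
clauses = [[-var("pick_up", 0), var("hand_empty", 0)],
           [var("pick_up", 0)]]

def satisfiable(clauses, n_vars):
    """Try every truth assignment; return a model if one satisfies all clauses."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

print(satisfiable(clauses, len(var_ids)))
# -> an assignment with both pick_up@0 and hand_empty@0 set to True
```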

Another planner, HSP, developed by Bonet and Geffner (1999, 2001), is built on the ideas of heuristic search. Vidal (2004) proposes a lookahead strategy for extracting information from the generated plan in the heuristic search domain. A later study by Vidal and Geffner (2006) further develops a branching and pruning method to optimise the heuristic search planning approach. The method allows reasoning about supports, precedences and causal links involving actions that are not in the plan. The same author later proposed an approach to automated planning which utilises a Fast Downward approach as the base planner in exploring a plan tree. This approach estimates which propositions are more likely to be obtained together with some solution plans and uses that estimation as a bias, to sample more relevant intermediate states. A message passing algorithm is applied on the planning graph with landmark support in order to compute the bias (Vidal, 2011).
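The heuristic-search idea can be sketched by estimating goal distance on the delete-relaxed problem (delete lists ignored) and using that estimate to guide the search; the code below computes such a relaxed-reachability estimate for an invented toy domain, purely as an illustration and not as the HSP heuristic itself.

```python
# Illustrative delete-relaxation estimate in the spirit of heuristic-search
# planning: count the forward-chaining layers needed to reach every goal fact
# when delete lists are ignored. The tiny domain is invented.

ACTIONS = [
    {"pre": {"at_start"}, "add": {"at_door"}},
    {"pre": {"at_door"},  "add": {"at_shelf"}},
    {"pre": {"at_shelf"}, "add": {"block_on_shelf"}},
]

def h_relaxed(state, goal):
    """Number of relaxed forward-chaining layers needed to reach the goal facts."""
    reached, cost = set(state), 0
    while not goal <= reached:
        new = {f for a in ACTIONS if a["pre"] <= reached for f in a["add"]}
        if new <= reached:
            return float("inf")      # goal unreachable even in the relaxed problem
        reached |= new
        cost += 1
    return cost

print(h_relaxed({"at_start"}, {"block_on_shelf"}))  # -> 3
```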

A different approach proposed in the AI planning domain utilises heuristic pattern databases (PDBs), for example the studies reported in Edelkamp (2000, 2002, 2014). Sievers et al. (2010) further assess that PDBs lack an efficient implementation, as the construction time must be amortised within a single planner run, and each planning task requires separate evaluation according to its own state space, set of actions and goal. Hence, it is impossible to perform the computation once and reuse it for multiple inputs. The authors propose an efficient way to implement pattern database heuristics by utilising the Fast Downward planner (Helmert, 2006).

2.2 Robot Cognition

Studies by Franklin and Graesser (1997) and Barandiaran et al. (2009) point out that robots are real world agents, and consequently, the terms ‘robot’ and ‘agent’ are used interchangeably throughout this book.

Discussions on robot cognition can be traced back to the early development of human mind and consciousness theories. A study by Shear (1995) suggests that there is a direct correspondence between consciousness and awareness. We elaborate on these notions of consciousness and awareness in the following subsections.

2.2.1 Discussion on Theories of Mind

The mind is a collection of concepts that cover aspects of cognition which may or may not refer to an existing single entity or substance (Haikonen, 2012). In other words, the discussion of mind is restricted to perceptions, thoughts, feelings and memories within the framework of self. A large number of studies have addressed this field, and there are several important theories, described as follows:

- Traditional Approach

A number of theoretical approaches have been identified throughout the history of human mind studies; their key points are described below.

- Cartesian Dualism

This theory, proposed by Rene Descartes, is based on the work of the Greek philosopher Plato (Descartes and Olscamp, 2001). The theory divides existence into two distinct worlds: the body, which is a material world, and the soul, which is an immaterial world. Descartes claimed that the body as a material machine follows the laws of physics, while the mind as an immaterial thing connected to the brain does not follow physical law. However, they interact with each other; the mind is capable of controlling the body but at the same time, the body may influence the mind.

- Property Dualism

This theory counters the Cartesian Dualism theory by suggesting that the world consists of only one physical material but that it has two different kinds of properties, physical and mental. Mental properties may emerge from physical properties, and can change whenever a change occurs in the physical properties, but mental properties may not be present all the time (Haikonen, 2012).

- Identity Theory

This theory is based on the concept of human nerve mechanisms, which contain the various actions of nerve cells and their connections, which structure the neural processes of the brain. Crick (1994) concluded that the human mind is the result of the behaviour of human nerve cells.

- Modern Studies

Currently, studies of the mind focus on the neural pathways inside the human brain. A vast assembly of neurons, synapses and glial cells in the brain allows subjective experiences to take place (Haikonen, 2012, p.12). Studies on the nerve cells have led to neural network and mirror neuron investigations, and these studies have made a large contribution to the concept of human mind and consciousness.

Consciousness

Since the early studies of consciousness, there has been no unanimous and uniform definition of consciousness. This book highlights a few important studies related to consciousness and robot cognition.

According to Gamez (2008), various terms are used to refer to the studies on consciousness theories using computer models to create intelligent machines, and the term ‘machine consciousness’ is typically the standardised terminology used in this field. According to Chalmers (1995), the consciousness problem can be divided into easy problems and hard problems. The easy problems assume that this consciousness phenomenon is directly susceptible to standardised explanation methods, which focus on computational or neural-based mechanisms (a functional explanation). In contrast, hard problems are related to experience, and appear to oppose the approaches used in the easy problems to explain consciousness. The author lists the phenomena associated with the consciousness notion as follows:

- Ability to discriminate, categorise and react to external stimuli
- Information integration by a cognitive system
- Reportability of mental states
- Ability to access one’s own internal state
- Focus of attention
- Deliberate control of behaviour
- Differentiation between wakefulness and sleep

Several studies have attempted to derive machine consciousness by capturing the phenomenal aspects of consciousness. Husserlian phenomenology refers to consciousness giving meaning to an object through feedback processes (Kitamura et al., 2000, p.265). Any system to be considered conscious should be assessed against nine features of consciousness functions, and Kitamura et al. (2000) further developed these nine characteristics from a technical viewpoint, as listed below:

1. First person preference: self-preference
2. Feedback process: shift attention until the essence of the object and its connection are obtained
3. Intentionality: directing self towards an object
4. Anticipation: a reference is derived for which objective meaning is to be discarded, and it becomes a belief with the property of an abstract object whenever the anticipation is unsatisfied.
5. Embodiment: related to the consciousness of events, which are the inhibition of perception and body action
6. Certainty: the degree of certainty in each feedback process of understanding
7. Consciousness of others: the belief that others have similar beliefs to our own
8. Emotion: qualia of consciousness which relies on elements of perception and corporeality
9. Chaotic performance: an unbalanced situation resulting from randomly generated mental events, which perturb the feedback process and intentionality.

Based on these features, Kitamura (1998) and Kitamura et al. (2000) proposed the Consciousness-based Architecture (CBA), which is a software architecture with an evolutionary hierarchy to map animal-like behaviours to symbolic behaviours. These symbolic behaviours are a reduced model of the mind-behaviour relationship of the human. The architecture deploys a five-layer hierarchy principle, which corresponds to the relationship between consciousness and behaviour. The foundation of the work is built on the principle of the conceptual hierarchical model proposed by Tran (1951, cited in Kitamura, 1998, pp.291-292), which is shown in Table 2.1.

Table 2.1 Hierarchical Model of Consciousness and Behaviour

[Table content not included in this excerpt]

In a similar approach, Takeno (2012) proposed a new architecture which originated from Husserlian phenomenology and Minsky’s idea, which postulates that there are higher-level areas, constituting newly evolved areas, which supervise the functionality of the old areas.

In this new architecture, the conceptualisation of robot consciousness is achieved through a model-based computation that utilises a complex structure of artificial neural networks, named MoNAD. However, this model only conceptualises the functional consciousness category, and studies have shown that understanding consciousness also involves the explanation of feeling, which is known as qualia. Qualia is a physical subjective experience and, since it is a cognitive ability, it can only be investigated through indirect observation (Haikonen, 2012, p.17).

Gamez (2008) divided studies on machine consciousness into four major categories:

1. External behaviour of machines that are associated with consciousness
2. Cognitive characteristics of machines that are associated with consciousness
3. An architecture of machines that is considered to be associated with human conscious­ness
4. Phenomenal experience of machines which are conscious by themselves

External behaviour, cognitive characteristics and machine architecture, associated with consciousness, are areas about which there is no controversy. Phenomenally conscious machines, on the other hand, that have real phenomenal experiences, have been philosophically problematic. However, Reggia (2013) points out that computational modelling has been scientifically well accepted in consciousness studies involving cognitive science and neuroscience. Furthermore, computer modelling has successfully captured several conscious forms of information processing in the form of machine simulations, such as neurobiological, cognitive and behavioural information.

2.2.2 Self-Awareness

In broad terminology, self-awareness can be defined as the state of being alert and knowledgeable about one's personality, including characteristics, feelings and desires (Dictionary.com Online Dictionary, 2015; Merriam-Webster Online Dictionary, 2015; Oxford Online Dictionary, 2015). In the field of developmental study, a report by Lewis (1991) postulates that there are two primary elements of self-awareness: subjective self-awareness, i.e. concerning the machinery of the body, and objective self-awareness, i.e. concerning the focus of attention on one’s own self, thoughts, actions and feelings.

In order to be aware, particularly at the body level, sensory perception plays an important role in determining the state of self. This perception involves two different kinds of sensory mechanisms: proprioceptive sensors, which function to monitor the internal state, and exteroceptive sensors, which are used to sense the outside environment. Numerous studies on this sensory perception level have been carried out, and the earliest paper (Siegel, 2001) discusses the dimension aspect of the sensors to be incorporated into the robot. The author states that proprioception allows the robot to sense its personal configuration associated with the surrounding environment. Scassellati (2002) further correlates self-awareness with a framework of beliefs, goals and percepts attributes which refer to a mind theory. Within a goal-directed framework, this mind theory enables a person to understand the actions and expressions of others. The study implements animate and inanimate motion models together with gaze direction identification. A study conducted by Michel et al. (2004) reports the implementation of self-recognition onto a robot mechanism named NICO. The authors present a self-recognition mechanism through a visual field that utilises a learning approach to identify the characteristic time delay inherent in the action-perception loop. The learning observes the robot arm motion through visual detection within a designated time marked by timestamps. Two timestamp markings are initiated; one at the state when movement commands are sent to the arm motors, and one at the state in which no motion is detected. Within the same robot platform and research topic, a study was carried out by Gold and Scassellati (2009) which utilises a Bayesian network-based probabilistic approach. The approach compares three models of every object that exists in the visual field of the robot. It then determines whether the object is the robot itself (self model), another object (animate model), or something else (inanimate model) which is possibly caused by sensor noise or a falling object. The likelihood calculation involves the given evidence for each of these objects and models. Within the same stochastic optimisation-based approach, a study conducted by Bongard et al. (2006) proposed a continuous monitoring system to generate the current self-modelling of the robot. The system is capable of generating compensatory behaviours for any morphological alterations due to the impact of damage, the introduction of new tools or environmental changes. On a lesser conceptual level, a study presented in Jaerock and Yoonsuck (2008) proposed prediction of the dynamic internal state of an agent through neuron activities. Each neuron prediction process is handled by a supervised learning predictor that utilises previous activation values for quantification purposes. Novianto and Williams (2009) proposed a robot architecture which focuses on attention as an important aspect of robot self-awareness. The study proposes an architecture in which all requests compete and the winning request takes control of the robot's attention for further processing. Further research was conducted in Zagal and Lipson (2009), who proposed an approach which minimises physical exploration to achieve resilient adaptation. The minimisation of physical exploration is obtained by implementing a self-reflection method that consists of an innate controller for lower level control and a meta-controller, which governs the innate controller’s activities.
(2010) proposed fault detection based on the self-awareness model. The authors focused on is the internal exchange of the system and the inter-correlative communication between inherent dynamics detected through anomalies generated as a result of environmental changes caused by system failures. At a meta-cognitive level, Birlo and Tapus (2011) presented their preliminary study which reflects a robot’s awareness of object preference based on its available information in the context of human and robot interaction. Their meta-concept regenerates the robot’s attention behaviour based on the robot’s reflection of what the human counterpart is referring to during collaboration. The implementation of self-awareness in other areas, such as health services, has been highlighted in Marier et al. (2013), who proposed an additional method to their earlier study which adapts coverage to variable sensor health by adjusting the cells online. The objective is to achieve equal cost across all cells by adding an algorithm that detects the active state of the vehicless as the mission unfolds. Agha-Mohammad et al. (2014) also proposed a framework that has a health-aware planning capability. The framework is capable of minimising the computational cost of the online forward search by decreasing the dimension of the belief subset of the potential solution that requires an online forward search.
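
To illustrate the model-comparison idea underlying the approach of Gold and Scassellati (2009), the following minimal Python sketch scores a single observation of an object under three candidate models and selects the most likely one. The likelihood values, evidence categories and function names are hypothetical placeholders for exposition only, not the authors' implementation.

    # Illustrative sketch (not the cited authors' code) of scoring one visual
    # object under three candidate models and keeping the most likely one.
    # All probability values are hypothetical placeholders.

    # P(evidence | model): the evidence is whether the object's observed motion
    # coincided with the robot's own recent motor command.
    LIKELIHOODS = {
        "self":      {"moved_with_command": 0.85, "moved_without_command": 0.05, "did_not_move": 0.10},
        "animate":   {"moved_with_command": 0.30, "moved_without_command": 0.50, "did_not_move": 0.20},
        "inanimate": {"moved_with_command": 0.05, "moved_without_command": 0.05, "did_not_move": 0.90},
    }

    def classify_object(moved: bool, command_active: bool) -> str:
        """Return the model ('self', 'animate' or 'inanimate') that best
        explains a single observation of one object in the visual field."""
        if moved:
            evidence = "moved_with_command" if command_active else "moved_without_command"
        else:
            evidence = "did_not_move"
        # Choose the model with the highest likelihood for this evidence.
        return max(LIKELIHOODS, key=lambda model: LIKELIHOODS[model][evidence])

    if __name__ == "__main__":
        # An object that moves whenever a motor command is active is most
        # plausibly the robot itself under these placeholder likelihoods.
        print(classify_object(moved=True, command_active=True))   # -> 'self'
        print(classify_object(moved=True, command_active=False))  # -> 'animate'

In practice such evidence would be accumulated over many observations, but the single-step comparison above already captures the self/animate/inanimate discrimination described in the text.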

Much of the literature also identifies the lack of a concept of ‘self’. This thesis proposes a self-awareness framework for robots based on the concept of self-awareness proposed by Lewis (1991). The author postulates that within self-awareness, the concept of self is divided into two levels: subjective awareness and objective awareness. The author shows that human adults have the ability to function at both levels under certain conditions, and that they utilise one level of self-awareness at a time. It can be inferred, however, that these two primary levels of self-awareness coexist, and that human adults utilise them by switching the focus of attention between them. The change of direction in robot awareness mimics this principle of attention, which corresponds to processes of mental selection. During switching, the attention process occurs as a sequence of three phases: the engagement phase, the sustainment phase and the disengagement phase (Haikonen, 2012). Haikonen (2012) also mentions two types of attention: inner attention and sensory attention. Sensory attention refers specifically to a sensor mechanism designated to monitor a specific part of the body, such as joint attention or visual attention. We utilise this insight, particularly the ability to switch between the two levels via attention phases; through this switching, the proposed framework changes the robot's awareness from subjective to objective, and vice versa. In this framework, we refer to the physical parts of the robot, such as motors and joints (joint attention), as the subjective element, and to the metaphysical aspects of the robot, such as its representation of its position in relation to an external object or its success in task performance (inner attention), as the objective elements.
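
The switching behaviour described above can be pictured as a small state machine. The sketch below is illustrative only, with assumed class and method names; it shows one way the engagement, sustainment and disengagement phases could gate a change between subjective and objective awareness, and is not the framework defined later in this thesis.

    # A minimal sketch, under assumed names, of attention-phase driven switching
    # between subjective (body-level) and objective (task-level) awareness.
    from enum import Enum

    class Level(Enum):
        SUBJECTIVE = "subjective"   # joints, motors (sensory/joint attention)
        OBJECTIVE = "objective"     # task state, external objects (inner attention)

    class Phase(Enum):
        ENGAGE = 1
        SUSTAIN = 2
        DISENGAGE = 3

    class AttentionSwitcher:
        def __init__(self) -> None:
            self.level = Level.SUBJECTIVE
            self.phase = Phase.ENGAGE

        def step(self, switch_requested: bool) -> None:
            """Advance one attention cycle; a switch request is honoured only
            after the current focus has been disengaged."""
            if self.phase is Phase.ENGAGE:
                self.phase = Phase.SUSTAIN
            elif self.phase is Phase.SUSTAIN and switch_requested:
                self.phase = Phase.DISENGAGE
            elif self.phase is Phase.DISENGAGE:
                self.level = (Level.OBJECTIVE
                              if self.level is Level.SUBJECTIVE else Level.SUBJECTIVE)
                self.phase = Phase.ENGAGE

The point of the sketch is the ordering constraint: attention must pass through disengagement before the level of awareness can change, mirroring the three-phase sequence noted above.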

2.2.3 Empathy with the Experience of Pain

This subsection comprehensively reviews literature studies on pain, the correlation of pain with self-awareness, the concept of empathy with pain and the evolving concept of robot empathy.

Pain

Various definitions of pain have appeared throughout human history, such as the belief in early civilisations that pain is a penalty for sin and, in the second century CE, Galen's correlation of pain with the four humours (Finger, 1994). In the eleventh century, Avicenna postulated that pain or pleasure is generated by a sudden change in stimulus (Tashani and Johnson, 2010). In modern times, concepts of pain are framed within the theory of functional neuroanatomy and the notion that pain is a somatic sensation transmitted through neural pathways (Perl, 2007). The culmination of the enormous body of work that has explored the concept of pain is the following definition of pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage" (The International Association for the Study of Pain, IASP 1986, cited in Merskey and Bogduk, 1994).

Pain plays a pivotal role in the lives of humans, serving as an early sensory-based detection system and also facilitating the healing of injuries (Chen, 2011). In general, there are four theories of pain perception that have been most influential throughout history, reported in Moayedi and Davis (2013):

1. Specificity Pain Theory. This theory acknowledges that each somatosensory modality has its own dedicated pathway. Somatosensory systems are part of human sensory systems that provide information about objects that exist in the external environment through physical contact with the skin. They also identify the position and motion of body parts through the stimulation of muscles and joints, and at the same time, monitor body temperature (Byrne and Dafny, 1997). Details of the modalities are shown in Table 2.2.
2. Intensity Pain Theory. This theory develops the notion that pain results from the detection of intensely applied stimuli, and occurs when an intensity threshold is reached. Woolf and Ma (2007) proposed a framework for the specificity theory of pain and postulated that noxious stimuli are detected by sensory receptors known as nociceptors. When the intensity of the nociceptive information exceeds the inhibition threshold, the gate switches to open, allowing the activation of pain pathways and leading to the generation of the pain experience and associated response behaviours. Studies related to noxious stimuli and nociceptors are presented in Cervero and Merskey (1996) and Moseley and Arntz (2007).

Table 2.2 Modalities of Somatosensory Systems (Source: Byrne and Dafny, 1997)

[Table not included in this excerpt]

3. Pattern Pain Theory. This theory postulates that somaesthetic sensation takes place as the result of spatial and temporal firing patterns of the peripheral nerves, which encode stimulus type and intensity. Garcia-Larrea and Peyron (2013) provided a review of pain matrices which asserts that painful stimuli activate multiple structures in the brain.
4. Gate Control Pain Theory. This theory, proposed by Melzack and Wall (1996), postulates that whenever stimulation is applied to the skin, it generates signals that are transmitted through a gate controlled by the activity of large and small fibres.

It can be seen that humans possess a complex structure of interconnected networks within the nervous system, which permits a number of robust pain mechanisms, from detection, signal activation and transmission to the inhibition of behaviours. However, as Haikonen (2012) points out, artificial pain can be generated in a machine without involving any real feeling of pain. In other words, artificial pain can be developed by realising the functional aspects of pain, that is, by focusing in a technical and practical way on how pain works and operates.
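
As a concrete illustration of this functional view, the following sketch flags artificial pain when a monitored joint quantity exceeds a tolerance threshold, with no claim of subjective feeling. The threshold values, sensor quantities and function names are assumptions made for the example, not the mechanism defined later in this thesis.

    # A minimal sketch of 'functional' artificial pain: pain is flagged when a
    # monitored joint quantity exceeds a tolerance threshold. Thresholds and
    # sensor names are hypothetical.

    JOINT_TEMP_LIMIT_C = 75.0      # assumed motor temperature tolerance
    JOINT_ERROR_LIMIT_RAD = 0.15   # assumed commanded-vs-sensed position tolerance

    def artificial_pain(joint_temp_c: float,
                        commanded_rad: float,
                        sensed_rad: float) -> float:
        """Return a pain level in [0, 1]: 0 below tolerance, rising with the
        worst threshold violation once a limit is exceeded."""
        temp_excess = max(0.0, joint_temp_c - JOINT_TEMP_LIMIT_C) / JOINT_TEMP_LIMIT_C
        position_error = abs(commanded_rad - sensed_rad)
        error_excess = max(0.0, position_error - JOINT_ERROR_LIMIT_RAD) / JOINT_ERROR_LIMIT_RAD
        return min(1.0, max(temp_excess, error_excess))

    if __name__ == "__main__":
        # An overheated motor whose joint lags far behind its command yields
        # the maximum pain level under these placeholder thresholds.
        print(artificial_pain(joint_temp_c=80.0, commanded_rad=1.0, sensed_rad=0.7))

The design choice mirrors the intensity-theory idea reviewed above: nothing is signalled below the threshold, and the signal grows with the severity of the violation once the threshold is crossed.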

Pain and Self-Awareness Association in Human and Robot

Evolving pain mechanisms as an integrated element of awareness within a robot is a topic that has barely been addressed. One key reason is that self-awareness is a new area of research in human health, so few insights have been translated into the robot realm. A small number of papers have correlated pain with the self-awareness concept in robots and humans. The earliest study, conducted by Steen and Haugli (2001), investigates the correlation between musculoskeletal pain and increased self-awareness in people. The study suggests that awareness of the internal relationship between body, mind and emotions enables a person to understand and respond to neurological messages generated by the perception of musculoskeletal pain. A different study, carried out by Hsu et al. (2010), investigates the correlation between self-awareness and pain, and proposes that the development of affective self-awareness has a strong association with the severity level of pain. The study utilises a self-reporting assessment mechanism in which reports were collected from people who suffer from fibromyalgia1. Steen and Haugli (2001) used pain acknowledgement to generate self-awareness, while Hsu et al. (2010) focused on the opposite phenomenon, namely, the measurement of affective self-awareness to accurately acknowledge pain. A recent study on self-awareness in robotics in relation to pain is reported in Koos et al. (2013); this study uses the concept of pain to develop a fast recovery approach from physical robot damage, building on earlier studies including those of Bongard et al. (2006) and Jain et al. (2009). The study by Koos et al. (2013) is extended in Ackerman (2013) to produce a recovery model which does not require any information about hardware faults or malfunctioning parts. This approach, however, disregards the importance of acquiring self-awareness when detecting pain that results from faults in the robot's joints.

Empathy

The term empathy was introduced by the psychologist Edward Titchener in 1909 as a translation of the German word Einfühlung (Stueber, 2014). Notwithstanding the extensive studies on empathy, the definition of this notion has remained ambiguous since its introduction, and there is no consensus on how this phenomenon arises. Preston and De Waal (2002) mention that early definitions tend to be abstract and do not include an understanding of the neuronal systems that instantiate empathy. For instance, Goldie (1999) defines empathy as a process whereby the narrative of another person is centrally imagined by projecting that narrative onto oneself. The author specifies that it is necessary for the individual to be aware that they are distinct from the other person, and that a substantial characterisation of the other person is necessary to build an appropriate narrative. Preston and De Waal (2002) discuss discrepancies in the literature and present an overview of the Perception-Action Model (PAM) of empathy, which focuses on how empathy is processed. The PAM states that attended perception of another person's state automatically activates the observer's own representation of that state, the situation and the object; this representation, unless inhibited, generates the associated autonomic and somatic responses. A discussion of the functional architecture of human empathy presented by Decety and Jackson (2004) mentions that empathy is not only about inferring another's emotional state through a cognitive process, known as cognitive empathy, but is also about the recognition and understanding of another's emotional state, known as affective empathy. This is supported by Cuff et al. (2014), whose review of the empathy concept discusses differences in its conceptualisation and proposes the following summary formulation:

Empathy is an emotional response (affective), dependent upon the interaction between trait capacities and state influences. Empathic processes are automatically elicited but are also shaped by top-down control processes. The resulting emotion is similar to one's perception (directly experienced or imagined) and understanding (cognitive empathy) of the stimulus emotion, with recognition that the source of the emotion is not one's own. (Cuff et al., 2014, p. 7)

Two common approaches are used to study human brain function: functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS). After Rizzolatti et al. (1996) introduced the mirror neuron concept, studies on empathy focused on the neural basis of the human brain structure, tested using fMRI and TMS. Discussions of the fMRI approach are presented in Jackson et al. (2005) and Banissy et al. (2012), and of TMS in Avenanti et al. (2006). Krings et al. (1997) mention that both fMRI and TMS are used to map the motor cortex, which generates nerve impulses for the initiation of muscular activities. The authors identify that fMRI is specifically utilised for identifying hemodynamic areas which change during an action, while TMS is used for collecting information about the localisation and density of motoneurons, the efferent neurons responsible for conveying impulses. De Vignemont and Singer (2006) remark on the common suggestion that shared affective neural networks underlie the reflection of another's emotional feelings in oneself. According to the authors, these networks are automatically triggered whenever the person being observed displays emotion. The authors propose two major functions of empathy:

1. Epistemological role. Empathy increases the accuracy of predictions of the future actions of the people being observed. Through the shared emotional networks, it also provides insight into the motivation associated with others' actions, and it functions as a source of information about environmental properties.
2. Social role. This provides a basis for cooperation and prosocial behaviour motivation, and at the same time, promotes effective social communication.

An experimental work by Lamm et al. (2011) presents more quantitative evidence for the neural structures in the brain involved in eliciting pain experiences that originate either from direct experience or from indirect, empathic experience. The study corroborates the findings in the literature mentioned earlier: there are shared neural structures, with overlapping activation between direct pain experiences and empathic pain experiences.

Empathy with Pain

A characteristic of human empathy is the ability to experience the feelings of others when they suffer (Singer et al., 2004). Singer et al. (2004) conducted an experiment on pain empathy by imaging the neural stimulation of the brain using fMRI. The authors reported that some regions of the brain form a pain-related network, known as a pain matrix. The study confirms that only the region of the pain matrix associated with the affective dimension is activated during an empathic pain experience, and that an empathic response can still be elicited in the absence of facial expression. These findings were confirmed by Jackson et al. (2005), who investigated perceptions of the pain of others through the medium of photographs. The study's experiment focused on the hemodynamic2 changes in the cerebral network related to the pain matrix.

Goubert et al. (2005) asserted that the following important points need to be considered: (i) the experience of pain distress captured by the observer may be related to contextual factors, such as an interpersonal relationship; (ii) the level of empathy is affected by bottom-up, stimulus-based processes and by top-down processes drawing on observer knowledge and disposition. The common media used to communicate a distress level in bottom-up processes are social cues such as facial expressions, verbal or non-verbal behaviours and involuntary actions. In top-down processes, personal and interpersonal knowledge may affect the elicited pain response, and observer judgement, which includes beliefs about and the context of others' pain experiences, also affects the empathic experience; (iii) empathic accuracy, which concerns the problem of correctly estimating risk, plays an important role in the care of people who suffer from pain. If a situation is underestimated, people receive inadequate treatment, while overestimation may elicit a false diagnosis, leading to over-treatment. All of these factors may have a devastating impact on a person's health.

A topical review presented in Jackson et al. (2006) reports that mental representation is used as a medium to relate one's own pain experiences to the perception of the pain of others. The authors remark that the experience of one's own pain may be prolonged, as self-perception influences internal pain elicitation even in the absence of nociceptive invocation. The authors corroborate the work of Goubert et al. (2005), which suggests that the interpretation of a pain representation, captured through pain communication, may not overlap with the exact pain experienced by the other person. This argument reflects the incompleteness of the mapping of the pain of others onto oneself. In other words, the perception of one's own pain in relation to the pain of another shares only a limited level of similarity, and this enables the generation of controlled empathic responses. Loggia et al. (2008) extended this study and proposed that a compassionate interpersonal relationship between oneself and others affects the perception of pain. With the element of compassion, empathy-evoked activation tends to increase the magnitude of the empathic response; hence, one's perception of pain in relation to others can be overestimated regardless of the observed pain behaviours. Another technique that has been utilised to disclose aspects that underlie human thought and behaviour, such as sensory, cognitive and motor processes, is the event-related potential (ERP) technique, as described in Kappenman and Luck (2011). This technique, combined with a photograph-based experiment, was used in a study conducted by Meng et al. (2013). The authors investigated whether priming an external heat stimulus on oneself would affect one's perception of another's pain. The paper concludes that a shared-representation model of pain is affected by painful primes, reflected in increased reaction times (RT).

2.2.4 Robot Empathy

This subsection reviews the literature that focuses on how the empathic element can be assessed and the possibility of its successful implementation in robot applications.

Empathic Robot Assessment

To determine the extent to which an empathic robot has been successfully achieved, it is important to establish measurement and assessment criteria. The assessment process can be divided into two major categories: robot-centred experiments and human-centred experiments.

In robot-centred experiments, robot performance is assessed by the robot's ability to function according to predetermined empathic criteria, such as the ability to monitor its internal state by identifying body parts, the ability to direct its attention between the two levels of self (subjective awareness and objective awareness), and the ability to communicate with its robot peers through either verbal or physical gestures (hand movements or facial expressions). Assessment is generally conducted according to machine performance, such as the speed of the robot's joints, the accuracy and effectiveness of the medium of communication being used, and response times. Gold and Scassellati (2009) carried out an assessment of their robot experiments by measuring the time basis of the robot arm movements; specific time allocations were determined to measure the robot's performance through observation of the robot's own unreflected arm. Time-based assessment was also used in the study on the self-awareness model proposed by Golombek et al. (2010). This study detects anomalies in data patterns by generating training data models for anomaly thresholding and training purposes. The approach splits all data into sequences of unified time length, and when an error occurs, an amount of time is dedicated to creating the error plots for each occurrence. In an experiment conducted by Hart and Scassellati (2011), the distance of the end effector of the robot's right arm was measured from its predicted position to its current position. A recent study, Anshar and Williams (2015), assessed the performance of a robot awareness framework by comparing the predicted sequence of robot arm joint positions with the joint sensor position readings; the overall performance of the framework was reflected in low standard deviation values.
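
The robot-centred assessment style described above can be reduced to a simple error statistic. The sketch below, with hypothetical data and function names, compares predicted joint positions against joint sensor readings and reports the mean absolute error and the standard deviation of the errors; it is illustrative only and is not the code of the cited studies.

    # Illustrative sketch of a robot-centred assessment: predicted joint
    # positions are compared against joint sensor readings and summarised by
    # the mean absolute error and the standard deviation of the errors, with
    # lower deviation indicating better framework performance.
    from statistics import mean, stdev

    def assess_joint_predictions(predicted: list[float],
                                 sensed: list[float]) -> tuple[float, float]:
        """Return (mean absolute error, standard deviation of errors) over a
        sequence of joint positions, in the same units as the inputs."""
        errors = [p - s for p, s in zip(predicted, sensed)]
        abs_errors = [abs(e) for e in errors]
        return mean(abs_errors), stdev(errors)

    if __name__ == "__main__":
        predicted = [0.10, 0.25, 0.40, 0.55]   # hypothetical predicted positions (rad)
        sensed    = [0.12, 0.24, 0.43, 0.53]   # hypothetical sensor readings (rad)
        mae, sd = assess_joint_predictions(predicted, sensed)
        print(f"mean abs error={mae:.3f} rad, std dev={sd:.3f} rad")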

In contrast to robot-centred experiments, where robot performance is measured according to proprioceptive and exteroceptive sensor data, human-centred experiments are concerned with task achievement from a human perspective. Humans are involved in assessing the performance of the robot within a predefined series of human-robot collaboration tasks. Several empathy measurement techniques are commonly used, such as the Hogan Empathy Scale (HES), the Balanced Emotional Empathy Scale (BEES), the Interpersonal Reactivity Index (IRI), the Basic Empathy Scale (BES) and the Barrett-Lennard Relationship Inventory (BLRI). The HES technique, proposed by Hogan (1969), is utilised to measure cognitive elements, and its measurement process involves four key stages. First is the generation of criteria for the rating assessment, followed by the evaluation of those rating criteria. The rating criteria are then used to define the highly empathic and non-empathic groups. Lastly, analyses are carried out to select the items for each scale, which function as discriminative tools between the nominated groups. The BEES was proposed by Mehrabian (1996) and is an updated version of the Questionnaire Measure of Emotional Empathy (QMEE) reported in Mehrabian and Epstein (1972). These techniques are designed to explore two social situations featuring emotional empathy, namely aggression and helping behaviour. The QMEE utilises a 33-item scale that contains intercorrelated subscales, mapping the aspects of emotional empathy onto a 4-point scale, while the BEES utilises 30 items with a 9-point agreement-disagreement scale. In the IRI method, introduced by Davis (1983), the assessment of empathy is constructed according to four subscales, each corresponding to one of four constructs: Perspective Taking (PT), Fantasy Scale (FS), Empathic Concern (EC) and Personal Distress (PD). This method is considered to evaluate both cognitive and emotional empathy. A discussion of these three techniques is presented in Jolliffe and Farrington (2006), in which the authors propose the BES approach. This technique maps the empathy elements into 40 items which are used in the assessment of affective and cognitive empathy. Barrett-Lennard (1986) proposed the BLRI technique, which is particularly used in the study of interpersonal relationships, such as a helping relationship for therapeutic purposes. This technique measures and represents aspects of experience in a relationship on a quantitative scale.
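
Although these instruments differ in their items and subscales, they share a common scoring pattern: items rated on a fixed point scale, with reverse-worded items flipped before aggregation. The generic sketch below illustrates only that pattern; the items, scale range and reverse-scored set are invented for the example and do not reproduce any of the cited instruments.

    # A generic, hypothetical sketch of Likert-style questionnaire scoring:
    # reverse-worded items are flipped, then the ratings are averaged.
    SCALE_MIN, SCALE_MAX = 1, 9   # e.g. a 9-point agreement-disagreement scale

    def score_scale(ratings: dict[str, int], reverse_items: set[str]) -> float:
        """Return the mean item score after reverse-scoring flagged items."""
        adjusted = []
        for item, rating in ratings.items():
            if item in reverse_items:
                rating = SCALE_MIN + SCALE_MAX - rating   # flip the rating
            adjusted.append(rating)
        return sum(adjusted) / len(adjusted)

    if __name__ == "__main__":
        ratings = {"item_01": 7, "item_02": 3, "item_03": 8}   # hypothetical responses
        print(score_scale(ratings, reverse_items={"item_02"}))  # item_02 flipped to 7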

Current Achievement of Empathy Concept Implementation in the Field of Robotics

A report in Tapus and Mataric (2007) investigated the possible implementation of empathy in socially assistive robotics. The study describes specific approaches to empathy modelling, emulation and measurement derived from the literature, and corroborates the significance of emulating empathy in robotics, particularly in robot-assisted care, as a step towards the integration of robots into the daily lives of humans. A case study by Leite et al. (2011) investigates scenario-dependent user affective states through interaction between children and robots in a chess game. This study was extended by Pereira et al. (2011), involving two people in a chess game in which a robot functioned as a companion to one player and remained neutral towards the other. The robot communicated through facial expressions on every move of the player, indicating agreement, disagreement or neutrality. It was found that the user towards whom the robot behaved empathetically perceived the robot's companionship as friendly.

An early study that investigated the neurological basis of human empathy in the field of robotics was reported in Pütten et al. (2013). A human observer was shown videos of a human actor treating a human participant, a robot and an inanimate object in affectionate (positive) and violent (negative) ways. fMRI was used to monitor the parts of the brain which are active when an empathic response is elicited in humans. An important finding of this study is that, in positive interactions in particular, there are no significant differences in the neural activation of the observer's brain when empathic reactions are stimulated during human-human interaction or during human-robot interaction, whereas in negative situations, neural activation towards humans is higher than it is towards robots. The study was extended in Pütten et al. (2014), which investigates the emotional effect, the neural basis of human empathy towards humans, and the neural basis of human empathy towards robots. It was reported that participants' reactions included emotional attitudes during both positive and negative interactions. During positive interactions, no differences in neural activation patterns were found in the observers' reactions, whether empathy was directed towards humans or towards robots. However, during negative interactions, when participants were shown abusive and violent videos, neural activity increased, leading to more emotional distress for the participants and a higher negative empathic concern for humans than for robots.

A new issue has arisen in the literature, which is the emerging notion of empathic care robots. It is reported in Stahl et al. (2014) that such technology will potentially create ethical problems, and there is a need to initiate a new scope of research to identify possible challenges that will need to be addressed.

Chapter 3 Perceptions, Artificial Pain and the Generation of Robot Empathy

This chapter discusses the elements that play a dominant role in artificial pain and the generation of empathic actions. Artificial pain generation is implemented in pain activation mechanisms that serve as a pain generator. This pain generator determines the kinds of synthetic pain associated with the information obtained through the sensory mechanisms. Empathic actions are then generated as counter-reactions based on proposals made by the pain generator.

Overall, a few key aspects derived from the literature studies in Chapter 2 are described as follows.

1. At the lower level, the proposal should cover the ability to monitor the internal state of the robot by optimising the information derived from the robot's perception. Robot perception, as the gateway for obtaining information, draws on proprioceptive sensors (deriving information internally) and exteroceptive sensors (acquiring information from the surroundings). These stimuli are used as the main building blocks for the robot to build and structure plans of actions, including the anticipation of possible failures.
2. At the higher level, the proposal should consider the robot's internal state representation in building the planning mechanism. In terms of representation, a possible choice is a BDI-based (Belief-Desire-Intention) representation model, while the planning itself should include three major elements (a minimal sketch follows the list below):

- Automatic robot plan generation
- Debugging process
- Planning optimisation
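
A minimal sketch of how these two levels could be organised in code is given below, using assumed names: proprioceptive and exteroceptive readings update a BDI-style internal state, from which a plan of atomic actions is derived, including a simple anticipation of failure. It is illustrative only and is not the framework specified later in this thesis.

    # A minimal sketch, under assumed names, of the lower level (perception into
    # beliefs) and the higher level (BDI-style state into a plan of atomic actions).
    from dataclasses import dataclass, field

    @dataclass
    class Percepts:
        joint_positions: dict[str, float]      # proprioceptive (internal)
        obstacle_detected: bool                # exteroceptive (surroundings)

    @dataclass
    class RobotState:
        beliefs: dict[str, object] = field(default_factory=dict)
        desires: list[str] = field(default_factory=list)
        intentions: list[str] = field(default_factory=list)

    def update_beliefs(state: RobotState, percepts: Percepts) -> None:
        """Lower level: fold sensory information into the internal state."""
        state.beliefs["joints"] = percepts.joint_positions
        state.beliefs["path_blocked"] = percepts.obstacle_detected

    def plan(state: RobotState) -> list[str]:
        """Higher level: derive a plan of atomic actions, anticipating failure."""
        if state.beliefs.get("path_blocked"):
            return ["stop", "replan_route"]    # anticipate failure of the current plan
        return ["move_arm", "grasp_object"]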

[...]


1 widespread pain and tenderness in the human body, sometimes accompanied by fatigue, cognitive disturbance and emotional distress.

2 factors involved in the circulation of blood, including pressure, flow and resistance.
