This thesis presents a toolkit of 17 user experience (UX) principles, categorized by their relevance to Explainable AI (XAI).
In the literature, the goal of Explainable AI has been widely associated with the dimensions of comprehensibility, usefulness, trust, and acceptance. Moreover, several authors argue that research should focus on the development of holistic explanation interfaces rather than on single visual explanations. Consequently, XAI research should concentrate more on potential users and their needs than on the purely technical aspects of XAI methods. From these three observations, the author derives the assumption that valuable insights from the research areas of User Interface (UI) and UX design can be brought into XAI research. In essence, UX is concerned with the design and evaluation of the pragmatic and hedonic aspects of a user's interaction with a system in a given context.
These principles are taken into account in the subsequent prototyping of a custom XAI system called the Brain Tumor Assistant (BTA). A pre-trained EfficientNetB0 serves as the underlying Convolutional Neural Network (CNN), classifying X-ray images of the human brain into four classes with an overall accuracy of 98%. To generate factual explanations, the Local Interpretable Model-agnostic Explanations (LIME) method is applied on top of the classifier. The subsequent evaluation of the BTA is based on the User Experience Questionnaire (UEQ) by Laugwitz et al. (2008), with single items of the questionnaire adapted to the specific context of XAI. Quantitative data from a study with 50 participants in each of the control and treatment groups is used to present a standardized way of quantifying the dimensions of usability and UX specifically for XAI systems. Furthermore, an A/B test provides evidence that visual explanations have a significant (α = 0.05) positive effect on the dimensions of attractiveness, usefulness, controllability, and trustworthiness. In summary, this thesis shows that explanations in computer vision have a significantly positive effect not only on trustworthiness but also on other UX dimensions.
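To make the described pipeline concrete, the sketch below shows how a fine-tuned EfficientNetB0 classifier and LIME are typically combined for this kind of task. The class labels, input size, and weights path are illustrative assumptions, not the author's actual implementation:

```python
# Minimal sketch of the BTA's classification-plus-explanation pipeline.
# Assumptions (not taken from the thesis): class labels, 224x224 input size,
# and the placeholder weights path; the fine-tuned weights are not public.
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

CLASS_NAMES = ["glioma", "meningioma", "pituitary", "no tumor"]  # assumed labels

# EfficientNetB0 backbone with a four-class softmax head, as described above.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.Sequential([base, tf.keras.layers.Dense(4, activation="softmax")])
# model.load_weights("bta_weights.h5")  # hypothetical path to fine-tuned weights

def predict_fn(images):
    # LIME passes a batch of perturbed copies; return class probabilities.
    return model.predict(np.asarray(images), verbose=0)

def explain(image):
    # image: (224, 224, 3) array in the 0-255 range EfficientNetB0 expects.
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image.astype("double"), predict_fn, top_labels=1, num_samples=1000)
    label = explanation.top_labels[0]
    temp, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=False)
    # Overlay the superpixels that speak for the predicted class.
    return CLASS_NAMES[label], mark_boundaries(temp / 255.0, mask)
```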
Inhaltsverzeichnis (Table of Contents)
- Introduction
- Theoretical Foundation
  - Artificial Intelligence (AI)
  - Explainable Artificial Intelligence (XAI)
    - Explainability and Related Terms
    - Definition
    - Taxonomy of XAI Methods
  - User Experience (UX)
- Methodology
  - Systematic Literature Research (SLR)
  - User-centered XAI Design
- Prototyping and Evaluating an XAI Interface
  - Deriving Principles of User Experience
    - UX Principles in general
    - UX Principles according to XAI
  - Prototyping an XAI Interface for Computer Vision Tasks
    - Phase 1: Context
    - Phase 2: User
    - Phase 3: Solution
  - Evaluating an XAI Interface
    - Construction of an XAI-related Questionnaire
      - User Experience Questionnaire (UEQ)
      - Adapting the UEQ according to XAI
    - User Study
      - Design and Execution
      - General Results
      - Results from Quantitative Data Analysis
Zielsetzung und Themenschwerpunkte (Objectives and Key Themes)

This thesis explores the impact of Explainable Artificial Intelligence (XAI) on User Experience (UX), specifically in the context of computer vision tasks. It aims to prototype and evaluate a UX-optimized XAI interface for brain tumor detection using a Convolutional Neural Network (CNN) and the Local Interpretable Model-agnostic Explanations (LIME) method. The research investigates the influence of visual explanations on user trust, acceptance, and overall UX, considering both pragmatic and hedonic qualities.
- The role of XAI in enhancing user trust and acceptance of AI-based decision support systems.
- The integration of UX principles in the design and evaluation of XAI interfaces.
- The impact of visual explanations on the usability and user experience of XAI systems.
- The development of a UX-optimized XAI interface for brain tumor detection in radiology.
- The quantification of UX dimensions using the User Experience Questionnaire (UEQ) and its adaptation for XAI.
Zusammenfassung der Kapitel (Chapter Summaries)
- Introduction: This chapter sets the scene by discussing the increasing relevance of AI and the need for explainability in AI-based systems. The author highlights the importance of user-centered design and UX principles in the development of XAI systems.
- Theoretical Foundation: This chapter defines key terms like AI, XAI, and UX. It explores the different types of AI models, the need for explainability in complex systems, and various XAI methods, including a taxonomy of XAI methods in computer vision.
- Methodology: This chapter presents the research questions and hypothesis driving the thesis. It outlines the methodology used, including the Systematic Literature Research (SLR) conducted to identify UX principles relevant to XAI and the User-centered XAI Design cycle followed to develop and evaluate the prototype system.
- Prototyping and Evaluating an XAI Interface: This chapter focuses on deriving relevant UX principles from the SLR and categorizing them into the UX pyramid. Subsequently, a prototype XAI system called Brain Tumor Assistant (BTA) is developed according to the user-centered design (UCD) process. The BTA leverages a pre-trained EfficientNetB0 CNN for brain tumor classification and uses LIME to generate visual explanations. Finally, a user study is conducted to quantify the UX of the BTA with and without visual explanations, allowing the author to analyze the effect of explanations on various UX dimensions.
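As a pointer to how such a study's quantitative analysis works in spirit, the sketch below compares control and treatment scores on a single UEQ-style dimension. The synthetic data and the choice of Welch's t-test are assumptions for demonstration; the thesis's actual responses and test procedure are not reproduced here:

```python
# Illustrative A/B comparison on one UEQ-style dimension (e.g. trustworthiness).
# The data is synthetic and Welch's t-test is an assumed choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# UEQ items are 7-point bipolar scales scored from -3 to +3; a dimension score
# is the per-participant mean of its items (n = 50 in each group).
control = rng.normal(loc=0.8, scale=1.0, size=50).clip(-3, 3)    # no explanations
treatment = rng.normal(loc=1.4, scale=1.0, size=50).clip(-3, 3)  # LIME explanations

# One-sided Welch test: do visual explanations raise the dimension score?
t, p = stats.ttest_ind(treatment, control, equal_var=False, alternative="greater")
print(f"t = {t:.2f}, p = {p:.4f}, significant at alpha = 0.05: {p < 0.05}")
```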
Schlüsselwörter (Keywords)
This thesis focuses on the interplay between Explainable Artificial Intelligence (XAI), User Experience (UX), and computer vision. Key areas of interest include the design and evaluation of UX-optimized XAI interfaces, particularly in the context of brain tumor detection. This research incorporates concepts like user-centered design, Convolutional Neural Networks (CNNs), Local Interpretable Model-agnostic Explanations (LIME), and the User Experience Questionnaire (UEQ). The study investigates the impact of visual explanations on dimensions of usability, trust, and user satisfaction.
Arbeit zitieren (Cite this Work)

Georg Dedikov (Author), 2023, Explainable AI and User Experience. Prototyping and Evaluating an UX-Optimized XAI Interface in Computer Vision, München, GRIN Verlag, https://www.grin.com/document/1356885