This work focuses on "Native Guide", a novel instance-based technique that generates counterfactual explanations for time series classification. As its foundation, it uses nearest-neighbour samples from the real data distribution that belong to a different class. This thesis applies the method to explaining electrocardiogram (ECG) classification, a complex and vital medical field in which every single ECG carries unique features. Native Guide for ECGs is explained, examined, and extended: the necessary background knowledge is provided, aspects such as plausibility are strengthened, suitable models are compared with each other, and benefits and drawbacks are identified. Finally, counterfactual explanations for ECG classification generated by Native Guide are evaluated by cardiologists in two expert interviews.
Synchronizing the periodic ECG data proved to be the most important modification to the method, enabling the generation of plausible counterfactuals. The experts, who had never seen or used counterfactuals in their work, were interested in the approach and could envision its application in the field, particularly for training junior doctors. Overall, AI classification combined with counterfactuals that stay close to the original signal shows promise for the reliable identification of heart diseases.
Explanations are essential components in the promising fields of artificial intelligence (AI) and machine learning. Deep learning approaches are on the rise due to their superior accuracy when trained on large amounts of data, but because of their black-box nature, their predictions are hard to comprehend, retrace, and trust. Good explanation techniques help to clarify why a system produces a certain prediction and thereby increase trust in the model. Understanding the model is crucial in domains like healthcare, where decisions ultimately affect human lives. Studies have shown that counterfactual explanations in particular tend to be more informative and psychologically effective than other methods.
Table of Contents
- Introduction
  - Topic and Related Work
  - Research Approach: Design Science
  - Aims and Objectives
  - Research Questions
  - Thesis Outline and Research Methods
- Background
  - Time Series Data
  - Basic Understanding of Artificial Intelligence, Machine Learning, and Classification
  - Convolutional Neural Network (CNN)
  - Explainable Artificial Intelligence (XAI)
  - Counterfactual Explanations
  - ECG Signal Data
  - Openly-accessible ECG Datasets
    - ECG200
    - ECG5000
    - PTB
    - PTB-XL
- Native Guide: A Counterfactual Explanation Technique
  - Reference Method
    - Learn or Load Classifier
    - Class-Activation-Map (CAM)
    - Finding the Native Guide
    - Perturbation
  - Investigation and Observation of the Method
    - Comparison of Classifiers
    - ECG Signal Strength and Wavelength
    - Swapped Subsequence-Length
    - Data Quantity, Length and Variety
    - Different Decision Boundaries
  - Experimental Approaches for Optimization
    - Normalization and Synchronization
    - Swapping Points instead of Subsequences
    - Shifted Decision Boundary
    - Reference Method
- Evaluation: Expert Interview
  - Goal, Structure and Approach
  - Expert Background
  - Interview Results
    - Usage of ECG
    - ECG Data Quality
    - General Attitude towards Counterfactuals
    - Plausibility of Counterfactuals
    - Improvement Ideas
    - Possible Use-Cases
- Discussion
Objectives and Key Themes
This thesis aims to investigate different aspects of Native Guide, a generative counterfactual explanation method for time series data classification. It explores the method's functionality, examines its strengths and weaknesses, and proposes optimizations for improving counterfactual explanations, especially in the context of electrocardiogram (ECG) classification.
- Counterfactual Explainability for Time Series Data
- Native Guide Method and its Application to ECG Classification
- Evaluation of Counterfactual Explanations from a User Perspective
- Optimization Techniques for Improving Counterfactual Plausibility
- Potential Use Cases of Counterfactuals in Cardiology
Chapter Summaries
- Introduction: Introduces the topic of explainable AI in the context of ECG classification. It outlines the research approach, objectives, and questions addressed in the thesis.
- Background: Provides a comprehensive overview of key concepts related to the research, including time series data, machine learning, convolutional neural networks, explainable AI, counterfactual explanations, and ECG signal data. It also presents a summary of existing ECG datasets.
- Native Guide: A Counterfactual Explanation Technique: Presents the Native Guide method, explaining its technical steps for generating counterfactual explanations. It investigates different aspects of the method, such as the influence of various classifiers and data characteristics.
- Evaluation: Expert Interview: Describes the expert interviews conducted with cardiologists to evaluate the plausibility of counterfactuals generated by Native Guide. It presents insights gained from the experts regarding the practical use of ECG, their opinions on the method, and potential use cases.
- Discussion: Discusses the limitations and challenges of Native Guide, highlighting the need for further research and improvement in areas like data quality, synchronization, and perturbation methods.
Keywords
Explainable AI, counterfactual explanations, time series data classification, ECG signal data, Native Guide, expert evaluation.
Frequently Asked Questions
What is the "Native Guide" technique?
Native Guide is an instance-based method that generates counterfactual explanations for time series data by using real nearest-neighbor samples.
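As an illustration, the nearest-unlike-neighbour retrieval at the heart of the method can be sketched as follows. This is a minimal version assuming NumPy arrays and Euclidean distance; the thesis may use a different distance measure, and the function name is illustrative:

```python
import numpy as np

def find_native_guide(query, X_train, y_train, query_label):
    """Return the training instance closest to the query that carries
    a different class label (the 'native guide')."""
    candidates = X_train[y_train != query_label]
    # Euclidean distance between the query series and every candidate
    dists = np.linalg.norm(candidates - query, axis=1)
    return candidates[np.argmin(dists)]
```

Because the guide is drawn from real training data, any counterfactual built from it stays anchored to the actual data distribution rather than to synthetic points.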
How are counterfactuals used in ECG classification?
They explain why an AI model predicted a certain heart condition by showing how the ECG signal would need to change to result in a different classification.
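The class change itself comes from a perturbation step: subsequences of the original ECG are replaced with the corresponding subsequences of the native guide until the classifier's prediction flips. A simplified sketch, assuming a 1-D NumPy signal and an arbitrary `predict` callable; in the actual method the window position is guided by a class activation map, and all names here are illustrative:

```python
import numpy as np

def generate_counterfactual(query, guide, predict, start=0, step=2):
    """Grow a window copied from the guide into the query until the
    classifier's prediction changes class."""
    original_class = predict(query)
    length = step
    while length <= len(query):
        cf = query.copy()
        cf[start:start + length] = guide[start:start + length]
        if predict(cf) != original_class:
            return cf  # minimal change found that flips the class
        length += step
    return guide.copy()  # fall back to the guide itself
```

Growing the window incrementally keeps the counterfactual as close to the original signal as possible, which is what makes the explanation informative.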
Why is synchronization important for ECG counterfactuals?
ECG data is periodic; synchronization ensures that swapped subsequences align correctly, which is crucial for generating plausible explanations for cardiologists.
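One simple way to implement such an alignment is a cross-correlation shift, sketched here under the assumption of equally long, periodic 1-D signals; the thesis may instead align on specific landmarks such as R-peaks, and the function name is illustrative:

```python
import numpy as np

def synchronize(query, guide):
    """Shift the guide series by the lag that maximizes its
    cross-correlation with the query, aligning the beats."""
    corr = np.correlate(query - query.mean(), guide - guide.mean(), mode="full")
    lag = int(np.argmax(corr)) - (len(guide) - 1)
    # circular shift; reasonable for periodic signals such as ECG beats
    return np.roll(guide, lag)
```

Without this step, a subsequence swapped from the guide can land mid-beat in the query, producing waveforms that no real heart would generate.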
What did the expert interviews with cardiologists reveal?
Cardiologists found the approach promising, especially for training junior doctors, although they emphasized the need for high data quality and plausibility.
What are the benefits of counterfactual explanations over "black-box" AI?
They increase trust and transparency by making AI predictions traceable and comprehensible, which is vital in life-affecting medical domains.
Quote paper: Viktoria Andres (Author), 2021, Generating Counterfactual Explanations for Electrocardiography Classification with Native Guide, Munich, GRIN Verlag, https://www.grin.com/document/1139598