
AI Killing in the Name of Past and Future. Identifying Counterfactual and Hypothetical Narratives Justifying the Decision to Deploy Data-Based Military Technology by the US Military


Bachelor's Thesis, 2024, 67 pages, Grade: 1.3

Author: Timotheus Meiß

Sociology - Culture, Technology, Peoples

This thesis takes a closer look at the US military's decision to integrate and deploy data-based advanced weapon systems as a military solution. To narrow the scope, it focuses on narratives of a hypothetical and counterfactual nature, drawing on a cultural-scientific theory of decision-making. Whereas hypothetical narratives leverage the consequences of decisions, counterfactuals address the contingent nature of decisions by highlighting the alternatives not selected within the decision-making process and their potential outcomes. In the field of data-based technology such as Artificial Intelligence (AI), and in the decision-making surrounding it, both hypothetical future scenarios and their counterfactual counterparts can be identified. While hypotheticals anticipate the future development of a technology with unclear capabilities, counterfactual narratives promise to fix shortcomings of the past by offering a seemingly universally applicable solution. Both narratives create distorted impressions of the usefulness and prowess of Artificial Intelligence.
Counterfactual and hypothetical narratives justifying the US military's decision to deploy data-based technology are examined from the standpoint of a cultural-scientific analysis of decision-making. The integration of these technologies into warfare was initiated under a unique constellation of social and political structures intertwined with the decision-making process. Investigating the circumstances in which a decision was made enables the examination of the narratives that justify it. The decision to implement a technology should be supported by a strategic necessity; it also ought to be preceded by establishing sufficient infrastructure to enable the technology to fulfil its intended role. Moreover, narratives justifying the decision are meant to ward off criticism of blindly integrating AI into military applications. By providing exemplary criticism detailing possible consequences of the decision in warfare, the validity of the narratives defending it is questioned. The referenced phenomena detail how civilians and non-combatants are increasingly put at risk, and expose loopholes that could be used to violate International Humanitarian Law while avoiding direct responsibility for one's actions.

Excerpt


Table of Contents

1. Introduction

1.1 Structure

1.2 Scope

2. Practices of decision making

3. Historic Context

4. Terminology

5. 1st Counterfactual: fewer civilian casualties

5.1 Reinvention of accuracy

5.2 Dual Use

5.3 System Destruction Warfare

6. 2nd Counterfactual: Safeguard and policy

6.1 International Ban problems

6.2 Meaningful Human Control

7. 3rd Counterfactual: No defiance

7.1 Decorporealizing of state power

7.2 Air Power

8. Transition to Hypotheticals

9. 1st Hypothetical: adversarial AI on the battlefield

9.1 Command and Control

9.2 Adversarial AI Development

10. 2nd Hypothetical: long-term commitment

10.1 Shortcomings

10.2 No backing out

11. Conclusion

Research Objectives & Topics

This work examines the narratives—both counterfactual and hypothetical—that justify the integration of data-based weapon systems into the US military's arsenal. By applying a cultural-scientific framework of decision-making, it analyzes how these narratives function to overcome doubts and justify technological adoption despite inherent risks and ethical concerns.

  • The usage of counterfactual narratives to justify AI as a rectification of past failures in the "Global War on Terror."
  • The role of "System Destruction Warfare" and "Dual Use" concepts in reshaping target identification.
  • The critique of existing policy safeguards, such as "Meaningful Human Control," as epistemologically narrow and biased.
  • The transition to hypothetical narratives, specifically the fear of adversarial AI, as a driver for military investment.
  • The examination of how autonomous weapon systems might minimize human agency and potential soldier defiance.

Excerpt from the Book

5.1 Reinvention of accuracy

However, as Lucy Suchman examines, civilian casualties result from an amalgamation of factors that cannot easily be resolved by a technological solution. She highlights the concept of Situational Awareness as the central capability to be enhanced by data-based technology. Rather than relying on an individual soldier's accurate perception of the environment, data-based technology replaces biased subjectivity with a sensory network. Nevertheless, this network remains flawed through its innate characteristics. Suchman presents a number of works that criticise the system of drone surveillance as a network of perception, inhibited by its presuppositions of enemy presence and positive identification, as well as by the decontextualization that silences observed realities.

Further, Suchman illustrates image-recognition procedures using Project Maven as an example. This federal project intended to use Google's cloud infrastructure to label video imagery from drone surveillance. While the model was trained on archived battlefield footage collected by drones, it required an initial set of 150,000 hand-labelled images of objects across 38 categories. Yet those categorisations remain opaque: it is neither known which categories were used, nor which criteria an object must meet to constitute a threat for the trained model.
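To make concrete what such a category-bound labelling pipeline implies, here is a minimal illustrative sketch, not Project Maven's actual pipeline: its real categories, features, and criteria are not public, so the category names, feature vectors, and classifier below are all invented. A toy nearest-centroid classifier trained on hand-labelled examples can only ever answer with one of the categories fixed at labelling time, which is precisely the epistemic limit the excerpt describes.

```python
# Illustrative sketch only: a toy nearest-centroid classifier showing how a
# label taxonomy fixed during hand-labelling constrains everything the model
# can later "see". Categories, features, and data are invented stand-ins.
from collections import defaultdict

CATEGORIES = ["vehicle", "building", "person"]  # stand-in for the 38 unknown classes

def train_centroids(labelled_examples):
    """Average the 2-D feature vectors of hand-labelled examples per category."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in labelled_examples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {c: (sums[c][0] / counts[c], sums[c][1] / counts[c]) for c in counts}

def classify(centroids, features):
    """Return the nearest centroid's label: the model can only ever answer
    with one of the categories chosen during labelling."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda c: dist2(centroids[c], features))

# Toy hand-labelled set (feature vectors are invented two-dimensional stand-ins).
data = [((0.9, 0.1), "vehicle"), ((0.8, 0.2), "vehicle"),
        ((0.1, 0.9), "building"), ((0.2, 0.8), "building"),
        ((0.5, 0.5), "person")]
model = train_centroids(data)
print(classify(model, (0.85, 0.15)))  # -> vehicle
```

Whatever appears in front of the sensor, the output is always drawn from the predefined category set; an object outside the taxonomy is forced into the nearest available label rather than flagged as unknown.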

Chapter Summaries

1. Introduction: Outlines the scope of the thesis, focusing on how counterfactual and hypothetical narratives shape the discourse and justification for deploying data-based military technologies.

2. Practices of decision making: Establishes a theoretical framework for analyzing decision-making as a cultural and social process, emphasizing how narratives are used to manage contingency and doubt.

3. Historic Context: Briefly examines the historical conditions, particularly the "Global War on Terror," that necessitated the shift toward data-driven intelligence and drone warfare.

4. Terminology: Addresses the definitional challenges surrounding "Lethal Autonomous Weapon Systems" (LAWS) and justifies the use of the term "data-based military technology."

5. 1st Counterfactual: fewer civilian casualties: Analyzes the narrative claim that data-based systems reduce collateral damage, contrasting this with Suchman’s critique of the "Precision Shift."

6. 2nd Counterfactual: Safeguard and policy: Investigates the vague nature of policy proposals like "Meaningful Human Control" and how these reflect colonial-era attitudes toward governance.

7. 3rd Counterfactual: No defiance: Explores how automated command structures aim to eliminate human agency and soldiers' potential refusal to follow orders, drawing parallels to historical "Air Power."

8. Transition to Hypotheticals: Provides a bridge between historical counterfactuals and prospective hypothetical scenarios that drive future military investments.

9. 1st Hypothetical: adversarial AI on the battlefield: Discusses the fear of being outpaced by adversarial AI and how this fear justifies sustained military investment.

10. 2nd Hypothetical: long-term commitment: Examines how initial investments create a "lock-in" effect where subsequent reliance on unproven technology becomes a strategic necessity.

11. Conclusion: Summarizes how neither counterfactual nor hypothetical narratives adequately address the deep-rooted technical and ethical challenges of autonomous military systems.

Keywords

Data-based military technology, Artificial Intelligence, decision-making, counterfactual narratives, hypothetical scenarios, US military, command and control, situational awareness, Lethal Autonomous Weapon Systems, Meaningful Human Control, Global War on Terror, asymmetric warfare, algorithmic warfare, colonial traditions, military ethics.

Frequently Asked Questions

What is the central focus of this bachelor's thesis?

This work investigates the narratives used by the US military to justify the deployment of data-based technologies and artificial intelligence in modern warfare.

What are the primary thematic fields covered in this study?

The work explores military decision-making, the history of drone warfare, ethical implications of automation, the influence of colonial power dynamics, and the geopolitical pressures behind AI investment.

What is the primary research objective?

The goal is to analyze how counterfactual and hypothetical narratives function as sense-making mechanisms to legitimize technological military expansion despite existing criticism and significant technical shortcomings.

Which scientific methodology is applied?

The author utilizes a cultural-scientific analysis of decision-making, focusing on decisions as social processes rather than outcomes, while incorporating qualitative criticism from sociological and historical perspectives.

What topics are discussed within the main body of the text?

The main body examines three main counterfactuals (precision, policy safeguards, and defiance) and two hypothetical scenarios (adversarial AI threat and long-term investment commitments) to assess the "procedural rationality" of these military choices.

Which keywords define this academic work?

Key terms include data-based military technology, Artificial Intelligence, decision-contingency, algorithmic warfare, colonialist traditions, and OODA loop strategy.

How does the author interpret the role of "Meaningful Human Control"?

The author argues that "Meaningful Human Control" is an epistemologically narrow and vague policy concept that fails to address the power dynamics between the "controller" and the "controlled," effectively excluding the perspectives of those most vulnerable to these weapons.

What is the significance of the "OODA loop" in this study?

The OODA loop serves as the standardized command procedure for kinetic engagement; the author analyzes how data-based technology is intended to accelerate this loop, often at the risk of human error and ethical failure.
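The acceleration argument can be sketched numerically. The stage names below are standard OODA terminology (Observe, Orient, Decide, Act), but the per-stage timings are invented assumptions for illustration: modelling one cycle's latency as the sum of its stage durations shows how automating the data-heavy "orient" and "decide" stages shrinks the whole loop.

```python
# Illustrative sketch only: the OODA loop modelled as a repeating cycle whose
# total latency is the sum of its stage durations. The timings are invented;
# the point is that automating a stage shortens the whole cycle, which is the
# acceleration argument examined in the thesis.
STAGES = ["observe", "orient", "decide", "act"]

def cycle_time(stage_seconds):
    """Latency of one full OODA cycle given per-stage durations in seconds."""
    return sum(stage_seconds[s] for s in STAGES)

human = {"observe": 5.0, "orient": 20.0, "decide": 30.0, "act": 5.0}
# Hypothetical automation of the data-heavy stages:
assisted = dict(human, orient=2.0, decide=3.0)

print(cycle_time(human), cycle_time(assisted))  # -> 60.0 15.0
```

Under these assumed numbers the assisted loop runs four times faster, but the speed-up comes precisely from compressing the stages where human judgement and ethical deliberation would otherwise occur.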

What implications does the thesis suggest for the future of warfare?

The study suggests that the shift toward autonomous and data-driven systems creates a "tense and volatile" environment where AI-driven deterrence could potentially lead to unforeseen arms races and the further erosion of the "civilian" as a protected concept.

End of the excerpt from 67 pages

Details

Title
AI Killing in the Name of Past and Future. Identifying Counterfactual and Hypothetical Narratives Justifying the Decision to Deploy Data-Based Military Technology by the US Military
University
Europa-Universität Viadrina Frankfurt (Oder) (Cultural Studies)
Grade
1.3
Author
Timotheus Meiß
Year of Publication
2024
Pages
67
Catalog Number
V1561299
ISBN (eBook)
9783389111611
ISBN (Book)
9783389111628
Language
English
Keywords
Artificial Intelligence, Artificial General Intelligence, data-based technology, Military History, Global War on Terror, decision-making, Battlefield, Contingence, Lethal Autonomous Weapon Systems, LAWS, Counterfactuals, Hypotheticals, Narratives, Counterinsurgency, UAV, Unmanned Aerial Vehicles, Drone warfare, Machine Learning, speed of engagement, surveillance infrastructure, reconnaissance, civilian casualties, reinvention of accuracy, Project Maven, image recognition, Network-centric warfare, Dual use, OODA Loop, Killer Robots, Human Meaningful Control, Human in the Loop, Human on the Loop, Human out of the Loop, decorporealization, Command and Control, C2 Structures, International Human Law, bomber pilot, dronization, Air Power, Black Box, Adversarial AI, JADC2, Joint All Domain Command and Control, Arms Race, ChatGPT, Large Language Model, LLM, Algorithms, System destruction warfare, AI ethics, ban on lethal autonomous weapon systems, Counterfactual Thinking, algorithmic warfare, decision-making under radical uncertainty, precision weapons, epistemology, meaningful human
Product Safety
GRIN Publishing GmbH
Cite This Work
Timotheus Meiß (Author), 2024, AI Killing in the Name of Past and Future. Identifying Counterfactual and Hypothetical Narratives Justifying the Decision to Deploy Data-Based Military Technology by the US Military, München, GRIN Verlag, https://www.grin.com/document/1561299