Unmanned Aerial Vehicles (UAVs) have been around for almost a century and are used today by many armed forces in war zones around the globe. Lately, we have seen a massive surge in technologies like Artificial Intelligence and object detection that enables a transition to Lethal Automated Weapon Systems (LAWS). LAWS no longer require a human to decide who gets to live and who has to die; it is now the technology that judges. The questions raised by this new technology are multidimensional and pose a novel challenge to the legal and ethical frameworks in place today. This paper focuses on the ways in which the rise of these killer robots affects the ethics of future warfare. In addition, it provides an overview of the state of the art: where the technology stands right now and how it is likely to evolve over the coming years.
Recently, IBM researchers stated that they have found a way, for the first time, to train Artificial Intelligence (often also referred to as AI) so that it stays conformant with ethical and behavioral guidelines (Dickson, 2018). Even if this might sound trivial, it marks a tremendous step, as it is the first time this has been achieved without defining explicit rule sets. Today, AI, which is a key component of autonomous systems, can be found in various everyday objects such as cell phones, digital assistants and cars. Pandora's box, though, has long been opened. The Stockholm International Peace Research Institute states that autonomy is already a reality in today's weapon system development (Boulanin & Verbruggem, n.d., p. 54). Projects like the United States Army's Future Combat Systems project no longer restrict the use of autonomous systems to personal use but also try to deploy artificially intelligent robots into war (Sparrow, 2007). These killer robots, more precisely called Lethal Automated Weapon Systems, are capable of identifying targets and executing an attack without any human intervention. The steady introduction of these artificially intelligent killer robots raises a variety of new questions concerning the ethicality of autonomous weapons in modern warfare. This paper discusses how killer robots are affecting the ethics of modern warfare.
The first Unmanned Aerial Vehicle in today's sense was mentioned as early as 1915, in Nikola Tesla's dissertation, in which he introduced the concept of unmanned, armed aircraft designed to defend the United States (Dempsey, 2010, p. 1). Only the massive improvement of microcomputers and new satellite-based networks made modern drones like the MQ-1 Predator, or its European counterpart the EADS Harfang, possible.
If we look at the UAV market and the estimated worldwide production of military drones, we will notice that there will soon be many more drones crisscrossing the sky above current war zones like Pakistan, Yemen and Somalia (Applegate, 2013). Figure 1 shows the estimated worldwide production volume of unmanned aerial vehicles for military use. The expected growth is significant, as the estimated production volume in 2019 will be more than double that of 2016.
[Figure not included in this excerpt]
Figure 1 Estimated worldwide production volume of military drones (Teal Group Corporation, 2013)
Of particular interest for the research in this paper is that the production of unmanned combat aerial vehicles (UCAVs) like the General Atomics Avenger or the EADS Barracuda will reach 25 by as early as 2022 (Teal Group Corporation, 2013, p. 2). While this might seem like a vanishingly small number compared to the vast number of Mini-UAVs, it is important to note that these are the drones capable of carrying the explosives needed to eliminate an identified target. It is the same category of UAVs that, in the near future, could be identifying targets and executing attacks without any human intervention.
[Figure not included in this excerpt]
Figure 2 Number of confirmed deaths by drones in Pakistan, Yemen and Somalia (The White House, 2016)
Another indicator of the popularity of drones is the number of confirmed kills in recent years. Looking at these numbers, one can clearly see that killing by drones is no longer a science fiction scenario. Indeed, the New America Foundation estimates that over 2,400 people had been killed by US drones alone in Pakistan and Yemen within just five years of the Obama administration.
Similarly, the White House states that between 2,372 and 2,581 lives were taken by a total of 473 air strikes aimed at countering terrorism in the Middle East (The White House, 2016).
The transition from remotely human-controlled drones, tanks and gun systems to LAWS is already underway. It is not a matter of whether it is going to happen, but rather a question of when. The technological barriers have long since fallen, as similar autonomous systems have already been deployed in limited environments (United Nations & Office for Disarmament Affairs, 2017, p. V). According to the defense contractor BAE Systems, its Taranis drone, which is currently in development, will have the capacity to attack targets of its own accord (Dean, 2016).
Ethics and Implications for Human Rights
If we look back into history, when gunpowder, rifled muskets, breech loaders and later nuclear bombs were introduced as part of the second revolution in warfare, many ethical questions were discussed. The latter is of particular interest, as nuclear weapons essentially allowed the world to be wiped out in the most efficient manner possible, similar to what LAWS will one day also allow. The moral and ethical questions discussed with the introduction of the first and second generations of warfare are strikingly similar to those now arising from the dawn of fully automated killer robots.
To better understand how killer robots are affecting the ethics of modern warfare, it is necessary to understand what ethics even means in the context of warfare. One way of ethically assessing war is through 'Just War Theory', a doctrine that tries to ensure that war is, under certain conditions, morally justifiable by listing a set of criteria that have to be met for a war to be ethical. The doctrine also makes a significant distinction between the criteria for going to war (jus ad bellum) and those for conducting war (jus in bello).
- Cite this paper
- Alexander Bilz (Author), 2017, Brothers in Arms. How is the rise of Lethal Automated Weapon Systems affecting the Ethics of modern warfare?, München, GRIN Verlag, https://www.grin.com/document/463913