Table of Contents
2. State of the Art
2.1. Artificial Intelligence
2.2. Governing Emerging Technologies
2.3. Imagining Emerging Technologies
3. Research Question
4. Methodological Considerations
4.1. Structuring the Material
4.2. Document Analysis
5. Artificial Intelligence and the United States
5.1. Preparing for the Future of Artificial Intelligence
5.1.1. Technology and Innovation
5.1.2. Benefits and Risks
5.1.3. Governance and Citizens
5.2. National Artificial Intelligence Research and Development Strategic Plan
5.2.1. Technology and Innovation
5.2.2. Benefits and Risks
5.2.3. Governance and Citizens
5.3. Artificial Intelligence, Automation, and the Economy
5.3.1. Technology and Innovation
5.3.2. Benefits and Risks
5.3.3. Governance and Citizens
6. Artificial Intelligence and the European Union
6.1. Artificial Intelligence for Europe
6.1.1. Technology and Innovation
6.1.2. Benefits and Risks
6.1.3. Governance and Citizens
6.2. Coordinated Plan on Artificial Intelligence
6.2.1. Technology and Innovation
6.2.2. Benefits and Risks
6.2.3. Governance and Citizens
6.3. Ethics Guidelines for Trustworthy AI
6.3.1. Technology and Innovation
6.3.2. Benefits and Risks
6.3.3. Governance and Citizens
7. American and European Visions of Artificial Intelligence
7.1.1. Sustaining and Competing
7.1.2. Risking and Balancing
7.1.3. Avoiding and Managing
10. List of Abbreviations
11. List of Figures
12.1. Abstract English
12.2. Abstract German
What some call a second industrial revolution has propelled our society into a knowledge or information society that increasingly relies on computational algorithms to make decisions based on the ever-expanding sets of data available in the age of digitization (Jasanoff, 2016). At the center of this development is a new wave of artificially intelligent (AI) software that gives information and communication technology (ICT) an avenue to influence the way we interact, learn and make decisions at both a societal and an individual level. Connected to this new technology is a myriad of utopian and dystopian hopes and fears that Pamela McCorduck (2004, p. 381) poignantly describes as a desire for “forging the gods” and that provide the material for vibrant imaginations and expectations which elevate AI to one of the defining technologies of the coming decades.
As with other emerging technologies like nanotechnology and biotechnology (Burri, 2015; Jasanoff, 2005; Macoubrie, 2006), regulators are setting out to build frameworks intended to harness the opportunities and address the concerns associated with this technology. However, there is no universal response: emerging technologies have been framed in diverse ways in different political cultures with varying attitudes towards regulating innovation (Jasanoff, 2005). AI is a new frontier in which these attitudes can be practiced, reinvented and negotiated.
AI is central to contemporary visions for the future (Bostrom, 2014). These visions are often articulated in documents regarding the governance of technological innovation and can shine a light on attitudes toward potentially risky, emergent technologies, which in turn indicate the respective social, political and technological cultures. The United States and Europe are important in setting the global agenda for this technology, and both have released a series of documents that outline their respective approaches (NSTC 2016a, 2016b, 2016c; EC 2018a, 2018b; AI HLEG 2018). To utilize this opportunity, this thesis will explore how governments in the United States and Europe imagine the social changes connected to the technological emergence of Artificial Intelligence.
In order to shine a light on the political cultures in which technological emergence is embedded, the following chapters will explore the dominant visions, images and ideals associated with technological innovation in AI, as well as the expectations of benefits and risks projected onto AI in the policy documents from both the EU and the U.S. Furthermore, the imagined role of governance as well as society's envisioned part in the assessment, development and regulation of AI technology are explored to underline similarities and differences between policy cultures and their approaches to emerging technology.
2. State of the Art
Recent advances made in the context of various forms of machine learning (ML) are often grouped together under the term AI. Even though AI emerged out of the natural and computer sciences, it also has longstanding roots in the social sciences, which have put their tools to use in order to uncover the entanglements of the social and the digital world. This section will explore the foundation of this discussion and consider the contemporary research which provides the context for this research project.
2.1. Artificial Intelligence
Technology, from the Greek techne (skill) and logos (study of), was originally used to describe the study of skilled craft and only became connected to objects in the last century (Jasanoff, 2016). In contrast, today, technology often evokes images of computers, phones and other silicon-based electronics. This semantic change is surely connected to the impact that ICTs have had on our everyday life. Over the last decades computational power has increased tremendously, making things possible that were unimaginable half a century earlier. These early strides were made possible by rule-based software algorithms that could out-calculate the best chess players in the world by many orders of magnitude, but needed to be fed with vast libraries of rules made and coded by humans (Bostrom, 2014). The late 1990s and early 2000s saw a paradigm shift in the way sophisticated software is created, through the practical use of neural networks. These networks – collections of interconnected small units loosely comparable to the neurons of the human brain – have the crucial difference that they learn by being trained rather than by being explicitly programmed (Stone et al., 2016). Such systems crawl through training data sets and learn to recognize patterns, which allows for hitherto unprecedented capabilities like recognizing images, utilizing language and driving cars. In practice, these systems form a crucial part of today’s cutting-edge technologies: from digital assistants like Siri to social networks and search engines, all use some form of machine learning in their service (Stone et al., 2016).
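The contrast between explicitly programmed and trained software can be made concrete with a deliberately minimal, purely illustrative sketch (not drawn from any system discussed in this thesis): a single artificial neuron that recovers a simple logical rule from examples, where a rule-based program would have that rule coded by hand.

```python
# Rule-based approach: the decision logic is explicitly written by a human.
def rule_based_and(x1, x2):
    return 1 if (x1 == 1 and x2 == 1) else 0

# Trained approach: a single artificial neuron starts with arbitrary
# weights and adjusts them from labeled examples (perceptron learning rule).
def train_perceptron(examples, epochs=20, lr=0.1):
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - prediction
            # Nudge the weights toward the correct answer.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

# The same AND behaviour, recovered from data rather than coded.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = train_perceptron(data)

def learned_and(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
```

Both functions end up behaving identically on this toy task, but the decisive difference noted above remains: in the first case a human wrote the rule, in the second the rule is implicit in the training examples and the learning procedure.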
Harry Collins’ (1990) book Artificial Experts is a seminal STS contribution on AI in which he explores the relationship between people and machines and the difference between human action and machine intelligence. Collins argues that the knowledge of a human community can never be fully codified, just as machines can never translate the full spectrum of poetry. Artificial Experts looks at chess computers and pocket calculators and diagrams a typology of simple expert systems. This work is extremely valuable for giving perspective on technological change, as the fascination with pocket calculators seems almost uncanny from a contemporary standpoint. Nonetheless, some of the limitations of computational machines outlined there still hold true today. Collins’ work shows how codifying artificial reasoning can provoke new thinking about how the biological experts themselves attain, enact and translate knowledge. He demonstrates the importance of context in knowledge production, thereby paving the way for the social study of machine intelligence. As the imaginaries surrounding AI are heavily connected to specific visions of the future, this perspective shows the importance of focusing not just on technical details but rather on the social contexts surrounding the technology.
Other social research has focused particularly on the algorithmic nature of this technology. Algorithms are said to have the capacity to “shape social and cultural formations and impact directly on individual lives” (Beer, 2009, p. 994). Dalton (2012, p. 30) reiterates the importance of algorithmic thinking, which has been crucial to much of the thought of the preceding decades: “In the models of game theory, decision theory, artificial intelligence, and military strategy, the algorithmic rules of rationality replaced the self-critical judgments of reason. The reverberations of this shift from reason to rationality still echo in contemporary debates over human nature, planning and policy, and, especially, the direction of the human sciences.”
What algorithms actually are, or what the term means, however, is often not clear. For software engineers, it often simply refers to “the logical series of steps for organizing and acting on a body of data to quickly achieve a desired outcome” (Gillespie, 2016a, p. 19). Today’s algorithms, particularly those connected to machine learning, go beyond aggregating human-assigned values and are trained on a corpus of data, adding another step to the creation process. Gillespie (2016a, pp. 20-21) argues that “the values and assumptions that go into the selection and preparation of these training data may be of much more importance to our sociological concerns than the algorithm that’s learning from them”. This notion draws social scientists away from a more technical meaning and starts to unravel the complex social activity and values that are translated into these algorithms or, in the case of training data, into the selection of what the machine is trained on. The social judgement of what is relevant gets molded into measurable relationships and actionable indicators that produce the algorithmic output. This process tends to erase the people building the systems, creates black boxes that render the social complexity opaque or invisible, and serves as a tool to create distance in terms of accountability (Gillespie, 2016a).
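Gillespie's point about the selection of training data can be illustrated with a toy sketch (all names and numbers are hypothetical, invented for this example): the same trivially "neutral" learning procedure yields opposite decisions about the same applicant depending on which historical record it was given.

```python
# A deliberately simple "learner": approve anyone whose score is at least
# the average score of previously approved cases. The procedure itself is
# identical in both runs; only the training data differ.
def learn_threshold(approved_scores):
    return sum(approved_scores) / len(approved_scores)

# Two hypothetical historical records of approved applications.
balanced_history = [55, 60, 65, 70, 75]
skewed_history = [80, 85, 90, 95]  # e.g. one group historically over-approved

applicant_score = 72
decision_balanced = applicant_score >= learn_threshold(balanced_history)
decision_skewed = applicant_score >= learn_threshold(skewed_history)
# The same applicant is approved under one history and rejected under
# the other: the "bias" lives in the data selection, not in the code.
```

The sketch compresses Gillespie's sociological claim into a caricature, but it shows why the selection and preparation of training data can matter more than the learning rule itself.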
Through this opaque social process, algorithms are often connected to rationality and objectivity. This problematic assumption of value-free computation is explored in Diana Forsythe’s (1993a, 1993b) ethnographic account of knowledge-making in an AI scientific community. She argues that technological tools embody the values and assumptions of their builders. Describing various knowledge-related problems that affect AI systems when they interact with the real world, Forsythe uncovers that such problems are not purely technical but rather the result of nontechnical aspects, in particular the tacit assumptions the practitioners encode into their systems. This account makes it evident that algorithms and AI systems cannot be understood as cold, objective expert systems but rather as a set of complex and interwoven interactions between a machine and the environment it is built in.
The fact that algorithmic systems often encounter problems in the real world is reflected in research into algorithms that govern many aspects of social life and can amplify existing social prejudices. Biases can emerge in the context of use, and learning algorithms are susceptible to technical shortcomings (Friedman & Nissenbaum, 1996). Computational systems are not neutral but rather have embedded values (Nissenbaum, 2001) that can lead to unintended discrimination by creating the illusion of objectiveness (Garcia, 2016; Osoba & Welser, 2017; Rosenblat, Kneese, & Boyd, 2014). Data and training sets are designed by human beings; they are always products of the web of social interconnections from which they emerge. Only through interpretation do we give the output of these algorithms voice and meaning (Crawford, 2013). Conversely, algorithms can become a meaningful element of the public itself by moving culture, through mechanisms of distribution and valuation, as “part of the process by which knowledge institutions circulate and evaluate information, the process by which new media industries provide and sort culture” (Gillespie, 2016b). When ICTs are so deeply ingrained in our social web, they have the power to wipe out billions in the global market. This is exemplified by the false tweet that read ‘Two Explosions in the White House and Barack Obama is injured’, which caused a 136 billion dollar loss in the Standard & Poor’s 500 Index after algorithmic trading software misread the tweet and triggered a chain-reaction sell-off (Karppi & Crawford, 2016). Powerful algorithms often work in places of public discourse, particularly in various forms of online networks and news aggregators, where incompatible perspectives coexist. However, algorithms are frequently designed to produce clear winners, often with little accountability. This makes algorithmic machines, which are often portrayed as agnostic logic-makers, inherently political.
This complexity is often shrouded in a black box inaccessible to those subject to the algorithmic results (Crawford, 2016). The widespread use of AI, in combination with its many useful applications, makes it an important area of interest, particularly as unintended consequences and tacit assumptions built into the systems have been a cause for concern. With policy makers increasingly worried “that individual autonomy is lost in an impenetrable set of algorithms” (Executive Office of the President, 2014, p. 10), it therefore becomes important to explore how the social and political sphere interfaces and interacts with these technologies.
2.2. Governing Emerging Technologies
Life in the last century has seen many changes compared to that of a Roman soldier nervously observing Germanic tribespeople washing their clothes in the Rhine, or that of a feudal peasant working the wheat fields of his local lord. The daily life of these people might have been technologically very similar to that of their ancestors many generations earlier. In contrast, the life of a soldier at the beginning of the First World War encountering a cavalry attack was meaningfully different from that of the same soldier facing early tanks at the end of the same conflict (Howard, 1992). This rapidly changing environment has not stabilized, and in modern society technological change proceeds at a pace that is different in nature from premodern times. Ulrich Beck (1992; 2009) argues that this change produces new forms of risks that constantly require us to respond and adjust. Today’s society faces not just natural disasters or dangers from other people; advances in technology, together with economic growth and globalization, have led to serious new hazards like climate change as well as changes to social structures. These new risks are layered on top of already existing ones. Society is reconfiguring itself in order to deal with the new risks, thus creating a reflexive risk society. Reflexive modernity involves the management of risky circumstances under conditions of such complexity that outcomes cannot be foreseen. Beck argues that “[n]ature can no longer be understood outside of society, or society outside of nature” (1992, p. 80). This opens the room for a scientization of politics (Weingart, 1999), as with the “societalized nature, the natural and engineering sciences have become a branch office of politics, ethics, business and judicial practice in the garb of numbers, despite the external preservation of all their objectivity” (Beck, 1992, p. 82).
Science and emerging technologies are deeply interdependent with science enabling new innovations and vice versa. The role society is attributed in this process has shifted over time and the governance of science has seen a move from risk to innovation governance. Classically, science has been conceptualized through the Mertonian (Merton, 1973) norms of universalism, communism, disinterestedness and skepticism. In this context scientists are thought to hold themselves to an ethos that is institutionalized and reproduced, thereby binding individual scientists to these sets of norms. Science is envisioned as a self-governing system with a separation of the epistemic core of scientific knowledge production from society. This notion has been thoroughly deconstructed since the 1970s but these norms have still remained part of a scientific identity as a myth and ideal to strive for (Felt & Fochler, 2008).
A core concept recognizing and acknowledging the entanglement of science and society is that of Mode 2 science (Nowotny, Scott, & Gibbons, 2001). The authors argue that through co-evolution science and society become increasingly interdependent: science is progressively entered by societal actors like patient organizations (Rabeharisoa & Callon, 2004) and by societal rationales, with the science system gradually changing from a segregated to an integrated model of internal organization. As visions of economic applicability gain an increasing foothold in the core of science, the challenge is to not only “consider the societal impacts [of] the knowledge produced, but even reflexively take into account the influence society has on the production of knowledge” (Felt & Fochler, 2008, p. 4).
Integrating the public is often connected to a traditional public understanding of science (Bodmer, 1986), in which the public is thought to have knowledge deficits that prevent it from understanding the real upsides of technoscientific developments outlined by experts. This deficit model (Wynne, 1982) is based upon a belief in the public’s lack of knowledge, to which the solution is often seen in merely providing more correct information to educate the public so that it understands the issue as it is understood by experts. This notion has been shown to be false and simplistic, the seminal example being sheep farmers in the United Kingdom who had more accurate knowledge of radiation in their region than the scientific experts (Wynne, 1992). In recent decades the deficit model has fallen out of favor and has been replaced by new buzzwords like public engagement with science or public participation, emphasizing a culture of consultation and dialogue (Felt et al., 2009; Irwin, 2006). These wordings are intended to suggest that citizens are more actively involved in the policy process in order to increase trust in science and expertise, even though it often remains vague who actually represents society and how (Felt et al., 2009).
In the context of public concerns, and in combination with the discursive shift towards more inclusive vocabulary, governance institutions have increasingly responded with programs like the Ethical, Legal, and Social Implications (ELSI) program in the United States. The Human Genome Project was the first major scientific endeavor to include a reflection on the ethical and social dimensions of the research done in the project (Hilgartner et al., 2016). It initiated the ELSI movement with the goal of helping “social policies about science evolve in a well-informed way” (National Institutes of Health, 1993, p. 48). The European counterpart uses ‘Aspects’ instead of ‘Implications’ (ELSA) in order to circumvent the deterministic connotation of ‘Implications’ (Hilgartner et al., 2016). ELS in various forms is still part of research platforms with hundreds of millions in research funding (Hilgartner et al., 2016). Such initiatives come under various labels, including Responsible Research and Innovation (RRI) (Stilgoe & Guston, 2016). RRI, particularly widespread in Europe, can be seen as an alternative to ELSA that in part reflects the “recognition of the limitations of extant policy approaches to managing ethically problematic innovations such as GMOs” (Stilgoe & Guston, 2016, p. 1). Other approaches include real-time technology assessment (Guston & Sarewitz, 2002) and anticipatory governance (Guston, 2014), which attempt to integrate innovation and society in mutually beneficial ways by emphasizing early-stage anticipatory work combined with an intensification of democratic engagement that involves publics in early-stage deliberations. Such approaches reject the initial focus on impacts that was characteristic of the U.S. ELSI program.
In fact, critical scholars have argued that social science and ethical research tend to be introduced too far downstream in the innovation process (Wilsdon & Willis, 2004). Particularly when assessing risks linked to the implementation of technology, public engagement is set downstream, with the bulk of institutional commitments that set the technoscientific ‘ball rolling’ already in place, resulting in a momentum that is difficult to change. The window for more fundamental questions has already closed. Therefore, ELS-related models can be more productive if they are not set at the final implementation stages of the innovation process, and scholars have argued that critical reflection and public interaction should move upstream, for example by engaging laypeople in discussions on the ethical questions of certain new technologies (Felt & Fochler, 2008; Felt et al., 2009). While upstream engagement should be welcomed, what is actually meant by it is often highly ambiguous. As Felt and Fochler (2008, p. 489) note, “the meaning of participation is mostly defined top-down”. They find that participatory approaches are not always welcomed by the public itself, or only on a very abstract level. This shows that the inclusion of publics is no simple recipe for modern policymaking and underlines the complexity of meaningful public participation and democratic decision-making.
The various forms of ELS models, initially connected to genetics, have dispersed into a wide variety of emerging technologies such as nanotechnology (Roco et al., 2007) and synthetic biology (Calvert & Martin, 2009), representing a push to deploy ethics programs as a new tool for the governance of emerging technology (Hilgartner et al., 2016). While there have been reflections on the deterministic aspects of the initial ELSI program, Marris (2015) points out that ELS programs often remain focused on managing public concern rather than concentrating on the sociotechnical ramifications of technology. Ethics is used as a policy instrument that influences and appeases public opinion. It constitutes a valuable resource for creating authority and legitimacy in the democratic governance of science and technology by providing an alternative to written law, exercising power in a more indirect manner. For example, critical scholars have argued that ELS programs are the handmaiden of genomics (Zwart & Nelis, 2009). Moreover, ELS initiatives have been criticized for their views on the nature of science and technology, such as the fact/value distinction (Hilgartner et al., 2016). The ELS discourse often frames technological change as necessarily beneficial (Hilgartner, 2008) and assumes the neutrality of science and technology (Hilgartner et al., 2016), which STS scholars have long shown to be embedded in normative and political choices rather than neutral. These aspects make ELS a clear, linear narrative that leaves out nuance but is easy to utilize as a mode of governance able to “reframe and thereby help ‘close’ controversies, for instance, by recasting uncertainties as matters for personal or organizational reflection rather than formal regulation” (Hilgartner et al., 2016, p. 834).
Felt et al. (2007) describe how European policy making on science and technology has a tendency to suppress the expression of normative questions, political values and democratic ambitions. The authors argue that policy discussions tend to both simplify and exaggerate the role of science in risk assessment and to obscure that risk science is itself tacitly shaped by certain interests, assumptions and social values. Normative discourses are seen as explicitly scientific and articulated in terms of appropriate regulation of science through ethics. Felt et al. (2007, p. 43) observe that these normative discourses can have legal or quasi-legal functions and are part of a larger shift away from explicit regulation towards soft, non-legally binding instruments “such as codes of practice, fiscal incentives, audit and reporting measures—in short, by the shift from legislatively authorized government to administratively implemented governance”.
These politics of ethics are part of a global shift to ELS modes of dealing with emerging technologies; however, this specific expression is advertised as a “distinctive element around which the political European community can and should be built” (Felt et al., 2007, p. 79). The politics of ethics de-politicizes technoscientific issues and is a self-legitimating way to serve the same functions as politics by neutralizing political issues through the introduction of norms outside the more complicated and rigid process of law-making. Such measures tend to evoke society without involving it and to pay only lip service to democratic concerns, while in reality commonly shared European ethics are merely a product of expert deliberations (Felt et al., 2007). Criticism and public debate are pushed to the sidelines when ethics is represented as solely a matter of expert judgment. A common justification is that conventional law-making is found to be too slow, inflexible or unresponsive to meet the dynamism of modern technoscientific innovations. Particularly in the EU, the narrative of competitiveness as a state of emergency is the rationale for overriding democratic principles that are perceived to be more sluggish (Felt et al., 2007).
The ethicization of governance is a convenient tool for the governance of emerging technologies that occurs not only in the EU but across various incarnations of ELS-type policymaking. It involves ethics in the process of emerging technologies and gives society a space on technoscientific issues. However, this superficially desirable policy tool has to be viewed with caution, as it often involves democratic participation only narrowly or even serves as a tool to remove democratic involvement under the guise of shared values or expert-discovered ‘objective’ truths. If at all, democratic input is often only valid downstream, after ‘factual truths’ have already been revealed.
2.3. Imagining Emerging Technologies
ELS-like initiatives can be a crucial place where the co-production of knowledge and social order takes shape, as “the ways in which we know and represent the world (both nature and society) are inseparable from the ways in which we choose to live in it” (Jasanoff, 2004, p. 2). Broadly, the idea of co-production in the context of emerging technologies can be understood as “our inventions change the world, and the reinvented world changes us” (Jasanoff, 2016, p. 1). Jasanoff (2004, p. 38) suggests that the idiom of co-production is especially valuable at times of “emergence and stabilization of new technoscientific objects and framings” as well as the “adjustment of science’s cultural practices in response to the contexts in which science is done”. Moments of co-production can be a window into how new technologies are recognized, how meaning is assigned to them and how they are endowed with legitimacy, which is particularly relevant in the context of policy papers addressing the emergence of AI.
Following Jasanoff (2004), co-production occurs along four pathways: “making identities, making institutions, making discourses, and making representations” (Jasanoff, 2004, p. 38). These pathways can guide us to look at identities being constructed and to follow institutions in their effort to maintain and establish credibility in order to stabilize what is known and how it is known. AI, seen as an important cornerstone of technological and societal development, is a site where such identities can be reinforced or constructed, as collective identities are a powerful resource for establishing a sense of order in an environment confronted with the emergence of a novel technological development. Jasanoff (2004, p. 9) describes this vividly: “when the world one knows is in disarray, redefining identities is a way of putting things back into familiar places”.
Institutions, such as the ones that published the documents discussed in this thesis, play a crucial part in worldmaking and identity construction. They utilize instruments such as policy papers for ordering knowledge and ordering society to create stability in times of uncertainty. Jasanoff (2004, p. 40) describes them as society’s inscription devices, “vehicles through which the validity of new knowledge can be accredited, the safety of new technological systems acknowledged, and accepted rules of behavior written into the as-yet-unordered domains that have become accessible through knowledge-making”. Institutions also serve as sites for the reaffirmation of political culture and can serve as a comparative point of reference to highlight differences in regulatory practices. Simultaneously, “discursive choices also form an important element in most institutional efforts to shore up new structures of scientific authority” (Jasanoff, 2004, p. 41), as creating order is closely connected to ways of describing novel phenomena and persuading audiences. The emergence of AI represents a moment of co-production where institutions can express their worldmaking and sensemaking through policy papers that reaffirm or redefine their respective political culture and create a sense of order in the face of an uncertain future.
The idiom of co-production helps us to understand how things fit together; however, to explain how things come to be as they are, it is valuable to complement the framework of co-production with conceptual tools that help us structure our responses to the world (Jasanoff, 2015a). Like hammers and drills, they help to work on data, ordering information to make hidden interrelations visible. One such tool is the concept of sociotechnical imaginaries developed by Jasanoff and Kim (2009), which allows the study of the complex entanglements of scientific and technological changes with other dimensions of social life. Jasanoff (2015, p. 4) understands sociotechnical imaginaries as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology”.
Visions of science and technology almost always have images about the social consequences, dangers, benefits and the collective good connected to them. Imagined futures direct or inform investments, policies, and public perception. They are deeply embedded in the research and innovation process. Therefore, sociotechnical imaginaries can help to understand why some ideas are co-produced while others stay dormant (Jasanoff, 2015).
Science, technology and innovation are deeply intertwined in the production of collective visions of good and attainable futures, as imaginaries have the power to shape technological design, channel research funding and rationalize the inclusion or exclusion of citizens and their place in the narrative of technological progress. AI is deeply entangled with desirable futures, and the imaginations expressed in policy papers represent a mode of ordering the uncertainty of technological emergence to make such desirable futures attainable. Inversely, AI is also linked to “fears of harms that might be incurred through invention and innovation, or of course the failure to innovate” (Jasanoff, 2015, p. 5). AI is the playground of an intricate dance between positive and negative imaginations of utopias and dystopias that create a sense of direction and order by painting a picture that serves as a guiding light of benefits worth striving for and fears better circumvented. Such imagined futures help to align investments and to reaffirm or recreate the role of the state in the stewardship of the public good. Governance institutions are important sites for the construction and implementation of sociotechnical imaginaries, which serve as goalposts for creating direction in policy making and as instruments of legitimation (Jasanoff, 2015).
Hilgartner et al. (2016) note that with their focus on institutions and identities, sociotechnical imaginaries and co-productionist thinking offer a valuable avenue to study epistemic and normative arrangements that simultaneously make knowledge and order in the context of emerging technologies. Digital innovation is a very contemporary site that creates disorder and attracts the creation of images that in turn produce order. For example, Mager (2016) shows how the sociotechnical imaginaries of search engines inform and shape the public policy of the European Union and construct a European identity in the envisioned politics of control. Beyond this, comparative research has offered a fruitful method for studying imaginaries, exemplified by Sheila Jasanoff and Sang-Hyun Kim’s (2009) exploration of nuclear power in the U.S. and South Korea. While both countries have imagined nuclear power and nationhood together, the U.S. presented itself as a responsible regulator of a potentially runaway technology that needed responsible containment, whereas South Korea conceptualized nuclear power in terms of development and as a symbol of power. The different imaginaries have evolved into diverging power-plant designs but, more importantly, are connected to differing responses to nuclear shocks, risk assessment, and radioactive waste management.
Such differences in attitudes and policies are deeply intertwined with civic epistemologies, the “institutionalized practices by which members of a given society test knowledge claims used as a basis for making collective choices” (Jasanoff, 2005, p. 255). These culturally specific ways through which publics imagine governmental knowledge and reasoning to be produced or molded into decisions have been explored in Jasanoff’s (2005) analysis of biotechnology regulation in Germany, the United Kingdom, and the United States. Her analysis sheds light on the ways different nations approach ethical questions and risk by highlighting how institutional structures, regulatory pathways, and internal logics develop along nationally distinct cultural attitudes. Some regulatory approaches categorize biotechnology as manageable, while others classify it as novel and perceive it as risky. Such differences help to explain diverging levels of acceptance of genetically modified food, which remains much more controversial in the United Kingdom than in the U.S. or Germany. Such differing modes of public reasoning challenge a common world view and highlight differences such as the prohibition of embryonic stem cell research in Germany, a practice that is comparatively uncontested in the United Kingdom. Jasanoff identifies differing civic epistemologies that guide how governance is imagined and enacted, such as the grounding of authority in impersonal objectivity in the U.S., in contrast to trust in specific individuals in the United Kingdom. Ethics backed by the idea of scientific objectivity can be a center of co-production where experts imbue their judgement with scientific authority to render their position irrefutable and decide which forms of reasoning are allowed to legitimately address the democratic governance of technology (Hurlbut, 2015).
This reasoning can serve as a way to relieve innovators of responsibility by legitimizing frameworks that automatically render innovation ethical and thereby unchallengeable (Hilgartner et al., 2016).
GMO policy has been framed largely in favor of innovation, emphasizing the positive impacts of the technology (Daele, 2007). The public, however, is also concerned with institutional contexts that have relied predominantly on risk assessment and have taken the upsides of GMO technology for granted (Mayer & Stirling, 2004). Indeed, Motta (2014, p. 1363) concludes that “public controversies on GM crops show that these do not result from the lay perception of risks and that expert advice does not solve tensions”. Furthermore, as Vogel (2012, p. 292) notes, “the public typically does not view risks in isolation; rather, it links them to other risks about which it has heard”. For example, regulatory failures such as the mad cow disease controversy have undermined the public’s trust particularly in European regulatory institutions (Jasanoff, 1997a, 2005; Vogel, 2012).
Sociotechnical imaginaries play a crucial role in such dynamics by framing the ways societies assess and govern technology. Not only within biotechnology but also in other emerging technologies like nanotechnology (Burri, 2015; Macoubrie, 2006), sociotechnical imaginaries create expectations about future technical and social structures of order that shape spaces of possibility. In fact, the expectations surrounding nanotechnology reflected extremely positive expert visions, imagining “almost limitless possibilities enabled by advances at the nanoscale and its convergence with modern biology, the digital revolution and the cognitive sciences” (Kearnes & Macnaghten, 2006, p. 281). The public perception of nanotechnology, however, is not homogeneous, and Gaskell et al. (2004) found that the U.S. perception was much more positive than that of the European public, which the authors trace to a more cautious European tendency regarding emerging technologies. Some of the skepticism can be connected to previous experiences with controversies like GMO (Jasanoff, 2005) and BSE (Jasanoff, 1997a), but what makes the response to nanotechnology particularly interesting is that “the GM experience represents a warning, a cautionary tale of how not to allay public concern. Avoiding nanotechnology becoming the next GMO controversy is seen as critical to the public acceptability of applications in the field” (M. B. Kearnes, Macnaghten, & Wilsdon, 2006).
Following Kurath and Gisler (2009), preceding technological controversies have resulted in a shift towards more democratic engagement. In the U.S., for example, there has been some recognition that “ELSI programs lacked any mechanism to affect the innovation process itself” (Macnaghten, Kearnes, & Wynne, 2005, p. 270). Bowman and Hodge (2007) show that public engagement processes in the context of nanotechnology have been incorporated in the United Kingdom, United States, Germany, and Australia. They find that where backlash against GMO was particularly pronounced, as in the United Kingdom, public dialogue has been oriented much more upstream. Such upstream public engagement is generally considered desirable and often meets with less criticism than engagement that happens downstream, after much of the innovation momentum is already in motion. However, in an exploration of such upstream initiatives in Europe and the U.S., Rogers-Hayden et al. (2007, p. 129) find that without proper consideration they “might end up re-producing out-dated forms of science communication or being rejected as a failed concept before it has even matured”. Therefore, a balance has to be struck that reflects the complexity of meaningful public participation and democratic decision-making.
Nano- and biotechnology have a longstanding history of comparative analysis utilizing co-production and sociotechnical imaginaries as tools to look at the functions and meanings of technology and its relation to society in different countries. Understanding such visions helps to see which spaces of possibility are opened and which are closed, and highlights how publics are envisioned in the context of emerging technologies. Such approaches are also particularly fruitful in scrutinizing the ethicization of governance in emerging technologies. AI is an exciting frontier that combines these elements, providing an avenue to deploy the outlined toolset of concepts in a field that is as novel and controversial as nano- and biotechnology were at their arrival on the global regulatory stage. The following chapter will outline a research question that operationalizes this open space into a concrete inquiry of national documents addressing the governance of AI.
3. Research Question
AI is often considered one of the defining technologies that will shape the prosperity and power dynamics of the twenty-first century. In the last five years, many governments around the world have tried to implement policies intended to foster AI innovation in order to position themselves as leaders in a space that is thought to be the next global paradigm shift. Europe and the United States are thought leaders in this field and have published policy documents aimed at deliberating the future progress of AI and defining national policies or research strategies, which should serve as signposts for subsequent decisions. Sheila Jasanoff (2004) has shown that ways of knowing the world are firmly entangled with the ways social organizations and members of society seek to control and govern the sociotechnical environment they live in. This co-production of technological and scientific knowledge and social identities is often expressed in policy documents that articulate ways of knowing the world and ordering it. The visions of technological innovation expressed in such documents reveal attitudes toward potentially risky, emergent technologies, which indicate the respective social, political, and technological cultures they are negotiated in. This leads to the overarching question: How do governments in the United States and Europe imagine the social changes connected to the technological emergence of Artificial Intelligence in policy documents?
In order to trace such visions of AI through different regulatory cultures, this paper investigates the meanings and roles of technology and innovation relative to society as envisioned in documents outlining a national AI policy and research strategy. The first sub-question explores how the role of technology and innovation in the emergence of AI is imagined. Tackling this line of inquiry will highlight dominant visions, images, and ideals of emerging AI technology and their place in the construction of national identities. This helps to understand how the meanings and functions of technology and innovation are seen through the lens of the different regulatory cultures in the EU and the U.S. and explores how technology is conceptualized in relation to the social fabric it is embedded in.
A second sub-question probes the heterogeneous set of expectations connected to AI technology by asking how benefits and risks associated with AI are envisioned. Diving into this question can highlight different outlooks on the gains, profits, risks, and costs projected onto AI at this defining stage of laying the policy foundations for what is assumed to be an impactful technological change. This line of questioning aids in recognizing the hopes and improvements that are envisioned and how they are weighed against the dangers anticipated to accompany AI, highlighting the symmetries and asymmetries with which AI technology is evaluated.
Following the role society is envisioned to inhabit in the organization of a new, emerging technology leads to a third sub-question asking how the role of governance and state-society relations is imagined in AI policy documents. Pursuing how perceived risk is governed can help to appreciate differences in regulatory cultures and their approaches to balancing the costs and benefits of emerging technology. This is especially relevant for understanding what role citizens play in the assessment and regulation of AI technology and their envisioned role in the maintenance and nurturing of a social fabric that is expected to change.
This approach is in line with comparative research on emerging technology like Burri (2015), who engages with political cultures in the U.S. and Germany in the context of nanotechnology and analyzes documents along similar lines of inquiry. In order to shed more light on the outlined questions, this thesis therefore analyzes three clusters of interest: first, the dominant visions, images, and ideals associated with technological innovation in AI; second, the expectations of benefits and risks projected onto AI; and third, the imagined role of the governance of AI in the policy documents as well as how society is imagined to be part of its assessment, development, and regulation.
4. Methodological Considerations
Scientific and technological innovation continually remakes society, which reciprocally shapes, subsidizes, restricts, manages, and redirects innovation. One methodology widely used to investigate this in the context of differing political cultures is to explore cross-national differences in risk assessment and regulatory practice through qualitative document analysis (QDA). This systematic procedure for reviewing and evaluating documents requires the data at hand to be methodically selected, examined, and interpreted in order to elicit meaning, gain understanding, and develop empirical knowledge (Bowen, 2009). In accordance with existing research frameworks, the following section develops a systematic approach for finding and handling the data needed to answer the guiding questions of this paper.
4.1. Structuring the Material
The practice of a given method is mostly not static over time but rather subject to innovation and change. Altheide et al. (2008) conceptualize QDA as an emergent methodology in which the search process and the interaction between the researcher and the subject matter are central to forming a coherent framework that still leaves room for built-in flexibility. Choosing material that can give insights into the respective research questions is a crucial part of the QDA process itself and needs to be developed systematically. Altheide et al. (2008) suggest familiarizing oneself with the process of creation and the context of the information sources and becoming acquainted with examples of relevant documents as the first steps of diving into the data collection. O’Leary (2004) suggests, as a first step, creating a list of documents one wishes to explore and considering ethical issues, linguistic or cultural barriers, and the accessibility and authenticity of the material at hand. To assure consistency, the documents should be connected by defined inclusion criteria. Altheide et al. (2008) emphasize the iterative nature of the data collection process, which includes searching for information based on relevant keywords and related terms as well as the construction of protocols outlining basic information on the documents, which are revisited and refined as this phase of research progresses. During this stage, a “first-pass document review” (Bowen, 2009, p. 32) can provide a means of identifying meaningful and relevant content and refining the composition of the material.
With these guiding practices in mind the material was collected along the following five steps:
i. Familiarize with the topic and relevant documents
ii. Set inclusion criteria and categories for documents
iii. Search for and collect relevant material along established categories
iv. Conduct a first-pass document review
v. Draft a data collection sheet
Throughout these stages, this research project was subject to a constant, dynamic, and iterative revision process that moved through the five steps in a non-linear manner in order to refine or expand the material collected as well as the information gathered in the data collection sheet.
i. Familiarize with the topic and relevant documents
As this first explorative step is very broad-reaching and dynamic, I will give a brief personal account highlighting key experiences that shaped my engagement with the topic throughout the research project. In order to familiarize myself with the topic, I attended a seminar at the University of Vienna on Critical Algorithm Studies, which tackled questions such as how culture and society influence the creation of algorithms and vice versa, or what possible (social) futures are currently being imagined in the AI space. The seminar gave an overview of the academic status quo and refined the questions that could be asked. Furthermore, I closely followed the Asilomar Conference on Beneficial AI (Tegmark, 2017) online, a conference organized by the Future of Life Institute in which more than 100 thought leaders and researchers from public and private backgrounds met in order to address and formulate principles of beneficial AI. The event gave insight into how AI policies are discussed on an international level but also brought concrete documents like the National Artificial Intelligence Research and Development Strategic Plan of the United States to my attention, as people who were involved in the drafting of this document spoke at the conference.
Furthermore, I attended the ICT2018 conference in Vienna, which discussed the European Union’s priorities in the digital transformation of society and industry. AI was a central theme of the conference, with Artificial Intelligence – the European way being the headlining keynote on the first day. Listening to speakers directly involved in the creation of AI policy provided some perspective on the European approach to engaging with AI and how the creation process was moved forward. Moreover, the publication date of the European Coordinated Plan on Artificial Intelligence was announced by the Deputy Director-General for Communications Networks, Content and Technology, who is involved in creating the European AI approach. In the context of this event I was able to join the European AI Alliance, a forum engaged in a broad and open discussion of all aspects of AI development and its impacts in Europe. This forum allowed me to passively participate online in the workshop of the European High-Level Expert Group on AI, which discussed the draft European AI Ethics Guidelines and the Policy & Investment Recommendations. My participation in the European AI Alliance was conducted under full disclosure of my research interests and was limited to an observing role.
ii. Set inclusion criteria and categories for documents
The inclusion criteria on the basis of which the documents are chosen are intended to reflect the underlying research question, which is ‘asked’ in the systematic analysis of the documents. They also help to ensure the material is focused and should mitigate bias resulting from the selection process. Bowen (2009) suggests that a wide array of material providing a preponderance of evidence is optimal, although the quality of the documents and the evidence they contain should always be the leading consideration. The following set of criteria was developed to frame the documents considered in this thesis:
In order to ensure operationalization and reduce linguistic bias, the document has to be published in English (1). Material has to be published by an official governmental institution (2) and made available to the public (3) in order to prevent ethical or consensual issues and to assure availability as well as representativeness. To guarantee consistency and a focus on visions of an emerging technology, the publication has to be a policy document, communication, or plan asserting a larger policy direction of a given governmental body (4), not an explicit law or regulation concerned with a specific situation or occurrence. AI is a very loosely defined term, often encompassed by or encompassing other issues of the digital age. Narrowing down the material to documents explicitly mentioning Artificial Intelligence (AI) in their title (5) makes it possible to highlight how AI is conceptualized and delineated from other aspects of digital technologies and why it is thought to be important, new, or emergent. The policy push towards managing AI has been a phenomenon of the late 2010s, with the technology’s popularity as well as utility being acknowledged by public institutions predominantly since 2016, which therefore serves as the cutoff point for the earliest publication date.
An important aspect to consider with regard to this thesis is which global regions should be included in the analysis. A short review of national AI media coverage, conducted in preparation for this research project, shows that mostly the U.S., China, and Europe are at the center of attention (Berggruen & Gardels, 2018; Dvorak, 2018; Vincent, 2017), with the U.S. and China being portrayed as the undisputed frontrunners. Russia is mostly mentioned in the context of military capabilities (Simonite, 2017), not technological leadership in innovation, and other regions or countries with significant public investments or industrial capabilities in AI, like Japan, South Korea, and Israel, are only sparsely mentioned.
Figure 1. Global AI Publications
[Figure not included in this excerpt]
Note. Data for this figure was obtained from a search of the Web of Science for “artificial intelligence” or “deep learning” or “deep neural network”, for any publication classified as “Article” in the years 2016, 2017 and 2018. The figure displays the top 50 countries, with European countries grouped into a visualization of the European Union. Graphic by author.
In order to build a more refined image of the global AI landscape, it is valuable to also consider some quantitative data points as references for a more detailed qualitative analysis. While such data points never paint a complete picture, they can help to narrow down areas of interest. One of the most interesting avenues to consider is the academic output of a given region, which helps to determine where most research is happening. A look at the number of AI-related publications (Figure 1) clearly distinguishes three regions: China, Europe, and the United States. An article published in the MIT Technology Review comes to the same conclusion (MITTR, 2018): China and the U.S. dominate the landscape. Individual European countries cannot compete, but if considered as a European block, a distinct picture of three significant global regions emerges.
Figure 2. AI startup landscape
[Figure not included in this excerpt]
Note. Adapted from Lemaire, A., Lucazeau, R., Rappers, T., Westerheide, F., & Howard, C. (2018). A Strategy for European AI Startups. München: Roland Berger. The figure shows the Top 20 countries and regions with more than 19 startups listed. European countries are grouped into a visualization of the European Union. Graphic by author.
In order to capture the hotbeds of emerging technology, it is useful to look not only at academia but also at startups, which indicate where talent is clustering and technology is being brought to market. A global survey of 7,500 companies (Figure 2) shows a similar picture (Lemaire et al., 2018). While the U.S. is dominating the global AI startup space, Europe and China house a significant part of the global AI startup community.
Figure 3. Global AI Patents
[Figure not included in this excerpt]
Note. Figure 3 shows countries/regions with more than 100 entries in the Espacenet patent database for “artificial intelligence” or “deep learning” or “deep neural network” in the years 2016, 2017 and 2018. This excludes 8329 non-region-specific patents filed with the World Intellectual Property Organization (WIPO). Graphic by author.
Researching AI often requires significant resources, both in terms of specialized talent and in terms of computing power or the amount of available data. Therefore, looking not only at startups but also at the patents filed around the world might shed some light on the global landscape, as these are often filed by large corporations rather than small companies. A search of the Espacenet patent database (Figure 3) shows China and the United States as the centers of intellectual property regarding AI.
This very brief look at the AI space could only superficially give a glimpse of where AI technology is developed; however, it gave a certain sense of direction. Europe, China, and the United States are the leading global regions in terms of academic papers and patents published as well as in the size of the startup industry. The regulatory impact in those regions will have the greatest footprint in shaping the opportunities and restrictions placed on the AI space. With these guiding directions and the outlined inclusion criteria in mind, Europe and the United States remain the overarching categories within which the material is collected. This is especially due to the language complications that would arise with Chinese documents.
iii. Search for and collect relevant material along established categories
The search led to the following documents:
- Preparing for the Future of Artificial Intelligence (US, October 2016) (NSTC, 2016a)
- The National Artificial Intelligence Research and Development Strategic Plan (US, October 2016) (NSTC, 2016b)
- Artificial Intelligence, Automation, and the Economy (US, December 2016) (NSTC, 2016c)
- Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe (EU, April 2018) (EC, 2018a)
- Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on a Coordinated Plan on Artificial Intelligence (EU, December 2018) (EC, 2018b)
- Ethics Guidelines on Trustworthy AI (EU, April 2019) (AI HLEG, 2019)
These six documents reflect the categories and criteria developed in the previous section. In total there are three documents from the United States and three from Europe.
iv. Conduct a first-pass document review
During the first-pass document review, information on both the basic content structure and key facts like publication date and authoring organization was gathered. This process yielded the discovery of themes discussed in both sets of documents, like questions about national identity, which made it possible to specifically screen for themes such as different images of technology in relation to the building of a common identity. Building up a broad overview of the documents aided in creating an understanding of the similarities and differences that can be explored in a comparative framework.
v. Draft a data collection sheet
Altheide and Schneider (2012) see the data collection sheet as a way for the researcher to ask questions of a document. For the qualitative analysis of documents, Altheide and Schneider (2012) recommend keeping the data collection sheets fairly concise, covering only the basic categories and crucial information about each specific document. Such a document was kept as a companion document for draft notes as well as broad data points and basic categories. The basic information categories were title, authoring organization, publication date, and length. The sheet also included a brief section on the major topics discussed, which followed the structure of the table of contents of the respective documents, and a brief summary that was later used as the basis for the introduction of each document in the analysis section (Chapters 5 and 6). Furthermore, the sheet included space for initial, broad observations like the previously mentioned emphasis on national identity, which came to light in the first-pass document review.
4.2. Document Analysis
Document analysis is a form of qualitative research in which documents are interpreted in order to find and give voice and meaning to a social phenomenon (Bowen, 2009). In particular, the material described in the previous section can give insight into the guiding regulatory cultures, motivations, processes, or underlying values which inform how political institutions make sense of an emerging technology. Silverman (2015) sees documents and texts as naturally occurring data. But this data does not speak for itself; rather, it must be made to speak by the analyst. This means that, in order to make sense of the regulatory cultures ingrained in these documents, the central object of interest is how the documents go about constructing or contesting reality rather than their truth-value or whether a certain policy is actually implemented or beneficial. Atkinson and Coffey (2011, p. 90) accentuate this: “Texts are constructed according to conventions that are themselves part of a documentary reality. Hence, rather than ask whether an account is true or whether it can be used as ‘valid’ evidence about a research setting, it is more fruitful to ask ourselves questions about the form and function of texts themselves.”
With this basic understanding of documents, the analytical approach with which this thesis confronts the material is focused on a form of pattern recognition that searches for emerging themes. In order to sharpen the organizational approach of Altheide et al. (2008) to fit this specific research project, the analytical part principally follows Bowen (2009) and utilizes thematic analysis (Rivas, 2012) to uncover regulatory themes pertinent to AI regulation in Europe and the United States. The zig-zag approach utilized in this project moves from the initial collection of material and first-pass review to further material collection and then towards coding.
This process is initiated by reading and re-reading the data several times before formally coding it, to gain a level of immersion in the data that increases sensitivity to its meanings. During this process, memos serve as a way of ‘holding that thought’ and keeping note of problems and ideas that arise from the initial reading. These memos capture broad impressions that seem significant and help to find commonalities or differences across the different documents. They can range from concrete discoveries, like the consistent repetition of certain images or words such as ‘avoid’ or ‘manage’, which then help to look for similar expressions in the respective documents, to abstract ideas like “policy has impact but it is little mentioned where policy actually had an impact”, which still have to be developed into more concrete themes valuable for comparison. The memos serve as a repository for ideas that might be interesting to explore further, like “A section specifically on positive visions – Symmetry”.
Gradually, this process moves towards a more formal, refined form of coding. The open coding used progressively builds up the codes as more and more data is processed. Words and sets of words are used as labels for chunks of data that capture a part of the literal essence of the data (Rivas, 2012). To some extent, a deductive approach framed the search for themes under the umbrellas of Technology and Innovation, Benefits and Risks, and Governance and Citizens. These broad areas of interest were enriched by categories like determinism, competitiveness, symmetries, risk handling, or democratic participation, which were inspired by similar research projects that successfully used these thematic areas of inquiry to question policy documents through a sociotechnical lens. In particular, Burri’s (2015) exploration of nanotechnology in Germany and the United States helped to build an understanding of how a successful analysis of imaginaries in policy documents can be operationalized and provided direction for the coding.
However, Burri’s (2015) structure had to be adapted to fit the material, which resulted in the three aforementioned areas of interest. The development of the concrete structure and the detailed coding process was mostly inductive, meaning the texts were explored along broader questions and the precise themes were suggested empirically by the data, not defined beforehand. Thereby the material itself helped to guide the construction and structure of the thesis. This inductive process was aided by asking questions of the material: for example, how frequently or extensively certain issues like national identity are discussed, with what intensity the future prospects of technology are described, or what words are used to paint a picture of the future, such as whether a particular technological development “can” or “will” happen. This open coding resulted in a long list of codes that were grouped together, yielding both new themes and themes deduced from adjacent research projects like Burri’s (2015). This process, however, was not linear, and themes emerged during the coding process, not just at the end. After every new document was processed, and especially after shifting from the U.S. to the EU documents, interesting themes like the focus on avoiding or managing risk became apparent and warranted circling back to look for more empirical clues in the documents.
Most of this process was done with the help of the QDA software MaxQDA; however, the data collection sheet always served as a quick reference. When deemed necessary, as in the case of managing versus avoiding risk, a version of concept maps (Rivas, 2012) was utilized: a theme that seemed significant in some way was placed at the center of the map, the map was then split into two sides (U.S. and EU), and contrasting qualities and features that emerged out of the codes were put on the corresponding sides. This helped to visualize the comparative contrast, make sense of the codes, and aid the writing of the comparative part of the thesis. For other, less extensive parts of the comparative analysis, grouping and color coding in MaxQDA were relied upon.
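The two-sided concept map described above can be sketched as a simple data structure. The central theme and side entries below are illustrative examples drawn from the themes named in this chapter, not a reproduction of the actual maps:

```python
# A minimal sketch of a two-sided concept map: a central theme with
# contrasting qualities placed on the U.S. and EU sides.
# Entries are illustrative, not the thesis's actual codes.
concept_map = {
    "theme": "risk handling",
    "U.S.": ["managing risk", "balancing benefits and costs"],
    "EU": ["avoiding risk", "precaution"],
}

def render(cmap):
    """Return a small text rendering of the map, theme first."""
    lines = [f"Theme: {cmap['theme']}"]
    for side in ("U.S.", "EU"):
        lines.append(f"  {side}: " + "; ".join(cmap[side]))
    return "\n".join(lines)

print(render(concept_map))
```

Placing contrasting code qualities side by side in this way makes the U.S.-EU comparison legible at a glance, which is the purpose the hand-drawn maps served in the analysis.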
5. Artificial Intelligence and the United States
With about half a trillion U.S. dollars in total gross domestic research and development (R&D) expenditures (National Science Board, 2018), the United States has led the world in R&D spending for decades. It is also home to a majority of the largest AI companies as well as the biggest AI startup community (Lemaire et al., 2018). While the U.S. government as a whole does not have a coordinated national strategy concerning its investments in AI technology or its approach to the potential societal challenges of AI, the Executive Office of the President was among the first government bodies worldwide to spearhead a push to govern AI technology, releasing a series of three comprehensive documents in 2016 and 2017.
The reports followed three previous White House reports on big data released between 2014 and 2015 and represent part of a larger effort to build a policy approach that addresses cutting-edge technologies. The documents were developed by the National Science and Technology Council's (NSTC) Subcommittee on Machine Learning and Artificial Intelligence. The Office of Science and Technology Policy (OSTP) led a series of public outreach activities (Brummel & Clamann, 2018) to engage with experts and the public and gather input for the reports (NSTC, 2016a, p. 1). These activities covered the following themes:
- AI, Law, and Policy (May 24, 2016)
- AI for Social Good (June 7, 2016)
- Future of AI: Emerging Topics and Societal Benefit at the Global Entrepreneurship Summit (June 23, 2016)
- AI Technology, Safety, and Control (June 28, 2016)
- Social and Economic Impacts of AI (July 7, 2016)
At these events, speakers from academia, ICT companies such as Microsoft and Google, and government agencies such as the Defense Advanced Research Projects Agency (DARPA), the Intelligence Advanced Research Projects Activity (IARPA), the OSTP, and various state departments participated in keynotes and panel discussions that explored the potential future of AI. These workshops were labeled as ‘public workshops’: members of the public could register, and the keynotes and panel discussions were either livestreamed or uploaded as videos afterwards.
The first report, Preparing for the Future of Artificial Intelligence, was aimed at generating recommendations related to AI regulation, public R&D, ethics, and safety. Its companion report, the National Artificial Intelligence Research and Development Strategic Plan, outlined a strategy for publicly funded R&D in AI. Two months later, a third document, titled Artificial Intelligence, Automation, and the Economy, focused more specifically on the impact of automation on the economy and society, while making policy recommendations intended to increase the benefits of AI and mitigate its costs. Due to the United States' leading position relative to other global regions and countries, these documents also served as an impulse for the global governmental community to either develop strategies of their own or position themselves in the context of AI technology, which makes the series of policy papers a crucial point of analysis.
Since 2016, no coordinated effort has been made to put AI investments or legislation into a comprehensive framework (Future of Life Institute, 2019), but AI was featured in the National Security Strategy (United States, 2018b) in relation to its role in helping the U.S. lead in technological innovation as well as its role in information statecraft, weaponization, and surveillance (Future of Life Institute, 2019). Furthermore, for the first time, AI was specifically discussed in the National Defense Strategy (United States, 2018a), where it is labeled as one of the technologies that will change the character of war (Future of Life Institute, 2019). In addition to some AI-related bills being introduced at the state and local levels, especially concerning self-driving mobility, several bills addressing the technology were introduced in the United States House of Representatives and Senate, mandating advisory committees and panels to explore advances in AI (Future of Life Institute, 2019). However, these initiatives remain fragmented throughout the government, and the three documents released during the Obama administration remain the most comprehensive and wide-ranging high-level policy papers produced thus far.
The following sections will discuss each of these three documents in turn along the three previously outlined dimensions: first, the dominant visions, images, and ideals associated with technological innovation in AI; second, the expectations of benefits and risks projected onto AI; and third, how the policy documents imagine the governance of AI and the role society is imagined to play in its assessment, development, and regulation.
1 Machine learning is the ability of algorithms to learn by processing data without being explicitly programmed and therefore to adapt to new circumstances, distinguish and predict patterns (Russell & Norvig, 2010). The term machine learning encompasses neural networks but also includes other computational learning techniques like reinforcement learning or evolutionary algorithms.
2 Simple neural networks have been developed since the 1950s; however, the most significant advances of the last decade have been driven by the use of multi-layered ‘deep’ neural networks (Bostrom, 2014).
3 Each node receives a signal either directly from the input or from other nodes, computes a weighted sum, and sends an output signal if a certain threshold is reached. These weights and thresholds are trained by looking at their contribution to the difference between the final network output and the correct answer and adjusting them accordingly.
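The node computation in the footnote above can be sketched in a few lines. The inputs, weights, and threshold below are arbitrary illustrative values rather than trained ones:

```python
def neuron(inputs, weights, threshold):
    """A single artificial node: compute the weighted sum of the
    inputs and emit an output signal (1) if it reaches the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Weighted sum = 1.0*0.6 + 0.5*0.8 = 1.0, which reaches the threshold 0.9.
print(neuron([1.0, 0.5], [0.6, 0.8], threshold=0.9))  # -> 1
# Weighted sum = 0.1*0.6 + 0.1*0.8 = 0.14, below the threshold.
print(neuron([0.1, 0.1], [0.6, 0.8], threshold=0.9))  # -> 0
```

Training, as the footnote describes, would adjust the weights and threshold according to their contribution to the network's output error; that step is omitted here.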
M.A. Stefan Raß (Author), 2019, Visions of Artificial Intelligence, Munich, GRIN Verlag, https://www.grin.com/document/1001872