Table of Contents
2. State of research
3. Agency theory
3.1. Agency theory in political sciences
3.1.1. Theoretical context
3.1.2. Principal-agent problems
3.1.3. Control methods
3.2. Agency theory with artificial agents
3.2.1. Definitions and agency of algorithms
3.2.2. Principal-agent problem with artificial agents
3.2.3. Control methods
4. Analysis of Automated decision-making in the public sector
4.1. Case-study: France
4.1.1. Public institutions in France
4.1.2. Control methods
4.1.3. Algorithms in the French public sector
4.2. Changes in agency when ADM is used in the public sector
4.2.1. Potential risks of using artificial agents
4.2.2. Responsibility of decisions taken with the help of ADM
4.2.3. Consequences on the organisation of administrations
4.3. Changes in control methods when ADM is used in the public sector
4.3.1. Selection and development of the agent
4.3.2. Supervision and monitoring of the agent
7.1. Graph: The technological framework behind Automated decision-making
7.2.1. Interview 1
7.2.2. Interview 2
7.2.3. Interview 3
7.2.4. Interview 4
In 2016, France’s second biggest city, Marseille, created a Big Data Observatory for Public Tranquillity, a platform collecting and analysing data to support the work of the municipal police. One of its four main goals is to anticipate future or probable situations by using an algorithm to assess risks and plan adequate responses. This initiative is an example of predictive policing, which is “the application of analytical techniques to identify likely targets for police intervention and prevent crime” (Lum and Isaac 2016, 16). Although such practices can improve the efficiency of police forces, and therefore the general security of the population, human rights organisations have criticized them for reinforcing discriminatory biases and threatening fundamental rights and the protection of personal data (La Quadrature du Net 2018). For example, a concern for ethnic minorities is that place-based predictive tools will prioritise policing in areas that are already overpoliced (Williams and Kind 2019, 25). The use of analytics can also lead to so-called ‘chilling effects’, where the constant feeling of being observed results in self-censorship and changes in citizens’ behaviour (Schlehahn et al. 2015, 2).
Predictive policing is one aspect of a technique that has been increasingly developed in public administrations in recent years, which we refer to as automated decision-making (ADM). ADM is a type of algorithm that supports decision-making by combining advanced analytics and data mining to make predictions. The degree of automation can vary, depending on the degree of human involvement in the decision-making process: we can then speak of ‘semi-automated’ or ‘augmented’ decision-making. As these algorithms sometimes rely on machine learning (ML) – meaning that they can learn automatically and independently over time – they are considered part of the field of artificial intelligence (AI). AI is here to be understood as the capacity of a machine to resemble human intelligence abilities (Castelluccia and Le Métayer 2019, 4). ADM has been developed in various public sector fields, from justice to healthcare, and increasingly helps public agents by delivering predictions and analyses that they can leverage to make their decisions. This technique involves three main stakeholders: the programmer of the algorithmic system, who can be directly employed by the administration or work for a private company; the user, who is the public agent operating the ADM system; and the individuals affected by the decisions made using ADM. Although this technology, when used by the public sector, can affect broad aspects of citizens’ lives, ranging from welfare benefits to medical operations or legal decisions, there appears to be a lack of debate surrounding the transformations it can bring to administrations. This paper therefore focuses on the consequences for the governance and responsibility of administrations that increasingly rely on algorithms to make their decisions. Does the introduction of ADM in public administrations transform their agency?
If so, why does this change occur and how does it impact the control methods required to supervise the actions of administrations? The approach chosen to answer these research questions is agency theory, which is suited to dealing with delegation, specifically between actors from different contextual backgrounds (public institutions, private companies, citizens…). This theory relies primarily on the relationship between an agent and a principal: the agent is mandated to make decisions or take actions on behalf of, or that impact, the principal. A principal-agent problem occurs when these two entities have conflicting interests and the principal cannot directly ensure that the agent is acting in their interest. To address this problem, the principal can put control methods in place to make sure the agent’s values and interests align with their own. Principal-agent relationships, and their corresponding control methods, exist in the public sector, and this paper studies how they may be impacted by the introduction of an artificial agent through the implementation of ADM. France has been chosen as the case study for this topic, as it has put in place relevant laws and public institutions to deal with public ADM. While the government published its AI strategy, entitled ‘AI for Humanity’, in 2018 (Villani et al. 2018), the legislative branch passed several laws to regulate the use of AI in the public sector, and an institution called Etalab provides support to administrations in making their algorithms more transparent and understandable. The method chosen to investigate this issue is based on a literature review, which is appropriate for approaching a case study. This includes scientific papers for the technical aspects, from computer science to social and political sciences, as well as reports from governments, international institutions and private companies.
More general literature, such as articles and blog posts, is used for information on the use of ADM in France and the public debate surrounding it. Finally, the methodology also includes semi-structured interviews conducted with experts working on the topic of ADM in the public sector. Some of the interview partners are data scientists working at the IT consulting company Capgemini and were contacted internally. The others include a member of Etalab, the French agency in charge of opening public data and public algorithms, and a member of the French National Council for Digital. These interview partners were contacted using contact information found online. An active effort was made to strike a gender balance among the people contacted for an interview, although this is not reflected in the final list of interviewees, which depended on the positive answers received. The interviews were transcribed in written form and can be found in the annexes of this paper.
This paper is divided into two main parts. The first part introduces agency theory and its application to two relevant aspects: agency theory in the public sector and agency theory involving artificial agents. The second part aims to answer the research questions by discussing the changes in the agency of public administrations, as well as the changes in the control methods used to monitor these administrations. Finally, the conclusion summarizes the answers to the research questions, exposes the implications and limits of this paper and offers leads for possible future research on this topic. As this paper brings together notions from various fields, it contains several subchapters in order to make its structure more visible and understandable.
2. State of research
In the last decade, AI has been a topic of research for an increasing number of disciplines. It used to be confined to computer science, but, as it increasingly shaped and influenced our analogue world, AI became of interest to political, legal and social scientists alike. A new area of research called Critical Algorithm Studies has emerged, which investigates algorithms as social concerns. Researchers study the legality of decisions made using AI systems (Sauvé 2018, Martini 2019, Duclercq 2019), and the political and social implications of ADM for individuals and our society as a whole (Rouvroy 2011, Ensmenger 2012, Morozov 2013, O’Neil 2016).
Agency theory has been used in political sciences since the 1970s to investigate the power relations between the electorate, elected politicians and public administrations (Weingast 1984, Miller 2005, Lane 2013). The theory has also been used in computer science to explore principal-agent problems arising when companies work with artificial agents (van Rijmenam et al. 2019). Some computer scientists have been working on the issue of teaching AI systems what to value (Dewey 2011, Yudkowsky 2011, Soares 2016), in order to make sure that their interests stay aligned with human interests (Soares and Fallenstein 2015). Others have been studying the ethics of making decisions with the help of AI, and the possibility of algorithms taking into account the desires, goals and preferences of individuals while simultaneously learning what those preferences are (Abel et al. 2016). This paper aims to bridge the gap between agency theory in political sciences and the study of artificial agents in computer science. The goal is to investigate whether the introduction of AI in administrations has an impact on the parameters of agency theory, and if so, what the consequences of this change are.
3. Agency theory
3.1. Agency theory in political sciences
In this part, agency theory as used in political sciences is presented independently of ADM; it will serve as the theoretical basis for the rest of the paper.
3.1.1. Theoretical context
In political sciences, agency theory is part of a larger set of models describing social structures known as “rational choice” (Kiser 1999, 146). Rational choice is “a theory of action that sees individual self-interest as the fundamental human motive and traces all social activities back to acts of rational calculation and decision-making that are supposed to have produced them” (Scott 2015). Social structures are then understood as an aggregation of individual interests, and institutions as tools to achieve them.
Agency theory applied to public administrations also draws on the findings of the sociologist and political economist Max Weber, particularly known for his work on bureaucracy. One of the significant issues Weber raised was whether it is possible for elected politicians to control appointed bureaucrats and, if not, whether the power of these bureaucrats could be a threat to democracy. Weber identified that the power of bureaucratic organisations stems from their knowledge: both technical knowledge and the knowledge acquired through experience in the service (Weber [1922] 1978, 225). This leads the political master – whether it is “…the ‘people’ equipped with the weapons of legislative initiative, (…) or a parliament elected (…) or a popularly elected president…” – to always be, “vis-à-vis the trained official, in the position of a dilettante facing the expert” (ibid., 991-992). In addition, this power is maintained through administrative secrecy as well as through the dependence of the ruler, or the people, on the bureaucracy (ibid., 994). It is to explore this issue of the dependence of elected politicians on bureaucrats that political scientists began, in the 1970s, to apply agency theory to contemporary political science (Kiser 1999, 154). Weber also explored the means developed by rulers to maintain control over bureaucrats, such as processes of selection, monitoring and the creation of relations of interdependence (Weber [1922] 1978, 264, 224, 1043). These were later identified as possible control methods for solving principal-agent problems, once political scientists started using agency theory. According to Kiser, the main difference between economists’ use of agency theory and Weber’s approach lies in the role that Weber gives to non-instrumental motivations, such as culture and the legitimacy of the ruler (Kiser 1999, 161-162).
In this paper, we apply the principal-agent theory to the public sector of contemporary liberal democracies, where there is a separation of powers between the different bodies of the state, as introduced by Montesquieu in The Spirit of the Laws in the 18th century. Jan-Erik Lane defines the public sector as “state general decision-making and its outcomes” (Lane 1993, 14). It has both an authority and legislation side, most often embodied by the government and parliament, and a budget and allocation side, managed by public administrations. Public administration is a concept with no definition on which all scholars agree, but we retain here the one put forward by Dwight Waldo: “public administration is the organization and management of men and materials to achieve the purposes of government” (Waldo 1955, 2). Additionally, the public sector is said to follow the concept of rational action, as its actions are designed to maximize the achievement of public goals. However, according to Waldo, awareness of these goals differs across the levels of administration, with top bureaucrats being more aware of them than machine-operators are (ibid., 4). Moreover, the internal and external environments of public administrations are mostly nonrational, as each person has their own cultural conditioning and personal idiosyncrasies (ibid., 13). It is then the task of the leader, who is highly trained to achieve such goals, to build rationality into her organisation, so that these goals can be achieved even if all agents may neither know nor care about them (ibid., 4).
Unlike private organisations, public administrations are said to act according to the public interest. This notion is difficult to define, as the two concepts it combines are in constant tension: an interest, selfish or altruistic, is what a single person wishes, whereas the public refers to a collective entity (Lane 1993, 7). On the one hand, for administrative rationalists, such a public interest exists and can be determined through the rationalization of the decision-making process. Following this line of thought, the ideal for an administration would be automation, and “the administrative man turns out to be a robot” (Schubert 1957, 349). On the other hand, administrative realists argue that public administrations are made up of the aggregation of the individual interests and egoistic behaviours of administrators (Olson 1965), which makes it impossible to define one single public interest that all should be working for. Lane therefore argues for a definition of the state “as a set of institutional mechanisms for the aggregation or coordination of interests into collective decisions and outcomes” (Lane 1993, 10). At the heart of the operations that take place in public institutions is a set of contracts that follow the principal-agent framework: politicians, bureaucrats and administrators are the agents employed to act in the interests of the principals, the citizens (ibid., 7).
3.1.2. Principal-agent problems
Agency theory deals with the social relationship of delegation: an actor, the principal, who lacks the skills or capacities to perform a particular action, obtains the services of another actor, the agent, to perform that action in return for remuneration. This agreement can occur a single time or extend over a long period. For Coleman, the principal, when using his resources – for example, money – to achieve his interests even though he lacks other resources, such as particular skills, seeks “a kind of extension of self” (Coleman 1990, 146). Principal-agent problems arise whenever the economic interaction between two parties entails considerable transaction costs, as well as from the coordination difficulties involved in collective action (Lane 1993, 114).
The political system is made up of a vertical chain of principal-agent relationships: between the electorate and elected politicians, between these politicians and bureaucrats, and finally between the different layers of bureaucracy, down to the lowest-level bureaucrats who deliver services directly to citizens (Moe 1984, Michaud 2017). Most actors in this hierarchy play a dual role, both as principal and as agent (Moe 1984, 766). The main assertion of agency theory is that, when the agent has an information advantage over the principal, she uses it to obtain benefits that she would not get if the principal were as informed as she is (Michaud 2017). Asymmetric information means that it is difficult for the principal to know whether the actions taken by the agent are the correct or desired ones, or whether the principal should call for alternative courses of action that the agent ought to take. For example, if a politician argues that the actions desired by the electorate were not feasible under the given circumstances, it can be difficult for citizens to check whether that is true (Lane 1993, 115). Similarly, it is difficult and expensive for politicians to obtain precise information on the actions and interests of bureaucrats, such as their true performance, personal goals and policy positions (Moe 1984, 766).
Asymmetric information can lead to two main problems for principals employing agents: moral hazard and adverse selection (Lane 2013, 86). Moral hazard arises when the principal cannot know whether the agent will do her best to achieve the required task, while adverse selection stems from the fact that, when selecting an agent, the principal is not informed enough about the abilities of potential agents to ensure that his choice is the right one. In politics, moral hazard can be seen in the electorate’s inability to know which actions politicians should take in order to fulfil the promises on which they were elected. Adverse selection, for example, happens when the electorate votes for a politician who ends up following his own interests rather than those of the voters (Lane 1993, 115). In bureaucracy, asymmetric information can allow a long-term contracting bureau (the agent) to capture rent without necessarily making the efforts expected by the principals, which can lead to inefficiencies or overstaffing (Lane 2013, 88). New public management aspires to fight such inefficiencies by introducing shorter contracts and increased outsourcing. However, this creates a new layer of principal-agent problems between the bureaux and the private companies they employ. The losses imposed on principals to align the agent’s interests with their own are called agency costs. With politicians and bureaucrats as agents, the electorate, as principal, faces two kinds of agency costs. Firstly, the citizens have to pay for the remuneration of the agents – political and administrative elites – who perform tasks for them. Secondly, they have to cover the indirect costs derived from the agents’ mistakes or poor performance. These costs can translate into economic difficulties for the citizens, or even losses of national assets in the context of wars (ibid., 86).
One of the issues regarding agency theory in politics is that there is “no clear consensus about who the principal that is supposed to control bureaucratic agencies [is]” (Kiser 1999, 156), because there are multiple principals. For example, in democratic politics with party competition, no party wants the others to have too much control over the bureaucrats. The separation of powers means that no single institution has complete responsibility for controlling administrations. In the end, this competition tends to strengthen the power and autonomy of bureaucrats, as they can use information asymmetries to their own advantage (Moe 1984, 768-769).
3.1.3. Control methods
Agency theory introduces control methods that principals can use to ensure that agents’ interests align with their own. In economics, these methods most often concern the remuneration and recruitment of agents.
For citizens, control methods are the way to ensure that the politicians they elect respect their interests. However, monitoring the executive entails prohibitively large costs for the constituents. It could be argued that technology improves monitoring by making it easier to verify whether the agent’s actions align with the public interest. However, Miller argues that recent examples in politics show that this is not the case and that “the information asymmetry between executive and public is too profound to be resolved by better monitoring” (Miller 2005, 208). In contemporary liberal democracies, political opportunism is limited by tools of the rule of law such as parliamentary opposition, civil society involvement, limits on mandates and the judicialization of politics (Lane 2013, 88). A higher number of public institutions introduces competition and lowers the asymmetry of information for the electorate. In parliamentary regimes, the parliament and the government control each other, which lowers the agency costs borne by citizens. Additionally, public law restrains the actions of ‘bad politicians’ through concepts such as predictability, transparency, counterweighing powers and fairness (ibid., 89). The justice system also allows for better control over the actions of the agents, as in the member states of the European Union (EU), where the Charter of Fundamental Rights introduces a ‘right to good administration’. This means that every person has the right to have his or her affairs handled impartially, fairly and within a reasonable time, and it also includes the obligation of the administration to give reasons for its decisions.
For elected politicians who oversee the public administrations working for them, control methods include budgets, slack, policy, career opportunities and security (Moe 1984, 764). The personal remuneration of bureaucratic agents rarely comes into play because, contrary to private organisations, public administrations have no economic residual: there is no economic surplus or profit, regardless of the agent’s performance. However, in order to control the work of bureaucrats, politicians, if they are not concerned with economic efficiency, can use slack, which is the difference between the budget allocated and what the bureau actually spends. This leads bureaucrats at the top to encourage other civil servants to be more productive, because they know that they will then be in a position to capture the remaining slack. This method can also be used by subordinates towards higher-ranking bureaucrats, when they ask for more budget than they need. This can lead to a vicious circle of increased inefficiency, which in turn requires increased internal control (ibid., 763). Additionally, individual selection is not a control method that politicians can exercise, as the hiring, firing and promotion of bureaucrats is usually structured by formal career systems (ibid., 768). Politicians are, however, the decision-makers with the authority to determine whether or not to employ a certain bureau (ibid., 761). They can also reward, with budgets and programs, administrations that do well, and sanction those judged to be doing a bad job (Weingast 1984, 155). However, researchers argue that elected politicians do not actually need to constantly monitor every action taken by bureaucrats, as they have what Weingast calls a “decibel meter” (ibid., 182): they can evaluate agency performance by listening to their constituents’ feedback instead of through in-depth study.
Similarly, Miller speaks of a “fire alarm” that constituents can pull when bureaucrats fail to supply them with the services they want (Miller 2005, 210). Thus, as long as the population is not complaining about the actions of administrations – meaning that neither their re-election nor their own interests are threatened – politicians can rely on bureaucratic discretion and do not need any specific control method (Moe 1984, 767).
Applied to political sciences, agency theory thus reveals the dynamics between several layers of principals and agents, from the electorate down through the different levels of the bureaucratic hierarchy. Some of its findings support Weber’s views on bureaucracy, but studies have shown that his fear of bureaucrats being a threat to democracy was mostly unwarranted, as politicians and the electorate do have ways to control and monitor their actions. The theory allows political scientists to investigate the Weberian asymmetry, but it additionally reveals that institutional interdependence is more the norm than previously thought.
3.2. Agency theory with artificial agents
In this part, we investigate to what extent agency theory can be applied to artificial agents, in particular ADM algorithms.
3.2.1. Definitions and agency of algorithms
AI and ADM have already been defined in the introduction of this paper. ADM is a specific type of algorithm, which the Cambridge Dictionary defines as “a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem”. In the field of mathematics, however, algorithms have been difficult to define with precision. Gurevich argues that the notion of algorithm cannot be rigorously defined, as it is an expanding notion, and draws a parallel with the notion of number:
“Many kinds of numbers have been introduced throughout history: natural numbers, integers, rationals, reals, complex numbers, (…) etc. Similarly, many kinds of algorithms have been introduced. In addition to classical sequential algorithms, in use from antiquity, we have now parallel, interactive, distributed, real-time, analogue, hybrid, quantum, etc. algorithms. (…) The notions of numbers and algorithms have not crystallized (and maybe never will) to support rigorous definitions.” (Gurevich 2012, 32)
In Algorithm = Logic + Control, Kowalski describes algorithms as being made of two components: logic, which dictates what should be done, and control, which specifies how it should be done (Kowalski 1979, 435). Finally, Castelluccia and Le Métayer define an algorithm as “an unambiguous procedure to solve a problem or a class of problems (…), composed of a set of instructions or rules that take some input data and return outputs” (Castelluccia and Le Métayer 2019, 3). Moreover, the algorithms we study in this paper cannot be defined solely from a mathematical perspective. As they are designed by humans, they cannot be isolated from their political, social and economic context (Ensmenger 2012, 25). Indeed, technological artefacts are not neutral but inherently political, as they might support certain political structures or facilitate certain actions (Winner 1980).
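Kowalski’s distinction can be made concrete with a toy sketch of our own (illustrative only, not drawn from his paper): the logic component fixes what counts as a correct answer, while two interchangeable control components fix how candidate answers are searched.

```python
# Logic: WHAT counts as a solution (here, a positive square root of 144).
def is_answer(x):
    return x > 0 and x * x == 144

# Control, strategy A: HOW to search -- try the smallest candidates first.
def search_ascending(candidates, pred):
    for x in sorted(candidates):
        if pred(x):
            return x
    return None

# Control, strategy B: HOW to search -- try the largest candidates first.
def search_descending(candidates, pred):
    for x in sorted(candidates, reverse=True):
        if pred(x):
            return x
    return None
```

Both strategies find the same answer (12, for candidates drawn from range(-20, 21)); changing the control component alters how the computation proceeds, not what result is correct, which is precisely the separation Kowalski describes.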
Algorithms are distinguished according to their level of autonomy in determining the rules that constitute the sequence of operations. Deterministic algorithms are sequences of fixed operations, which always produce the same result when given the same input. Probabilistic algorithms are also sequences of fixed operations, but they introduce randomized treatment, which leads to varied and random results. ML algorithms determine, from the analysis of a given database, the rules to follow in order to attain a set goal, without needing a human to describe what those rules should be. Instead of being hand-coded by a programmer, the algorithm is generated automatically from data (Castelluccia and Le Métayer 2019, 3). It does this by identifying patterns and translating them into statistical models that give informed estimates of the correct categories of newly input data. The behaviour of such an algorithm can therefore change over time. When the data is labelled, the learning of the algorithm is said to be supervised; otherwise it is unsupervised. Unsupervised algorithms determine the classifications that will form the basis of the rules, and those classifications are constantly adjusted depending on intermediate results (ENA 2019, CNIL 2017, Thapa 2019). Although different types of ML systems can potentially be found in the public sector, supervised ML is today the most relevant to the government sphere (Thapa 2019, 12).
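The contrast between a deterministic, hand-coded rule and a supervised ML rule can be sketched minimally in Python. The benefit-eligibility scenario, the function names and the figures below are hypothetical illustrations of our own; real ADM systems use far richer statistical models.

```python
# Deterministic rule: hand-coded by a programmer; same input, same output.
def deterministic_rule(income: float) -> bool:
    """Grant a benefit if income is below a fixed, human-chosen threshold."""
    return income < 1000.0

# Supervised ML (minimal sketch): the rule is derived from labelled data.
# Here the threshold is "learned" as the midpoint between the average income
# of past granted and past denied cases, instead of being hand-coded.
def learn_threshold(examples):
    """examples: list of (income, was_granted) pairs from past decisions."""
    granted = [x for x, y in examples if y]
    denied = [x for x, y in examples if not y]
    return (sum(granted) / len(granted) + sum(denied) / len(denied)) / 2

past_cases = [(500, True), (700, True), (900, True),
              (1400, False), (1600, False), (1800, False)]
threshold = learn_threshold(past_cases)  # derived from data, not hand-coded

def learned_rule(income: float) -> bool:
    return income < threshold
```

If the past cases change, the learned rule changes with them, while the deterministic rule stays fixed until a programmer rewrites it; this is the behavioural drift over time described above.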
In this paper, we consider algorithms that help in, or automate, the process of decision-making, most of the time through predictions. For this type of algorithm, we use the extended working definition set out in the algo:aware report Algorithmic decision-making:
“A software system (…) that, autonomously or with human involvement, takes decisions or applies measures relating to social or physical systems on the basis of personal and/or non-personal data, with impacts either at the individual or collective level” (algo:aware, 7)
These algorithms can be categorized by their level of human involvement. Decision-support algorithms simply inform a human decision-maker, without making the decision, while decision-making algorithms result in a decision that is fully automated. The latter type can itself be divided into three categories: human-in-the-loop algorithms, which follow specific human instructions; human-on-the-loop algorithms, where an overseeing human can override the algorithm; and fully autonomous algorithms, operating without human supervision (algo:aware, 11-12).
Graph 1. Levels of autonomy and of human involvement in algorithms
[Figure not included in this excerpt]
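The levels of human involvement above can be sketched as follows. This is a hypothetical illustration of our own: risk_score stands in for any underlying predictive ADM model, and the 0.5 cut-off is arbitrary.

```python
def risk_score(features):
    """Hypothetical stand-in for any predictive ADM model."""
    return sum(features) / len(features)

# Decision support: the algorithm only informs; the human makes the decision.
def decision_support(features, human_decides):
    return human_decides(risk_score(features))

# Human-on-the-loop: the algorithm decides, but an overseer may override.
def human_on_the_loop(features, override=None):
    algorithmic_decision = risk_score(features) > 0.5
    return algorithmic_decision if override is None else override

# Fully autonomous: no human involvement in the individual decision.
def fully_autonomous(features):
    return risk_score(features) > 0.5

# Human-in-the-loop (not sketched): the algorithm would additionally act
# only on specific human instructions at each decision step.
```

The same model output leads to different final decisions depending solely on where the human sits in the loop, which is why the classification matters for questions of agency and responsibility.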
These types of algorithms and ML processes belong to ‘weak/narrow AI’. Unlike the ‘general/strong AI’ depicted in apocalyptic science-fiction scenarios, narrow AI solves particular problems, in which it can surpass the human brain; its intelligence, however, concerns only that one problem, unlike general human intelligence. Although less threatening than general AI, narrow AI can still have an important impact on the lives of individuals, especially in the public sector. Its use in public administrations does not only concern the agency of the bureaucracy, but also has repercussions for the other principals: political decision-makers and citizens. It also involves an additional stakeholder, the programmer, who can then, knowingly or unknowingly, affect the interests of administrations, politicians and citizens.
The use of the term ‘agent’ for an algorithmic system is debatable. From a juridical point of view, a machine cannot be considered an agent, as it is neither a natural nor a legal person. However, this research focuses on the political sciences view of agency theory, which defines the agent as an entity delegated to act on behalf of another, the principal. In the case of ADM, the program is constructed with the explicit intention of acting like a person, in the name of another. Rammert backs this view of algorithms by describing how technical artefacts, such as computer programs, have “[lost] their passive, blind and dumb character and [gained] the capacities to be pro-active, context-sensible and cooperative” (Rammert 2008, 4). He argues that, when parts of a technical system have the ability to behave in a flexible way depending on interactions with their environment, and when they search for new information to choose their behaviour, it makes sense to use the vocabulary of agency in the world of artefacts (ibid., 5). Carl Mitcham distinguishes two waves of technological artefacts whose agency is of a different nature. The first wave is made up of intentionless artefacts, such as simple and controllable algorithms, which merely extend human agency and therefore only have “secondary agency”. The second wave introduces artefacts as “delegated agents”: they act with delegated agency within a range of possible actions (Mitcham 2014, 13-21). This appears to be the case for the algorithms used nowadays, which can thus be considered agent-like artefacts. Finally, just as human agency is distributed inside a public administration, for example between the different civil servants, there is a “distributed agency” between humans and algorithms, as well as between algorithms themselves (Rammert 2008, 17).
With ADM, it is not one human on her own who makes the decision anymore, but it is also not a machine acting alone: the agency and responsibility are delegated “across a mixed network of human and algorithmic actors” (Mittelstadt et al. 2016, 12).
3.2.2. Principal-agent problem with artificial agents
Nick Bostrom analyses the control problem which arises when introducing artificial agents and divides it into two different principal-agent problems. The first one occurs between two humans, when the person who commissions the development of an AI program differs from the one programming it. This is a classic principal-agent relationship, which can be controlled through safeguards such as rigorous personnel selection and supervision. These come at a cost, but ignoring them can lead to complications if the developed AI algorithm does not follow the rules that the commissioner intended. The second principal-agent problem is more specific to artificial agents, as it occurs between the programmer and the intelligent system that she created. The programmer faces this issue when wanting to ensure that the system she builds will not harm the project’s interests (Bostrom 2016, 155-156). Especially in the case of ML algorithms, it can be difficult to make sure that the program follows the interests of the project for which it was created. Three types of opacity can make it difficult to understand the workings of an algorithmic system and therefore lead to asymmetric information and principal-agent problems. The first type is intended opacity, with ‘black box algorithms’11 that do not reveal how they arrive at a certain decision. The second type is due to technical illiteracy: even if the code of an algorithm is published, only a few specialised programmers are able to understand it. Finally, the third type results from the complexity of algorithms, especially ML systems, which makes them nearly impossible to audit without major costs (Burrell 2016, 3-5). The first and second types of opacity are problematic for the people affected by ADM. The third type is due to the fundamental nature of decision-making algorithms and also raises an understandability problem for the programmers.
Indeed, as “machine optimizations based on training data do not naturally accord with human semantic explanations” (Burrell 2016, 10), programmers may then be unable to know if the actions taken by the machine follow their intended goal.
When it comes to their interests, artificial agents, by nature, cannot be considered in the same way as human agents. In the case of narrow AI, an algorithm cannot have interests of its own that it would put forward instead of executing the tasks it has been programmed for. Nevertheless, an interesting debate emerges from the fact that the calculations it performs and the rules it follows are not always clear. With decreasing human supervision, algorithms increasingly risk spinning out of human control, diverging from the path they were intended to follow. One famous example of this was the chatbot Tay, developed by Microsoft to have conversations with Twitter users. The chatbot account had to be shut down after just sixteen hours, as it had started tweeting racist and violent messages, tarnishing the company’s public image (Hern 2016). Whether by design or by accident, algorithmic systems can end up going against the interests of their sponsor. A key challenge for programmers is therefore to introduce the right values into algorithms, so that both the agent’s and the principal’s interests can be respected.
One way to designate values for an algorithm to follow is via a utility function. This consists of assigning a value to each possible outcome and instructing the agent to maximise the expected utility. The agent will therefore select the action with the highest utility (Bostrom 2016, 226). However, operational parameters are specified by programmers with desired outcomes in mind “that privilege some values and interests over others” (Mittelstadt et al. 2016, 1). Additionally, this assignment of values becomes more problematic as the complexity of the goal increases. The difficulty lies in the definition of the goals to give to the artificial agent, because human values - such as happiness, justice, human rights or democracy - are complicated to translate into computer code. Indeed, the challenge that programmers face is the fact that “the definition must bottom out in terms that appear in the AI’s programming language, and ultimately in primitives such as mathematical operators (...)” (Bostrom 2016, 227). This leads to a “value-loading problem”, which intensifies as the artificial agent becomes increasingly intelligent and which should focus the attention of programmers and mathematicians in the years to come. Indeed, it will have to be solved before an AI has developed enough reason to understand our human intentions and can therefore refuse to align with our values (ibid, 229). Similarly, a software application lacks “common sense” (Martini 2019, 59) and cannot independently reflect on its results, as it only establishes connections between data and combines old knowledge into new. The task of weighing up ethics and morals cannot be transferred to such systems and must remain under the control of humans (ibid, 49).
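The mechanism of a utility function described above can be illustrated with a minimal sketch. This is a toy model, not taken from the cited literature: the action names, probabilities and utility values are purely hypothetical, and serve only to show how an agent mechanically selects the action with the highest expected utility, privileging whichever values the programmer encoded.

```python
# Toy sketch of decision-making via a utility function (hypothetical values).
# Each action maps to a list of (probability, utility) pairs for its outcomes.

def expected_utility(action, outcomes):
    """Sum the utility of each possible outcome, weighted by its probability."""
    return sum(p * u for p, u in outcomes[action])

def choose_action(actions, outcomes):
    """Select the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes))

# Illustrative example: the programmer's chosen utilities decide the outcome.
outcomes = {
    "approve": [(0.9, 10), (0.1, -50)],  # mostly good, small chance of harm
    "reject":  [(1.0, 0)],               # safe but yields nothing
}
best = choose_action(["approve", "reject"], outcomes)
# expected_utility("approve") = 0.9*10 + 0.1*(-50) = 4.0, so "approve" wins
```

The sketch makes the value-loading problem concrete: everything the agent ‘cares about’ is reduced to the numeric utilities the programmer hard-coded, and any human value that cannot be expressed as such a number is invisible to the agent.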
3.2.3. Control methods
Control methods are required in order to make sure that the actions of the artificial agents align with the interests of the principal, in this case, the programmer. One way to control the actions of an algorithm is to first develop it in a laboratory, a small field study or a ‘sandbox’, which is a controlled and limited environment, and to observe its behaviour. The algorithm is allowed to leave this secure environment once it behaves in a friendly, cooperative and responsible manner. Such a behavioural method allows programmers to make projections about its future reliability (Bostrom 2016, 142, 157). The difficulty facing computer researchers at the moment is how to make highly reliable agents that not only behave well in test settings, but also continue working as intended in application, aligned with the goals set by humans (Soares and Fallenstein 2017, 105). Additionally, one of the challenges ahead is defining what a ‘good decision’ is. In decision theory, this requires identifying and defining ‘available actions’, as well as their consequences, and translating them into computer language (ibid, 107).
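The sandbox idea can be sketched in code. The following is a hypothetical harness, not an implementation from the cited sources: an agent is run against controlled test cases, its rule violations are logged, and it is ‘released’ from the sandbox only if its behaviour stays within a tolerated violation rate.

```python
# Hypothetical sketch of sandbox behavioural testing: run the agent on
# controlled inputs and only release it if it respects the allowed decisions.

def run_in_sandbox(agent, test_cases, tolerance=0.0):
    """Return (released, violations): released is True only if the
    fraction of rule violations is within the given tolerance."""
    violations = []
    for inputs, allowed in test_cases:
        decision = agent(inputs)
        if decision not in allowed:
            violations.append((inputs, decision))
    rate = len(violations) / len(test_cases)
    return rate <= tolerance, violations

# Illustrative agent: classifies a numeric risk score (purely invented rule).
compliant_agent = lambda score: "low_risk" if score < 5 else "high_risk"
cases = [(2, {"low_risk"}), (7, {"high_risk"})]
released, violation_log = run_in_sandbox(compliant_agent, cases)
```

The sketch also shows the limit noted above: the harness can only check behaviour on the test cases the programmers thought of, so good sandbox behaviour does not guarantee good behaviour in application.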
For programmers to understand how an algorithm came to a certain result and therefore control its functioning, it is important for ADM systems to be transparent. This means that the datasets, as well as the processes of data gathering and data labelling, should be documented to the best possible standard, in order to identify the reasons for a certain decision and help prevent future mistakes. This traceable transparency can additionally facilitate audits and the explainability of algorithms for principals other than programmers (AI HLEG 2019, 18). The difficulty with transparency is that algorithmic systems only reveal what they are designed to reveal. In a situation where conditions are too different from the ones originally programmed, the information given by the algorithm may be inadequate to the task. Moreover, as datasets and functions of ADM systems expand, information tends to accumulate, overloading actors and making it increasingly complicated to separate signal from noise. This creates additional informational asymmetries (Lipartito 2010, 36). It can make it difficult, or sometimes even impossible, to notice errors in algorithmic systems, even when testing is done properly. Errors may then only be revealed later, when the algorithm is used on real-life cases, where they can have dire consequences. This difficulty is all the more present in ML systems and, in some areas, researchers are choosing to renounce using them entirely because of the risks they carry. For example, with self-driving cars, some programmers consider ML to be unreliable and possibly dangerous because they do not necessarily know what it learns. They find it safer to use manual programming to make sure they can control how the system develops (Both 2014). For principals other than the programmers, it is also important to reduce the opacity of algorithms in order to control them.
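The traceable transparency discussed above can be sketched as an audit trail: each automated decision is recorded together with its inputs and parameters, so that auditors can later reconstruct why a result was produced. The rule, field names and values below are purely illustrative, not drawn from any real ADM system.

```python
# Hypothetical sketch of traceable transparency: every decision is logged
# with its inputs and parameters so audits can reconstruct the reasoning.
import datetime
import json

def decide_and_log(applicant, threshold, audit_log):
    """Apply a simple illustrative eligibility rule and record an audit entry."""
    decision = "eligible" if applicant["income"] < threshold else "not_eligible"
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": applicant,
        "parameters": {"threshold": threshold},
        "decision": decision,
    })
    return decision

audit_log = []
result = decide_and_log({"id": "A1", "income": 12000}, 20000, audit_log)
record = json.dumps(audit_log[-1])  # entry can be serialised for auditors
```

Even such a log only captures what it was designed to capture, which is precisely the limit of transparency noted above: conditions the programmers did not anticipate leave no usable trace.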
For intended opacity, regulations can demand more transparent algorithms (Martini 2019, 44), while opacity due to technical illiteracy can be reduced through education and talent development that raise programming literacy. However, the third type of opacity is more complex to tackle. Burrell argues that there is a need for cooperation between experts from various fields - legal scholars, social scientists, computer scientists - as well as people who experience the consequences of the algorithmic decisions, some of which programmers might not have anticipated (Burrell 2016, 10).
1 https://www.marseille.fr/prevention/sécurité-et-prévention/big-data-de-la-tranquillite-publique (April 27, 2020)
2 An official document detailing this project was published by La Quadrature du Net and is accessible here: https://www.laquadrature.net/files/CCTP_ObservatoireBigData_Marseille.pdf (April 27, 2020)
3 More information on these laws and on Etalab can be found in the part: 4.3.1. Selection and development of the agent.
4 The research for this paper was done while the author was working at Capgemini, which is why it is cited in this paper as an example for a company developing ADM for public administrations.
5 A similarly detailed structure is often found in works applying AI to political, legal or social sciences, such as in: Martini 2019.
6 A reading list on this area of studies has been published by Gillespie and Seaver on: https://socialmediacollective.org/reading-lists/critical-algorithm-studies/ (April 13, 2020).
7 New public management “refers to the broad reform of public-sector management since the 1980s by introducing private-sector practice, strengthening line management, establishing systems of performance management, and exposing public-sector organizations to competition” (Heery and Noon 2017).
8 The rule of law “...[embodies] three concepts: the absolute predominance of regular law, so that the government has no arbitrary authority over the citizen; the equal subjection of all (including officials) to the ordinary law administered by the ordinary courts; and the fact that the citizen's personal freedoms are formulated and protected by the ordinary law rather than by abstract constitutional declarations” (Law and Martin 2009)
9 EU Charter of Fundamental Rights, art. 41.
10 In the case of supervised ML, the programmer defines the input and output data and knows which results she wants the machine to obtain. In unsupervised ML, there is only input data, without any corresponding output variables, and there is no correct answer: the machine itself discovers interesting structure in the data.
11 An algorithm is defined as being a black box when we do not have any knowledge of its internal workings: only inputs and outputs can be studied to understand how it functions.
- Hortense Fricker (Author), 2020, Automated decision-making in the public sector. Artificial Intelligence vs Administrative Intelligence?, Munich, GRIN Verlag, https://www.grin.com/document/972247