
Fiat Iustitia. An Investigation of the Acceptance of Artificial Intelligence in Jurisprudence


Master's Thesis, 2024, 81 Pages, Grade: 2,3

Author: Ronja Boldt

Economics - International Economic Relations

Artificial Intelligence in the Courtroom: Why Lawyers Hesitate and How That Can Change

Whether in private everyday life, in industry, or in medicine: artificial intelligence (AI) has long since ceased to be a distant prospect. Yet in one area, technological progress seems to be taking hold only hesitantly: the legal world. Why do lawyers and legal professionals meet AI with skepticism? And what would have to happen for that to change?

This thesis gets to the bottom of precisely these questions. It offers a well-grounded introduction to the topic of artificial intelligence in law, explains central concepts, surveys possible applications, and names the challenges facing legal tech, that is, AI in legal practice. The focus is above all on the people behind the robe: what keeps them from using digital tools that could speed up proceedings and noticeably ease their everyday work?

To explain this resistance, the thesis draws on the Technology Acceptance Model of Venkatesh and Bala (2008), a model that analyzes user behavior toward new technology and shows how perception, usefulness, and expectations shape acceptance. The author goes a step further and combines these insights with the nudge theory of Thaler and Sunstein (2008), whose central thesis is that people can be steered toward desired decisions by subtle impulses, so-called "nudges," without forcing those decisions upon them.

In an empirical study, lawyers from various fields were surveyed. Two groups were given different information, with a striking result: even small, deliberately placed pieces of information about the everyday usefulness of AI could positively influence perception in the short term. A clear connection emerged between perceived usefulness and the willingness to accept AI systems.

This thesis is more than a theoretical treatise: it is a practice-oriented source of impetus for anyone who wants to actively shape digitalization in the legal system. It offers not only explanatory models but also concrete approaches for how, with psychological sensitivity, the door to digital transformation can be opened even in the conservative legal environment.

Excerpt


List of Figures

List of Tables

List of Abbreviations

Zusammenfassung

Abstract

1. Introduction

2. Introduction to Artificial Intelligence

3. Automation in the Legal Field

4. Acceptance Theory

5. Nudge Theory

6. Empirical Survey

7. Conclusion

Appendix

Bibliography


List of Figures

 

Figure 1: Theoretical Framework (Graphic based on Venkatesh & Bala, 2008, p. 276)

Figure 2: Court Decision with AI support by gender

Figure 3A: Affinity for Technology among participants aged 45–64 and their support for AI

Figure 3B: Affinity for Technology among participants aged 18–24 and 35–44 and their support for AI

List of Tables

 

Table 1: Determinants of Perceived Usefulness (adapted from Venkatesh & Bala, 2008, p. 277)

Table 2: Determinants of Perceived Ease of Use (adapted from Venkatesh & Bala, 2008, p. 279)

Table 3: Nudge Mechanisms (based on Caraban et al., 2019)

Comparison of Non-Obvious Judgments

Table 4.1: Group A (without Feedback)

Comparison Obviousness with AI Support

Table 5.1: Group A (without Feedback)

Table 5.2: Group B (with Feedback)

Table 5.3: Group B (with Feedback)

List of Abbreviations

 

AI       Artificial Intelligence

CANX     Computer Anxiety

CPLAY    Computer Playfulness

CSE      Computer Self-Efficacy

ENJ      Perceived Enjoyment

HI       Human Intelligence

IT       Information Technology

PU       Perceived Usefulness

PEOU     Perceived Ease of Use

REL      Job Relevance

TAM      Technology Acceptance Model

TAM 2    Technology Acceptance Model 2

TAM 3    Technology Acceptance Model 3

Zusammenfassung

 

This thesis addresses the acceptance of artificial intelligence (AI) in the legal field. It first introduces AI: what is generally understood by the term and what forms it takes. It then discusses how AI can help in the legal field, better known as legal technology (legal tech), and the challenges facing the introduction of AI systems. Furthermore, the reasons for the strikingly low acceptance of AI among lawyers and legal professionals are examined. To counteract this attitude, the reader needs an understanding of several theoretical models. The Technology Acceptance Model of Venkatesh and Bala (2008) is described, which offers an explanatory approach to the user behavior toward new technologies that an introduction presupposes. The aim of this thesis is to investigate whether targeted incentives can steer how lawyers and legal professionals view AI. This incentive is based on an analysis of the nudge theory of Thaler and Sunstein (2008), who hold that human behavior can be influenced by the deliberate design of the environment. In a survey, the behavior and attitudes of lawyers and legal professionals from various specialist areas were examined. It was tested whether pointing out positive characteristics, such as easing everyday legal work, can positively reinforce the general mood toward AI. A comparison of two groups of participants showed that it is quite possible to set a short-term incentive that makes the participants question their own attitudes, which can be traced back to interesting correlations with the perceived usefulness of AI.
Finally, an outlook on possible further research in the field of AI in connection with the legal field is given.

Abstract

 

This thesis deals with the acceptance of artificial intelligence (AI) in the legal field. It first discusses what is generally understood by AI and what forms it takes. It then examines how AI can help in the legal field, better known as legal technology (legal tech), and the challenges facing the introduction of AI systems. The reasons for the strikingly low acceptance of AI among lawyers and legal professionals are also examined. To counteract this attitude, the reader needs an understanding of various theoretical models. The Technology Acceptance Model of Venkatesh and Bala (2008) is therefore described; it offers an explanatory approach to the user behavior toward new technologies that a successful introduction presupposes. The aim of this thesis is to investigate whether targeted incentives can redirect how lawyers and legal professionals view AI. This incentive is derived from an analysis of the nudge theory of Thaler and Sunstein (2008), who hold that people's behavior can be influenced by the deliberate design of their environment. A survey examined the behavior and attitudes of lawyers and legal professionals from various specialist areas. It tested whether highlighting positive characteristics, such as making everyday legal work easier, can positively reinforce the general attitude toward AI. A comparison of two groups of participants made clear that it is quite possible to set a short-term incentive that leads participants to question their own attitudes, an effect that can be traced back to interesting correlations with the perceived usefulness of AI. Finally, an outlook on possible further research at the intersection of AI and the legal field is provided.

1. Introduction

 

“The first step towards change is awareness. The second step is acceptance.”

 

Nathaniel Branden (Diamond, 2022)

 

The legal sector may seem a little antiquated to some people. The thought of the judiciary may conjure up an image of a wrinkled, elderly gentleman in a black and white robe with a judge’s gavel in his hand, looking down on the accused from his dark wooden throne. Or it may conjure the image of a student in a dark library, between towers of books and dusty bookshelves, memorizing legal texts in a pale cone of light.

 

Those images, however, are themselves antiquated and no longer reflect the reality of the courts. The law is on the verge of a major transformation. Already celebrated and normalized in many everyday areas, artificial intelligence (AI) is now slowly, but steadily, finding its way into the legal system. Artificial intelligence offers innovative solutions for lawyers, judges, and other legal professionals that can support their day-to-day work. With new technical possibilities, large amounts of data can be analyzed, legal documents reviewed, and the basis for decisions prepared. However, the introduction of such technology is not always met with enthusiasm. The revolution of automation through AI is viewed rather critically in the legal field.

 

To understand why legal professionals are more likely to accept or reject a technology, acceptance theory offers a valuable approach. It examines which factors determine whether new technologies are seen as useful. To bring legal professionals a little closer to AI, nudge theory offers an interesting approach. Nudge theory is based on the idea that a person’s behavior can be influenced by shaping the environment without imposing prohibitions or commands.

 

The question arises as to whether the targeted demonstration of positive characteristics of AI can overcome the initial skepticism in the conservative legal world. Artificial intelligence can change the efficiency and precision of legal practice to a significant extent, if it is accepted by lawyers.

 

This paper begins with an overview of AI, followed by a discussion of automation in the legal system and reasons for reservations about AI in the legal field. The theoretical foundations—the acceptance model and the nudge theory—on which the survey for this research is based are then discussed. The survey and the results obtained are then presented in detail.

2. Introduction to Artificial Intelligence

 

2.1 Historical Development

 

The definition of AI has changed significantly with respect to the transfer of intelligence to an artificial system. The term rests on the interpretation of the words "artificial" and "intelligence." An object, property, or behavior is considered artificial if it is "not natural, but reproduced using chemical and technical means, designed, manufactured or created based on a natural model" (Duden editorial team, 2024). The word intelligence comes from the Latin intelligentia, translated as "ability to understand, to comprehend" (Langenscheidt, 2024). It is understood as the possession or display of "the ability (of humans) to think abstractly and sensibly and to derive purposeful action from it" (Duden editorial team, 2024). This leads to a definition of AI as a form of intelligent action that is created or simulated by technical means, or as the ability to imitate intelligent behavior. However, what counts as intelligent or artificial depends largely on context and interpretation. New technology is often referred to as AI only until people understand how it works (Boddington, 2017).

 

In 1950, Alan Turing developed a procedure for testing intelligence, which compared the intelligence of a computer with that of a human (Copeland & Proudfoot, 2009). However, this test quickly fell into disrepute because, in contrast to a computer, human intelligence (HI) depends on interpersonal factors such as consciousness, emotion, and creativity (Bringsjord, Bello, & Ferrucci, 2003).

 

On the one hand, there is a lack of a time-independent interpretation of technological progress. On the other hand, there is no clear definition of intelligence per se. As technology advances, perceived intelligence increasingly approaches HI (Bostrom & Strasser, 2014).

 

The boom in AI research since 2010 is due to three factors:

 

·         the availability of large amounts of data that form the basis for cognitive processes;

·         the further development of algorithms, such as machine learning;

·         increased computing power, according to Moore’s Law (Peter & Gustav, 1999).

 

These factors have made it possible to put earlier research concepts into practice and make AI more widely accessible (Holdren et al., 2016).

 

2.2 Comparing Intelligence Level with Human Intelligence

 

The automation of intelligent behavior and the replication of HI aim to artificially replicate or surpass the human level of intelligence. Research has focused on the intellectual expansion and replication of humans, on robotics, embodied cognition, and emulating the human shape and form (Froese & Ziemke, 2009). Whether an algorithm is considered intelligent depends on the distinction between weak and strong AI. A weak AI has human-like characteristics in that it is capable of learning. To solve problems, the algorithm learns, makes continuous adjustments, and draws its own conclusions. Strong AI distinguishes itself from weak AI in that, in addition to intellectual and reason-based dispositions, it is able to emulate cognitive characteristics that define humans, such as emotions, intellect, and creativity (Kramer, 2009). Research is investigating how close the actions and thought processes of an AI system can come to human patterns. Originally, the focus was on systems that act like humans. The focus of today’s systems is rational logic and autonomous decision-making to achieve optimal solutions to specific problems (Kruse et al., 2015; Turing, 1950; van der Hoek, 2003).

 

Different forms of AI can be distinguished in a variety of ways, for example by the level of intelligence of the software compared to HI, the degree of human imitation, and the visibility of the AI to the end user (Goertzel, 2007).

 

Narrow AI       When AI is used for specific, clearly defined use cases that do not surpass human intelligence in those areas (AI < HI) (Goertzel, 2007).

 

General AI      An AI that reaches the intelligence level of a single human because it is capable in many areas but is not more intelligent than a group of humans (AI = HI) (Goertzel, 2007).

 

Super AI          An AI that surpasses the intelligence level of individuals or of an entire community (AI > HI) (Goertzel, 2007).

 

2.3 Computational Learning

 

Computational learning is a subfield of AI that deals with the modeling and analysis of machine learning algorithms. Systems based on these algorithms are self-learning: they recognize certain regularities in data and draw their own conclusions from them, modeled on learning in humans and animals (Bendel, 2019). The various forms of computational learning are discussed below.

 

Machine learning

 

Machine learning is a data modeling technique that uses training data to create a model that is a specific and limited abstraction of that data (Blum & Langley, 1997). The term learning refers to the fact that the technology analyzes the data and develops the model independently, without human intervention. This learning process consists of the model using the training data to create an abstraction model for solving a problem.

 

The resulting model can be considered a hypothesis to solve a problem (Kim, 2017). After the model is created, it is applied to real data, feeding the input data into the model and generating an output. This process is called inference (Kim, 2017). While the term AI covers a broad spectrum of technologies that exhibit some form of intelligence, machine learning is a specific subfield (Kim, 2017; Mohri, Rostamizadeh, & Talwalkar, 2012). Machine learning is used when intelligence is required, but physical laws and mathematical equations are not sufficiently precise (Kim, 2017).
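The train-then-infer cycle described above can be made concrete with a minimal sketch. The example below is our own illustration, not taken from the thesis; the names `fit_line` and `infer` and the data are hypothetical. The learning step builds an abstraction of the training data (here just a slope and an intercept), and inference applies that model to new input:

```python
def fit_line(xs, ys):
    """'Learning': build an abstraction of the training data,
    here a slope and an intercept found by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def infer(model, x):
    """'Inference': feed new input data into the learned model."""
    slope, intercept = model
    return slope * x + intercept

# Hypothetical training data: hours of document review vs. pages processed
xs = [1.0, 2.0, 3.0, 4.0]
ys = [10.0, 20.0, 30.0, 40.0]

model = fit_line(xs, ys)        # learning step
prediction = infer(model, 5.0)  # inference step
```

The model is only a limited abstraction of the four training points; whether it generalizes to genuinely new inputs is exactly the question machine learning practice has to answer.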

 

Supervised learning

 

In supervised learning, the system is trained, using instructions and guidance from a human. The information and instructions are fed in, using predefined parameters. The system generates an output based on this information. This procedure is gradually expanded through controlled interventions by a trainer (Zhu & Goldberg, 2009).

 

This form of computational learning creates a semantic network that, after several training runs, develops the ability to form connections. The trained system can then independently analyze new input data by comparing learned associations and providing probabilistic results (Jordan, 1992).
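One minimal way to picture this is a nearest-neighbour classifier; this is our own sketch with made-up labels, not an example from the thesis or its survey. The human supplies labelled examples, and the trained system then classifies new inputs by comparison with the learned associations:

```python
def predict(labeled_examples, x):
    """Classify x with the label of the closest labelled training example."""
    nearest = min(labeled_examples, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Labelled examples provided by a human trainer: (feature value, label)
training = [(1.0, "low risk"), (2.0, "low risk"),
            (8.0, "high risk"), (9.0, "high risk")]

label_a = predict(training, 1.5)  # close to the "low risk" examples
label_b = predict(training, 8.5)  # close to the "high risk" examples
```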

 

Unsupervised learning

 

In unsupervised learning, as the name suggests, the algorithm is trained without human guidance. The algorithm independently creates an abstraction from the given input data and provides a description for predicting new values (Hastie, Tibshirani, & Friedman, 2009).
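The idea can be sketched with a tiny one-dimensional k-means clustering (our illustration, assuming k = 2): no labels are supplied, and the algorithm forms its own abstraction, two cluster centres, from the raw input:

```python
def kmeans_1d(data, iters=10):
    """Cluster 1-D data into two groups without any labels."""
    centres = [min(data), max(data)]  # naive initialisation
    for _ in range(iters):
        groups = [[], []]
        for x in data:
            idx = 0 if abs(x - centres[0]) <= abs(x - centres[1]) else 1
            groups[idx].append(x)
        # move each centre to the mean of its group (keep it if the group is empty)
        centres = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centres)]
    return centres

data = [1.0, 1.2, 0.8, 8.0, 8.4, 7.6]
centres = sorted(kmeans_1d(data))  # two centres, near 1.0 and 8.0
```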

 

Deep learning

 

Deep learning is a special form of computational learning. It is based on artificial neural networks and includes several partially opaque layers. A system that builds on deep learning learns largely autonomously by optimizing weighting parameters to achieve better results. The actual learning process is carried out by the algorithm itself. However, one problem with deep learning is the lack of traceability of the network structure in the hidden layers, which requires trust in the results (Kim, 2017; Lillicrap et al., 2015; Zhang, Bengio, Hardt, Recht, & Vinyals, 2016).
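As a purely illustrative sketch (the weights below are arbitrary placeholders of our own; in deep learning they would be optimized automatically by the algorithm itself), a forward pass through two hidden layers looks like this, and it is precisely the intermediate values in `h1` and `h2` that are hard to trace:

```python
def relu(vec):
    """Standard activation: negative values are clipped to zero."""
    return [max(0.0, v) for v in vec]

def dense(inputs, weights, biases):
    """Fully connected layer: out_j = sum_i inputs[i] * weights[i][j] + biases[j]."""
    return [sum(i * w for i, w in zip(inputs, col)) + b
            for col, b in zip(zip(*weights), biases)]

# Arbitrary placeholder weights for a 2 -> 3 -> 2 -> 1 network
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
w2 = [[0.2, -0.4], [0.7, 0.1], [-0.3, 0.6]]
w3 = [[1.0], [-1.0]]

x = [1.0, 2.0]
h1 = relu(dense(x, w1, [0.0, 0.0, 0.0]))  # first hidden layer
h2 = relu(dense(h1, w2, [0.0, 0.0]))      # second hidden layer
y = dense(h2, w3, [0.0])                  # output layer
```

In practice the weighting parameters are adjusted by gradient descent on a loss function; the hidden activations remain opaque intermediate quantities, which is the traceability problem mentioned above.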

3. Automation in the Legal Field

 

Legal tech provides an automation option in which intelligent technologies can replace, or at least support, the manual elements of legal activities. This chapter discusses not only the difficulties of legal automation but also its considerable opportunities. The focus is on how automation can increase job satisfaction in legal teams while improving efficiency and accuracy, and on how it has fundamentally changed the way legal issues are approached.

 

3.1 Historical Background

 

3.1.1 Legal Tech - A new idea

 

Legal tech, short for legal technology, is the branch of information technology (IT) that deals with the automation of legal activities. The aim of this technology is to increase the efficiency of legal work. Benefiting from the immense potential of IT, the importance of legal technology for the legal system has been growing steadily for several years (Vogl, 2016).

 

There are many definitions of "legal tech." The term is not new, but its use has increased markedly in recent years, and its meaning has been in constant flux for decades. The use of technology in legal applications is experiencing a renaissance.

 

As early as the end of the 1940s, the American lawyer Lee Loevinger described the technical prerequisites for "thinking machines" (1949). However, Loevinger identified a major challenge in the actual implementation: the legal language. With its often vague legal terms, translation into variables did not succeed. The thinking machine was not able to find a solution for vaguely formulated legal principles and legal subtleties (Loevinger, 1949).

 

When there are innovations that could bring down an old, functioning system, this innovation sparks a discussion of pros and cons, opportunities and risks, supporters and opponents. Legal tech is sparking discussion in the legal field. Although the connection between technology and law is no longer a new topic, differences can already be seen between the past and the present. Technology was discussed in terms of making individual work steps easier, but the core of the current discussion is that entire work processes are based on algorithms (Fiedler, 1987).

 

3.1.2 Starting signal for a new era

 

The connection between law and computer science was already being explored in 1970. Wilhelm Steinmüller published the book EDV und Recht: Einführung in die Rechtsinformatik (EDP and Law: Introduction to Legal Informatics) and laid the foundation for legal informatics (Steinmüller, 1970). Herbert Fiedler also coined the term “legal informatics” and published a five-part series on the subject (Fiedler, 1970).

 

The Legal Information System for the Federal Republic of Germany (originally: JURIS; today: juris) was established in 1985 by the Federal Ministry of Justice and the Society for Mathematics and Data Processing. The Center for Economic and International Research was founded. It served as an information source for public bodies. Today, juris is a digital provider of legal and practical knowledge management in Germany (Juris).

 

In addition to some euphoria, there were critical voices. For example, Niklas Luhmann put forward the bold thesis that "law and data processing have as much to do with each other as cars and deer: mostly nothing at all, only sometimes they collide" (Grupp, AnwBl. 2014, 660; Konzelmann/Neuhorst, JurPC WebDok. 110/2003, para. 36). However, if one considers the multitude of ways in which automation has made work easier in the legal field, this idea can now be refuted (Graichen, 2022). Due to the technological progress that is creeping into many areas of everyday life, the legal field is also experiencing an upswing in the debate about the connection between computer science and law and about the use of AI in the legal system.

 

3.2 Why Legal Tech

 

The instinctive reaction of lawyers to technical innovations in the justice system is a defensive reaction. It is necessary to ask why this is the initial reaction when the legal area has been in constant change for centuries (Graichen, 2022). The necessity and purpose of digitization in the German justice system needs to be considered. Particular emphasis can be placed on the introduction of automation processes. Technological innovations should not be introduced for their own sake but rather to support legal processes and the justice system. If the focus of the technology is on making the work of judges easier, speeding up the process, or providing access to new sources of information, the introduction of automation makes sense (Graichen, 2022). Sensible innovations can make it easier for citizens to access the courts and increase their understanding of court proceedings. It is important to take into account the quality requirements of a functioning justice system.

 

3.2.1 Acceleration

 

“Justice delayed is justice denied.”

 

William Ewart Gladstone (Şirin, 2019)

 

William Ewart Gladstone was a British politician, four times prime minister, and is considered the most important statesman of the Victorian era (History of William Ewart Gladstone, GOV.UK, n.d.). The quotation emphasizes the urgency of faster proceedings: delayed justice is tantamount to denied justice. The efficiency of the judiciary and the enforcement of the right to a guaranteed justice system are important for modern and efficient procedural law (Gaier, 2018).

 

According to surveys by the Roland Legal Report, the majority of German citizens feel that court proceedings are too long. This perception has increased in recent years. Despite a decline in the number of pending cases in Germany, cases often take a long time to process, especially before higher courts (Wagner, 2017).

 

As an example, the court proceedings against the former manager of Hypo Real Estate, Georg Funke, who was accused of embellishing balance sheet figures, were discontinued on September 29, 2017, as the clarification of allegations within the 10-year statute of limitations in this case seemed questionable (Graichen, 2022). Was this delayed justice or tantamount to denied justice? How is it possible for a procedure to be delayed for so long?

 

The reason for the long duration of proceedings is the growing workload, which arises, for example, from extensive class action lawsuits or from numerous asylum procedures, to name just two examples (Molavi & Erbguth, 2019). Considering the impending wave of judge and other legal staff retirements, this problem is not likely to be remedied by simply increasing the number of staff (Molavi & Erbguth, 2019).

 

One approach to tackling these problems is the increased use of automation and legal tech. Such technologies can speed up processes, for example, by searching contracts and files for relevant information (Graichen, 2022). Administration is already working in an automated manner, which not only speeds up procedures but also reduces costs (Bundesregierung, 2016). If such technology is applied to court proceedings, this could result in a significant relief for judges. Machines can collect relevant case information. This could significantly shorten the duration of proceedings. The risk of statutes of limitations expiring or of compensation payments due to excessively long proceedings can be minimized (Krimphove & Niehaus, 2018).

 

While a quick decision is an important criterion for good jurisprudence, the quality of the decision must not be ignored. A certain minimum duration is just as important for decision-making, since the parties must be given a sufficient legal hearing. The parties need time to transform a personal conflict into a legal one and to accept a judgment. If proceedings are accelerated too much, legal peace could be endangered, as the parties need time to come to terms with the consequences of a proceeding. There is also the possibility that, after a certain amount of time, the parties come to a joint decision and find an amicable solution. Automated systems lack empathy and understanding for the human aspects of a case; yet these factors are of great importance in jurisprudence (Rafi, 2004).

 

Due to the challenges and impact of delays in court proceedings, automation in the justice system offers opportunities to increase efficiency. However, there is also a risk of shortening the time needed for a fair conflict resolution and neglecting the human aspect of jurisprudence.

 

3.2.2 Faultless

 

For a judgment to be perceived as fair, it must be considered to be free of errors. To counteract injustice, the quality of justice is essential with regard to the accuracy of judgments.

 

It is necessary to consider how important the correctness of judgments is, and whether any measure is justified to ensure a fair verdict. To guarantee legal certainty and procedural finality, court judgments and the true legal position may diverge; this phenomenon is accepted in order to place justice at the top of the judicial system (Rollberg & Universität Würzburg, 2020). This view is reflected in various legal principles. For example, civil law regulates formal and material legal force and prevents a dispute from being tried multiple times. In criminal law, an offender is protected from renewed prosecution once the criminal charge has been exhausted (Rollberg & Universität Würzburg, 2020).

 

Artificial intelligence appears to be free of errors. It is based on pre-programmed algorithms and calculations and can put together factual contexts. But this image of the supposedly error-free AI does not correspond to reality. Despite theoretical superiority, errors in programming cannot be ruled out. There are various reasons for possible errors. They can arise from incorrect training data, but also from technical problems. These potential errors can have serious consequences, since a much larger group would be affected by the errors if an AI were used. With a human judge, only individual cases are affected (La Diega, 2018).

 

3.2.3 Simplification

 

Automation in the legal field offers the possibility of automatically reviewing smaller legal cases. This is already being used in Germany, for example, in mass proceedings such as passenger rights, tenancy law, or traffic accidents. These programs allow a potential plaintiff to assess his or her chances of success in the event of a lawsuit. The burden on the judiciary is reduced, legal proceedings are accelerated, and the susceptibility to errors is reduced. One example of how the burden on the judiciary is reduced is the electronic dunning procedure under Sections 688 ff. of the Code of Civil Procedure (ZPO). Here, dunning applications must be submitted in machine-readable form, and processing is limited to checking formal requirements without examining the substance of the claim ("Entwurf eines Zweiten Gesetzes zur Modernisierung der Justiz (2. Justizmodernisierungsgesetz)", 2006).

 

Automation in administration makes work easier. The legislator enables fully automated administrative procedures through the following laws:

 

·         Section 35a of the Administrative Procedure Act (VwVfG)

·         Section 31a of the Social Code, Book X (SGB X)

·         Section 155 paragraph 4 of the Fiscal Code of Germany (AO)

 

In contrast to the dunning procedure, at least a superficial check of the requirements for sovereign measures is carried out (Graichen, 2022).

 

3.2.4 Access to legal protection

 

One of the cornerstones of a functioning constitutional state is the guarantee of legal protection (Schulze-Fielitz, 2004). According to Art. 19 Paragraph 4 of the Basic Law for the Federal Republic of Germany (GG), everyone has access to legal recourse. However, many citizens are reluctant to assert their rights due to the high or unforeseeable costs. This is demonstrated by the Forsa study from 2013 (Rollberg & Universität Würzburg, 2020). The use of legal tech can provide a remedy and improve access to justice (Wagner, 2017).

 

If a plaintiff sees a small individual claim in a lawsuit but a large overall loss, as can be observed, for example, in cases of antitrust violations or small claims based on unlawful general terms and conditions, many people shy away from filing a lawsuit. They therefore refrain from asserting their claims (Rollberg & Universität Würzburg, 2020). Artificial intelligence can help in such situations by facilitating access to the law and minimizing the risks of excessive costs. Various platforms support consumers in enforcing their rights. If an individual seeks legal advice, these services can cost several hundred euros. Automated platforms are free of charge. Costs only arise when the software confirms a high probability of success (Steinrötter, 2018). Automation processes therefore make a significant contribution to more effective legal protection for consumers. They support plaintiffs in the legal process even before a legal dispute arises.

 

3.2.5 Objectivity of decision-making and transparency

 

Through the use of AI, the expectation is growing that automation can increase equality in the application of the law (Rademacher, 2017). This hope arises from public criticism of inconsistent jurisprudence, although there is no evidence for such inconsistency (Ogorek, 2004).

 

Judges are seen as the epitome of objectivity and neutrality, but they are not immune to unconscious prejudice; they can be unconsciously guided by bias when making decisions (Jäger, 2018). A study found that, in addition to factors such as gender and social background, a judge's daily mood influences decision-making. Even hunger can have an effect: judges are more likely to decide in favor of defendants after a meal (Danziger et al., 2011).

 

Artificial intelligence may appear to be a solution to unconscious prejudice because it is supposedly free of subjective influences. AI systems might even be able to recognize human biases and alert their users. This could lead to more uniform and fairer decision-making, since such systems have no personal experiences or daily moods and can work consistently (Otto, 2019).

 

On the other hand, the question arises whether the use of AI would actually lead to fairer decisions. After all, programs are developed by people, whose values and prejudices can unconsciously flow into the AI during programming. The decision would then simply be transferred from the judge to the program and applied to the general public (Gless & Wohlers, 2019). Whether technological progress brings an advantage in terms of objectivity will only become clear in the future.

4. Acceptance theory

 

In addition to examining the technology itself, it is important to consider the factors that shape consumers' attitudes toward an object, their active use of the technology, and its accessibility. Acceptance theory from behavioral economics offers explanatory approaches that make it possible to examine, test, and explain these user attitudes.

 

4.1 Research

 

Acceptance theory rests on the idea that the acceptance of new technologies or products depends on their perceived benefit and how that benefit is evaluated. The theory offers an approach to examining the factors and influences that shape user behavior, taking into account aspects such as user-friendliness, perceived benefit, social norms, and consumers' personal experiences (Alle Aktien, 2024).

 

4.2 Definition of terms

 

“Acceptance refers to the active willingness to accept, voluntarily accept, acknowledge, approve or agree with someone or something” (Scheuer, 2020). The term is derived from the verb “accept,” which comes from the Latin accipere (to receive, to approve). The premise of acceptance is active consent to decisions, behavior, or certain conditions. It is important to emphasize that this consent is active rather than a subconscious process (Schubert & Klein, 2020). Brockhaus explains acceptance as “the affirmative or tolerant attitude of persons or groups toward normative principles or regulations, in the material area toward the development and dissemination of new technologies or consumer products” (2006). It requires active consent to the decisions or behavior of another person or group, or the conscious acceptance of given social, economic, or political conditions (Schubert & Klein, 2020).

 

The acceptance object is the material or immaterial object that is to be accepted or rejected; the acceptance subject is the person who accepts or rejects it. This always happens in a specific context: the relationship between acceptance object and acceptance subject varies with, and is influenced by, the acceptance context.

 

According to Hayes (2001), acceptance occurs “when

1. something is accepted willingly or with consent,

2. something is considered sufficient or adequate,

3. something is undertaken independently, [or]

4. something is accepted with favor.” (Klosa, 2016a)

 

4.3 Technology Acceptance Model

 

The Technology Acceptance Model (TAM) is a theoretical model that explains user behavior with respect to the acceptance of new technologies. It assumes that perceived usefulness and perceived ease of use are the decisive factors for acceptance.

 


 

Figure 1: Theoretical Framework (Graphic based on Venkatesh & Bala, 2008, p. 276)

 

4.3.1 The development of the Technology Acceptance Model

 

The Technology Acceptance Model (TAM) goes back to the model developed by Davis in 1989 and is used to predict the individual adoption and use of new information technologies. The model describes the intention to use an information technology and assumes that the attitude toward the technology depends on two determinants: perceived ease of use and perceived usefulness (Davis, 1989). The aim of the model is to predict how readily a product will be accepted, especially in a business context, and to identify potential design problems from that prediction (Mohd et al., 2011).

 

“Perceived usefulness: The extent to which a person believes that using an IT will enhance his or her job performance.

 

Perceived ease of use: The degree to which a person believes that using an IT will be free of effort.” (Venkatesh & Bala, 2008, p. 275).

 

The TAM assumes that the two determinants, perceived usefulness and perceived ease of use, can be considered independently, and that external factors feed into these determinants, which in turn shape the behavioral intention to use the technology.
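In empirical TAM studies, the relationship between the two determinants and behavioral intention is typically estimated from survey data with a regression model. The following sketch is purely illustrative and uses simulated data, not data from this thesis; the coefficient values (0.6 for perceived usefulness, 0.3 for perceived ease of use) are hypothetical assumptions chosen only to show the estimation mechanics.

```python
# Illustrative sketch (hypothetical data): estimating TAM-style weights by
# regressing behavioral intention (BI) on perceived usefulness (PU) and
# perceived ease of use (PEOU).
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated 7-point Likert responses from hypothetical survey participants.
pu = rng.integers(1, 8, size=n).astype(float)
peou = rng.integers(1, 8, size=n).astype(float)

# Assumed "true" relationship for the simulation: usefulness is weighted
# more heavily than ease of use (an assumption, not an empirical result).
bi = 0.6 * pu + 0.3 * peou + rng.normal(0.0, 0.5, size=n)

# Ordinary least squares: BI ~ intercept + PU + PEOU.
X = np.column_stack([np.ones(n), pu, peou])
beta, *_ = np.linalg.lstsq(X, bi, rcond=None)

print(f"intercept={beta[0]:.2f}, PU weight={beta[1]:.2f}, "
      f"PEOU weight={beta[2]:.2f}")
```

With enough respondents, the estimated weights recover the assumed relationship, which is the sense in which the two determinants can be examined independently: each receives its own coefficient in the fitted model.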


Details

Title
Fiat Iustitia. An Investigation of the Acceptance of Artificial Intelligence in Jurisprudence
College
University of Würzburg  (Chair of Applied Microeconomics, esp. Human-Machine Interaction)
Course
Master-Thesis VWL
Grade
2,3
Author
Ronja Boldt (Author)
Publication Year
2024
Pages
81
Catalog Number
V1592537
ISBN (eBook)
9783389134733
ISBN (Book)
9783389134740
Language
English
Tags
Künstliche Intelligenz im Recht Künstliche Intelligenz Artificial Intelligence AI KI Jurisprudence Acceptance of AI Acceptance of Artificial Intelligence Legal Tech Human Machine Interaction Computational Learning Maschinelles Lernen Acceptance theory Technology Acceptance Model TAM TAM3 Psychological Acceptance Interpersonal Acceptance Rejection Theory Anthropomorphism Dual Process Theory Nudge Theory Behavioral Economics Libertarian Theory Paternalism Decision Architecture Libertarian Paternalism Nudging Digital Nudging Nudging Mechanism Computer Anxiety Computer Playfulness Computer Self Efficacy Perceived Enjoyment Human Intelligence Information Technology Perceived Usefulness Perceived Ease of Use Job Relevance Akzeptanztheorie
Product Safety
GRIN Publishing GmbH
Quote paper
Ronja Boldt (Author), 2024, Fiat Iustitia. An Investigation of the Acceptance of Artificial Intelligence in Jurisprudence, Munich, GRIN Verlag, https://www.grin.com/document/1592537