This paper examines an algorithmically-led decision process designed to select an (almost) perfectly demographically representative cross-section for a Jury Venire using Big Data.
Drawing on the framework of Lepri et al. (2017), this paper identifies that "dark sides" such as privacy violations, informational opacity, and discrimination are likely to apply to such a (yet) hypothetical Big Data Jury Venire selection process. Addressing the question of how this selection process could be positively disrupted, the paper finds that policies akin to those proposed by Lepri et al. (2017) would address the majority of the problems identified.
Further research will be required to illuminate additional potential dark sides, to define more general, positively disruptive policies, and to specify concrete policy suggestions.
Table of Contents
1 Introduction
2 Key Concepts
2.1 Big Data
2.2 Algorithm
2.3 Positive Disruption
3 Big Data Jury Venire Selection
3.1 Context and Outline of Hypothetical Jury Venire Selection Algorithm
3.2 Social Good – The “Bright Side”
3.3 A “Dark Side” Diagnosis of Algorithmic Big Data Jury Venire Selection
3.3.1 Computational Violations of Privacy
3.3.2 Information Asymmetry and Lack of Information
3.3.3 Social Exclusion and Discrimination
3.4 Prescription for Positive Disruption
3.5 Evaluation and Summary
4 Conclusion
Research Objectives and Themes
This paper investigates the ethical and legal implications of implementing an algorithmically-led decision process for Jury Venire selection using Big Data. The primary research question addresses how such a selection process can be positively disrupted to mitigate identified "dark sides" while maintaining potential benefits for the judicial system.
- Analysis of algorithmic decision-making in the context of US court jury selection.
- Identification of "dark sides" including privacy violations, information asymmetry, and discrimination.
- Evaluation of policy prescriptions based on user-centric data management and algorithmic transparency.
- Examination of the tension between distributive fairness and procedural fairness.
- Requirement for public explanation and design accountability in algorithmic justice.
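At its core, the hypothetical selection process described above amounts to drawing a venire whose composition mirrors the demographic make-up of the jurisdiction. A minimal, purely illustrative sketch of such stratified random sampling is given below; all group names, population shares, and pool data are hypothetical, and the paper's actual proposal is not tied to this implementation:

```python
import random

# Hypothetical demographic strata with their share of the jurisdiction's
# population (illustrative numbers only).
population = {
    "group_a": {"share": 0.55, "pool": [f"a{i}" for i in range(1000)]},
    "group_b": {"share": 0.30, "pool": [f"b{i}" for i in range(1000)]},
    "group_c": {"share": 0.15, "pool": [f"c{i}" for i in range(1000)]},
}

def draw_venire(population, size, seed=None):
    """Draw a venire whose composition mirrors the population shares."""
    rng = random.Random(seed)
    venire = []
    for group, info in population.items():
        # Seats allocated to this stratum, rounded to the nearest integer.
        seats = round(size * info["share"])
        # Random draw within the stratum keeps selection impartial.
        venire.extend(rng.sample(info["pool"], seats))
    return venire

venire = draw_venire(population, size=100, seed=42)
```

Even this toy version makes the paper's tension concrete: the demographic shares must come from somewhere, and sourcing them from Big Data is precisely what raises the privacy and discrimination concerns diagnosed in section 3.3.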
Excerpt from the Book
3.3.1 Computational Violations of Privacy
A computational violation of privacy means that inferences about otherwise undisclosed, private information are made using newly available behavioural data and computational possibilities (compare Lepri et al. 2017: 9).
Such a computational violation of privacy might occur indirectly, as the data would be sourced from external, non-court providers. Further, providing courts with access to (bright) Big Data makes not only indirect but also direct computational violations of privacy a possibility.
So far, courts have stuck with "dim" data – that is, basic and often outdated information, as described in section 3.1. This is in line with jurors' expectation of being treated as anonymous numbers (Ferguson 2016: 982). Jurors only have to disclose further private information after being summoned, during the Voir Dire, in order to determine their fit for the case.
On the surface, jurors' expectations regarding the treatment of their privacy seem to be met. However, Voir Dire can be lengthy and invasive, and wealthier litigants will invest in investigating potential jurors beforehand, for example by googling them or driving by their residences (ibidem: 937, 983, 985-986).
Summary of Chapters
1 Introduction: Introduces the research topic and defines the research question regarding the positive disruption of Big Data Jury Venire selection.
2 Key Concepts: Clarifies foundational terms including Big Data, the nature of algorithms, and the theoretical framework of positive disruption.
3 Big Data Jury Venire Selection: Outlines the hypothetical selection algorithm and provides a critical analysis of its benefits and "dark sides" regarding privacy, transparency, and discrimination, followed by policy recommendations.
4 Conclusion: Summarizes the findings and emphasizes the necessity for future research into data literacy and systemic algorithmic accountability.
Keywords
Algorithm, Big Data, Positive Disruption, Jury Selection, Privacy, Transparency, Discrimination, Algorithmic Accountability, Data Literacy, Procedural Fairness, Distributive Fairness, Judicial System, Jury Venire, Computational Violation, Data-Driven Decision-Making
Frequently Asked Questions
What is the primary focus of this paper?
The paper examines how a hypothetical, algorithmically-led Big Data system used for Jury Venire selection in US courts can be "positively disrupted" to minimize ethical harms.
What are the core thematic areas?
The central themes include algorithmic accountability, privacy risks, the transparency of automated decision-making processes, and the pursuit of social good in judicial systems.
What is the central research question?
The core research question is: "How can Big Data Jury Venire Selection be positively disrupted?"
Which methodology is employed in this research?
The paper utilizes a normative and analytical approach, building upon the theoretical framework of Lepri et al. (2017) regarding data-driven social good to evaluate the proposed jury selection system.
What topics are discussed in the main section?
The main section covers the context of current jury selection, the potential "bright side" regarding efficiency and representative fairness, and the "dark side" diagnoses including privacy, transparency, and discrimination.
Which keywords best characterize this work?
Key terms include Algorithm, Big Data, Positive Disruption, Jury Selection, Privacy, and Discrimination.
How does the paper propose to handle privacy concerns?
It proposes shifting the court away from being a primary data collector to using secure, third-party platforms that grant access to necessary information without exposing excessive personal data.
Why is "algorithmic illiteracy" considered a problem in this context?
Because it creates a barrier to public understanding of how decisions are made, which increases opacity and undermines trust in the judicial system.
What role does "strict scrutiny" play in the discussion?
It is used to analyze whether the potential for discrimination in the algorithmic selection of jurors would be legally justifiable under existing US constitutional protections.
- Cite this work
- Maike Heideke (Author), 2019, The Positive Disruption of Big Data Jury Venire Selection, München, GRIN Verlag, https://www.grin.com/document/499629