ETHICS OF AI IN COMMUNICATION: A REVIEW OF BIAS, TRANSPARENCY, AND ACCOUNTABILITY
*Anyanwu, L.
Department of International Communication Management, The Hague University of Applied Sciences, Netherlands.
ABSTRACT
The rapid integration of artificial intelligence (AI) into communication processes has transformed how information is disseminated and received, yet it raises significant ethical concerns surrounding bias, transparency, and accountability. This review paper explores these critical dimensions of AI ethics, aiming to highlight their implications for public trust and the integrity of communication systems. We define ethical AI and examine the challenges of implementing these principles within AI-driven communication frameworks. A detailed analysis of biases, including algorithmic, data, and cultural biases, illustrates their detrimental effects on message delivery and audience engagement. Furthermore, the paper delves into the necessity of transparency, emphasizing the importance of explainability in AI operations and the challenges posed by proprietary technologies. The paper also addresses accountability, identifying the roles of stakeholders and the need for robust regulatory frameworks. This review examines how bias, transparency, and accountability intersect, revealing systemic challenges and offering novel recommendations for improving ethical AI practices. Ultimately, the paper advocates for a proactive approach to ethical dialogue, highlighting the need for continuous research and discussion as AI communication technologies evolve.
Keywords: Artificial Intelligence, Ethics, Communication, Bias, Transparency
1.0 INTRODUCTION
Artificial intelligence (AI) has transformed the communication landscape with tools that enable personalized messaging, content moderation, automated responses, and audience segmentation. Advancements in natural language processing (NLP) and machine learning (ML) allow AI systems to analyze vast amounts of data, delivering content accurately and responding dynamically to user input (Kaplan & Haenlein, 2019). This shift has enhanced efficiency and targeting across sectors like media, customer service, social media, and marketing (Brennen, 2020). For instance, AI-driven chatbots improve response times and customer satisfaction in service settings (Adamopoulou & Moussiades, 2020). Likewise, content recommendation algorithms on social media and streaming platforms tailor suggestions to user preferences, boosting engagement (Van Dijck et al., 2018). However, the rise of AI in communication brings ethical concerns. AI systems often propagate biases, lack transparency, and have insufficient accountability mechanisms, threatening public trust (Floridi & Cowls, 2019). Issues with biased language models and opaque algorithms illustrate the potential harm of neglecting ethical considerations (Bender et al., 2021). Addressing these ethical dimensions is vital for maximizing AI's benefits and mitigating risks (Crawford, 2021).
The significance of ethics in AI-driven communication is heightened as these systems influence social interactions, public opinion, and personal choices (O’Neil, 2016). Without ethical guidelines, AI risks undermining trust due to algorithmic bias, lack of transparency, and poor accountability (Whittaker et al., 2018). Algorithmic bias can inadvertently reinforce stereotypes or marginalize groups, evident in biased content moderation on social media (Noble, 2018). Transparency is critical; AI systems often act as "black boxes," obscuring decision-making processes and fostering misinformation (Pasquale, 2015). Moreover, accountability is complex, raising questions about responsibility for autonomous systems' actions (Rahwan et al., 2019). Given these ethical challenges, it is essential to examine AI ethics in communication. Ethical AI practices can help foster trust, prevent harm, and ensure that technologies serve the public good (Whittaker et al., 2018). Addressing issues of bias, transparency, and accountability allows ethical frameworks to steer the careful deployment of AI in communication systems, striking a balance between innovative practices and ethical duties (Floridi, 2018).
This paper aims to review the ethical considerations surrounding AI in communication, focusing on bias, transparency, and accountability. Drawing on the literature, real-world case studies, and regulatory efforts, this review highlights challenges and potential solutions related to ethical AI practices. It explores the sources and consequences of algorithmic bias, investigates the need for transparency, and discusses accountability mechanisms in AI deployment (Binns, 2018). The primary objective is to provide a framework for addressing ethical concerns in AI communication to ensure responsible and equitable technology use. As AI evolves, embedding ethical principles in communication technologies is vital for fostering a just and trustworthy digital environment (Eubanks, 2018).
2.0 Ethics of AI in Communication
2.1 Defining Ethical AI
Ethical AI refers to a framework for developing artificial intelligence that aligns with fundamental human values such as fairness, accountability, transparency, and responsibility. As AI increasingly mediates communication, ethical AI is vital for guiding its actions and decisions. Fairness aims to minimize biases in AI algorithms to ensure equitable treatment for all users (Binns, 2018). Bias can result in discrimination, marginalizing specific groups based on socio-demographic or racial factors (Noble, 2018). Therefore, ethical AI frameworks emphasize creating algorithms that do not disadvantage particular groups (Crawford, 2021).
Transparency involves making AI processes understandable and explainable, especially when AI delivers information to the public. This fosters trust by clarifying decision-making processes (Pasquale, 2015). Accountability focuses on identifying responsibility for the actions and potential harms caused by AI systems (Floridi & Cowls, 2019). Finally, responsibility entails proactively identifying and mitigating risks in AI deployment, ensuring practices align with social and moral standards (Rahwan et al., 2019). These principles aim to build trust in AI systems and ensure technology serves the public good (Whittaker et al., 2018).
2.2 Challenges of Ethical Implementation
Despite clear ethical principles, integrating them into practice presents significant challenges. Algorithmic bias remains a primary obstacle, with AI systems inheriting or amplifying biases from training data, leading to discriminatory outcomes. Noble (2018) shows how search engine algorithms perpetuate racial stereotypes, while Bender et al. (2021) highlight that models like OpenAI’s GPT-3 reproduce existing biases, potentially causing harm in communication contexts.
Achieving transparency is difficult due to the complex nature of many AI systems. The "black box" issue complicates users' understanding of decision-making, leading to mistrust (Pasquale, 2015). This lack of transparency is especially problematic in automated content moderation, where users may not know why posts are flagged (Van Dijck et al., 2018). Explaining AI decisions is challenging due to the proprietary nature of algorithms and complex machine learning processes (Burrell, 2016).
Accountability is also complex, as it can be difficult to assign responsibility for AI outcomes. Diakopoulos (2016) discusses challenges in identifying accountability in automated news generation, where inaccuracies may lead to unclear blame among developers, organizations, or the AI itself (Rahwan et al., 2019). The absence of standardized regulations further complicates this issue, as companies often self-regulate their ethical AI practices (Whittaker et al., 2018).
Finally, implementing ethical frameworks involves technological and operational challenges. Ethical considerations, such as bias testing and transparency protocols, require substantial resource investment, which some companies view as a barrier to AI adoption (Mittelstadt et al., 2016). Also, the evolving nature of AI ethics, without universally accepted standards, leaves developers to interpret guidelines independently (Floridi, 2018).
3.0 Bias in AI-Driven Communication
3.1 Types of Bias
In AI-driven communication, bias can arise from algorithmic, data, and cultural sources. Algorithmic bias originates from the design of the algorithms themselves, potentially reinforcing societal biases. Binns (2018) notes that machine learning algorithms, relying on statistical correlations, may produce prejudiced outputs based on patterns in training data, leading to skewed decision-making and reinforcing stereotypes, particularly in diverse communication platforms (Noble, 2018).
Data bias stems from the datasets used to train AI models. If training data is biased, overrepresenting certain groups or excluding minority perspectives, the AI will perpetuate these biases. Gebru et al. (2021) indicate that large language models often reflect and propagate societal biases. For example, image recognition systems trained on datasets favoring lighter-skinned individuals show lower accuracy for darker-skinned subjects (Buolamwini & Gebru, 2018).
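The disparities reported by Buolamwini and Gebru (2018) are typically surfaced through disaggregated evaluation, i.e., scoring a model per demographic group rather than relying on one aggregate figure. The minimal sketch below illustrates the idea in Python; the labels, predictions, and group tags are invented placeholders, not results from any real system.

```python
# Minimal sketch: disaggregated accuracy, the kind of check that reveals the
# accuracy gaps Buolamwini & Gebru (2018) report. All data are illustrative.
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Return overall accuracy plus accuracy broken down by group label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    overall = float((y_true == y_pred).mean())
    by_group = {
        str(g): float((y_true[groups == g] == y_pred[groups == g]).mean())
        for g in np.unique(groups)
    }
    return overall, by_group

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]
print(per_group_accuracy(y_true, y_pred, groups))
# Overall accuracy is 0.75, but it hides a gap: 1.00 for "lighter" vs 0.50 for "darker".
```

A large gap between groups signals that the training data or model underserves some users, which is precisely the form of data bias discussed above.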
Cultural bias occurs when AI systems trained on specific cultural data struggle to adapt to other norms. This bias can marginalize non-Western perspectives in global applications, as seen in AI-driven communication tools that may misinterpret or censor non-Western content (Schwartz et al., 2019).
3.2 Impact on Communication
Biases in AI-driven communication systems can significantly affect how information is disseminated and perceived. When used for message delivery or content recommendation, AI biases may skew public perception by prioritizing or suppressing certain messages. Zhao et al. (2017) illustrate this with image-captioning algorithms that reinforce gender stereotypes, influencing how demographic groups view themselves and others.
Content moderation is another area vulnerable to bias, where algorithms may unjustly flag certain cultural expressions, leading to biased censorship. Crawford and Paglen (2019) note that marginalized communities may use culturally specific language that is misinterpreted as offensive, resulting in uneven treatment and harm to free expression (Raji et al., 2020).
Target audience segmentation is similarly affected, as biased algorithms may exclude specific demographics from receiving certain information or ads. Ali et al. (2019) highlight that AI-driven ad delivery systems can inadvertently target or exclude users based on race or gender, reinforcing social inequalities and limiting exposure to diverse information (Speicher et al., 2018).
3.3 Case Studies
Several cases illustrate the real-world impacts of bias in AI communication. For instance, OpenAI's GPT-3 has been shown to produce outputs reinforcing racial and gender stereotypes, raising concerns for applications like customer service and content generation (Bender et al., 2021). Facebook’s content moderation system has faced criticism for discriminatory practices, with Eslami et al. (2018) finding that automated tools disproportionately flagged posts from Black and Muslim users, leading to distrust. This highlights the challenge of designing unbiased moderation algorithms. Google’s search algorithm exemplifies how data and algorithmic bias affect communication. Noble (2018) shows that certain search terms yield results reinforcing stereotypes, shaping public perception and providing a distorted view of reality. Given the importance of search engines in information access, these biases can have significant societal implications.
4.0 Transparency in AI Communication
4.1 Defining Transparency
Transparency in AI communication refers to how well AI systems and their processes can be understood by users and stakeholders. Explainable AI (XAI) is vital for this transparency, as it helps users grasp the reasoning behind AI-driven decisions. Adadi and Berrada (2018) define explainable AI as models and methods that enable users to trace decision-making processes, fostering accountability and trust in AI applications. This is especially important in sectors like media, healthcare, and finance, where transparency affects fairness and manipulation concerns (Lipton, 2018).
Open-source models enhance transparency by allowing external scrutiny and collaborative improvements. Zednik (2019) states that open-source systems empower stakeholders to identify biases and vulnerabilities, though they still require efforts to ensure public comprehension (Guidotti et al., 2019). Clarity in how data is processed and algorithms function is essential for user understanding and trust. Ribeiro et al. (2016) advocate for interpretable models and visualization tools to help users understand complex AI processes, particularly in content recommendations or moderation.
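As an illustration of the interpretability tools Ribeiro et al. (2016) propose, the sketch below applies LIME to a toy moderation-style text classifier. It assumes the `lime` and `scikit-learn` packages are installed; the tiny corpus, labels, and class names are invented for the example and stand in for a real moderation pipeline.

```python
# Sketch: word-level explanations for a toy "keep vs. flag" text classifier
# using LIME (Ribeiro et al., 2016). Corpus and labels are illustrative only.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great helpful post", "spam buy now", "useful clear summary", "buy cheap spam now"]
labels = [0, 1, 0, 1]  # 0 = keep, 1 = flag

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["keep", "flag"])
explanation = explainer.explain_instance(
    "buy this now", pipeline.predict_proba, num_features=3
)
print(explanation.as_list())  # per-word contributions toward the "flag" decision
```

Surfacing which words pushed a post toward "flag" is the kind of clarity the paragraph above calls for in content recommendation and moderation.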
4.2 Challenges to Transparency
Achieving transparency in AI is complex, particularly with advanced models like deep learning that often operate as "black boxes." Doshi-Velez and Kim (2017) note that these models, while effective, are difficult to interpret due to numerous interconnected variables, complicating public explanations and potentially eroding trust in content-driven AI systems.
Proprietary models further complicate transparency, as companies may limit access to protect intellectual property, hindering external audits and public confidence. Binns (2018) emphasizes the problems arising when companies withhold model details under the guise of competitive confidentiality, making it hard for users to assess the fairness of AI outcomes. Explaining AI decisions to non-expert users presents additional communication barriers. Miller (2019) discusses the challenge of translating technical AI logic into understandable language, which can foster skepticism among the public, especially in AI communication where trust is crucial.
4.3 Best Practices
To enhance transparency in AI communication, best practices include model interpretability, clear disclosures, and obtaining user consent:
- Model Interpretability: Designing interpretable models helps users understand outcome factors. Techniques like Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) offer insights into how variables influence predictions. Ribeiro et al. (2016) assert that interpretability is crucial for transparency, aiding user comprehension of AI-generated content; a brief sketch follows this list.
- Clear Disclosures: Transparency involves clear communication about AI usage, data sources, and limitations. Self-disclosure practices help set realistic user expectations. Rudin (2019) recommends disclosing model accuracy, biases, and uncertainties to prevent misinterpretations, particularly relevant in content moderation.
- User Consent: Obtaining informed user consent is essential for ethical transparency. Users should be informed about AI's purpose, data collection, and usage, maintaining privacy and trust, especially in social media. Floridi and Cowls (2019) emphasize that user consent should be integral to AI deployment, ensuring awareness of the system's goals and limitations.
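To make the interpretability point concrete, here is a minimal sketch of feature attributions with SHAP for a toy recommendation-style classifier. The feature names and data are invented placeholders, it assumes the `shap` and `scikit-learn` packages, and output shapes vary slightly across shap versions.

```python
# Sketch: per-feature SHAP attributions for a toy "recommend / don't recommend"
# classifier. Feature names and data are illustrative placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["watch_time", "post_length", "report_count", "account_age"]
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # toy label driven by two features

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # exact attribution method for tree ensembles
shap_values = explainer.shap_values(X[:3])   # contributions for three example items
print(shap_values)  # one attribution per feature per item; sign shows direction of influence
```

Attributions of this kind can accompany the disclosure and consent practices above, giving users a traceable reason for each recommendation.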
5.0 Accountability in AI Systems for Communication
5.1 Roles and Responsibilities
As AI systems become integral to communication, accountability issues arise concerning who is responsible for unintended harm or failures. Wachter, Mittelstadt, and Floridi (2017) assert that both developers and organizations have interconnected responsibilities in ensuring AI systems adhere to ethical standards, particularly concerning biased content moderation, misinformation, and privacy breaches. Both parties must maintain transparency, fairness, and accuracy in AI applications.
Developers are essential in incorporating ethical considerations into AI design, selecting training data, algorithms, and bias safeguards. Cath et al. (2018) emphasize that developers must anticipate ethical challenges, as their design choices impact user experiences. Organizations also share accountability, tasked with creating clear AI policies, providing user support, and regularly auditing AI performance to mitigate risks.
Organizational accountability is crucial for managing AI risks, especially when systems affect public discourse and individual rights. Jobin, Ienca, and Vayena (2019) highlight the need for robust accountability frameworks to monitor AI’s impact and address unintended consequences. This shared responsibility ensures ethical practices throughout AI deployment, preventing harm and fostering user trust.
5.2 Policy and Regulatory Frameworks
The demand for regulatory frameworks on AI accountability has led to global policies and guidelines. The European Union’s General Data Protection Regulation (GDPR) exemplifies this, requiring organizations to demonstrate transparency and responsibility in data handling, impacting AI systems reliant on user data (Veale, Binns, & Edwards, 2018). The GDPR’s “right to explanation” allows users to request information on AI decisions affecting them, enhancing accountability.
Emerging policies like the EU’s Artificial Intelligence Act focus on high-risk AI applications, mandating standards for transparency and user rights (European Commission, 2021). The U.S. National Institute of Standards and Technology (NIST) has also published guidelines for AI risk management, emphasizing organizational responsibility (National Institute of Standards and Technology, 2023). These measures reflect a growing recognition of AI accountability, aiming to protect users and promote responsible development.
Governments are forming committees to tackle accountability challenges and create ethical guidelines. Fjeld et al. (2020) note that frameworks like the OECD’s AI Principles encourage policies promoting transparency, fairness, and accountability, emphasizing the need for organizations to be accountable for AI’s ethical impacts on society. These principles offer foundational guidelines adaptable to various regulatory landscapes, promoting cross-national standards for AI accountability.
5.3 Examples of Accountability Measures
In response to accountability demands, several industry measures have been introduced to ensure ethical AI in communication. Audits and impact assessments are common methods for identifying AI-related risks and evaluating ethical implications. Raji et al. (2020) advocate for regular AI audits to assess model accuracy, bias, and transparency, enabling developers and organizations to detect and address issues early.
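One concrete check an algorithmic audit of the kind Raji et al. (2020) describe might include is a comparison of selection rates across groups (the informal "four-fifths rule"). The sketch below is a minimal illustration; the decisions and group labels are invented.

```python
# Sketch of a single audit metric: selection-rate ratios across groups.
# Decisions (e.g. ad shown = 1, withheld = 0) and group labels are invented.
import numpy as np

def selection_rate_ratio(decisions, groups, reference_group):
    """Each group's positive-decision rate divided by the reference group's rate."""
    decisions, groups = np.asarray(decisions), np.asarray(groups)
    ref_rate = decisions[groups == reference_group].mean()
    return {
        str(g): float(decisions[groups == g].mean() / ref_rate)
        for g in np.unique(groups)
    }

decisions = [1, 1, 1, 0, 1, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rate_ratio(decisions, groups, reference_group="A"))
# {'A': 1.0, 'B': 0.67}; ratios well below ~0.8 are a common red flag in audits.
```

A full audit would combine several such metrics with documentation of data provenance and model behavior, in line with the end-to-end framework cited above.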
Algorithmic Impact Assessments (AIAs) are another tool for fostering accountability, requiring organizations to evaluate potential social, economic, and ethical effects before deploying AI in communication. Whittaker et al. (2018) argue that impact assessments encourage organizations to consider broader AI consequences for user experience, fairness, and privacy, aligning AI deployment with ethical norms.
Additionally, industry-led initiatives like the Partnership on AI and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems are establishing best practices for AI accountability. These bodies provide resources and guidelines for ethical AI, aiding organizations in developing transparent and accountable communication systems. Binns (2018) emphasizes that these efforts illustrate a commitment to responsible AI, with companies voluntarily adopting ethical guidelines to maintain public trust.
6.0 Intersections of Bias, Transparency, and Accountability
6.1 Linkages and Overlaps
Bias, transparency, and accountability are interrelated ethical concerns in AI systems that influence each other in complex ways. Addressing bias often necessitates transparent practices, such as openly sharing data sources, model architecture, and evaluation metrics, which enable scrutiny of potential algorithmic biases. Binns (2018) asserts that transparency helps reduce bias by allowing stakeholders to understand and challenge AI decisions, especially when outcomes disproportionately impact certain groups. Additionally, accountability relies on transparency to accurately attribute responsibility; Raji et al. (2020) note that transparent processes, like algorithmic audits, help hold developers accountable for design choices that may lead to bias. However, achieving transparency while safeguarding user privacy and proprietary information presents challenges, creating a tension between openness and confidentiality (Ananny & Crawford, 2018). Moreover, mechanisms for accountability are necessary to uphold ethical standards over time. Wachter, Mittelstadt, and Floridi (2017) emphasize that establishing accountability frameworks can support ongoing efforts to reduce bias and enhance transparency, fostering an ethical "feedback loop" for system improvement.
6.2 Systemic Challenges
Balancing these ethical concerns involves systemic challenges, as pursuing one goal can sometimes compromise another. For instance, enhancing transparency by disclosing model details may inadvertently reveal vulnerabilities or lead to the misuse of proprietary information, complicating organizations' efforts to protect intellectual property while promoting accountability (Jobin, Ienca, & Vayena, 2019). Additionally, bias reduction efforts may require complex algorithmic adjustments that limit AI model interpretability, further complicating transparency initiatives. Ensuring accountability for AI decisions adds to this complexity, as companies may hesitate to disclose AI-related information due to reputational concerns. This hesitancy can erode public trust in AI systems, particularly when perceived as a lack of transparency obscuring bias or ethical issues. Whittaker et al. (2018) argue that designing AI systems that balance fairness, transparency, and accountability demands a multifaceted approach that addresses competing concerns to establish trustworthiness and reliability. Thus, a holistic approach to AI ethics is vital, where bias mitigation, transparency, and accountability are addressed together, reinforcing each other to create ethical, trusted AI systems.
7.0 Future Directions and Recommendations
7.1 Innovative Approaches
To tackle ethical concerns in AI-driven communication, innovative methods like decentralized AI and robust ethical frameworks are emerging as solutions. Decentralized AI enhances transparency and accountability by distributing development and decision-making across multiple nodes, reducing centralized biases and giving users more control (Shen et al., 2021). This contrasts with traditional centralized systems that obscure data processes and accountability, particularly when proprietary information limits transparency.
Ethical frameworks specifically designed for AI are also gaining traction. Rooted in principles like fairness, accountability, and transparency, these frameworks guide AI development to align with societal values (Floridi & Cowls, 2019). By integrating such frameworks into AI design, developers can embed ethical considerations directly into systems, minimizing bias and enhancing user trust.
7.2 Guidelines for Stakeholders
For developers, best practices in ethical AI involve incorporating fairness and bias-detection mechanisms throughout the development lifecycle, along with transparent reporting and regular algorithmic audits to ensure accountability. Policymakers can support ethical AI by establishing regulations that prioritize user consent, fairness, and openness in AI operations while safeguarding intellectual property (Jobin, Ienca, & Vayena, 2019). Communicators and AI users also require clear guidelines for data handling and model interpretation to foster public understanding and trust. They should convey AI processes transparently and responsibly, particularly in sensitive decision-making contexts, bridging the knowledge gap and mitigating misinformation risks.
7.3 Research Gaps
Despite progress, research gaps remain that could further advance ethical AI. More studies are needed to develop scalable, interpretable AI models that balance transparency and complexity without compromising ethical standards. Additionally, exploring decentralized AI structures' implications for promoting ethical principles is essential, as these models are still underexplored (Shen et al., 2021). Another priority is creating frameworks to assess AI's societal impact, particularly regarding how these systems influence behavior and perceptions. Establishing rigorous, evidence-based metrics to measure the effectiveness of bias mitigation and transparency initiatives will strengthen future AI systems in communication, ensuring they remain ethically sound and aligned with public expectations.
8.0 Conclusion
The integration of AI in communication presents significant challenges that must be addressed to uphold public trust and fairness. Key issues such as bias, transparency, and accountability can distort messages and obscure decision-making processes. This review emphasizes the need for a cohesive approach to ethical AI, combining ethical frameworks, clear transparency, and robust accountability measures. Through collaboration among developers, policymakers, and communicators, we can ensure that AI enhances communication responsibly, in line with societal values, and fosters positive outcomes.
REFERENCES
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.
Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., & Rieke, A. (2019). Discrimination through optimization: How Facebook’s ad delivery can lead to biased outcomes. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-30.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989.
Adamopoulou, E., & Moussiades, L. (2020). An overview of chatbot technology. Springer.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623.
Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, 149-159.
Brennen, S. (2020). The importance of media literacy in a digital age. Journal of Communication, 70(3), 299-308.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the Conference on Fairness, Accountability, and Transparency, 77-91.
Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1-12.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505-528.
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Crawford, K., & Paglen, T. (2019). Excavating AI: The politics of images in machine learning training sets. AI Now Institute.
Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56-62.
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., & Sandvig, C. (2018). "I always assumed that I wasn’t really that close to [her]": Reasoning about invisible algorithms in news feeds. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-14.
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. C., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Center Research Publication, (2020-1).
Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), 1-8.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1-15.
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86-92.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 1-42.
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25.
Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Communications of the ACM, 61(10), 36-43.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
National Institute of Standards and Technology. (2023). AI Risk Management Framework. U.S. Department of Commerce.
O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., & Winfield, A. F. (2019). Machine behaviour. Nature, 568(7753), 477-486.
Raji, I. D., Bender, E., Denton, E., Hanna, A., Mitchell, M., & Gebru, T. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Conference on Fairness, Accountability, and Transparency, 33–44.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144.
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215.
Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2019). Green AI. Communications of the ACM, 63(12), 54-63.
Speicher, T., Ali, M., Heidari, H., Grgic-Hlaca, N., Gummadi, K. P., & Weller, A. (2018). A unified framework for measuring and mitigating unintended bias in machine learning. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, 67-77.
Van Dijck, J., Poell, T., & De Waal, M. (2018). The Platform Society: Public Values in a Connective World. Oxford University Press.
Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180083.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76-99.
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S., Richardson, R., Schultz, J., & Schwartz, O. (2018). AI Now Report 2018. AI Now Institute, New York University.
Zednik, C. (2019). Solving the black box problem: A normative framework for explainable artificial intelligence. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 193-199.