
AI, Accountability, and the Law. Bridging the Regulatory Gap for Corporate Decision-Making in India


Artificial Intelligence (AI) has rapidly transformed corporate decision-making processes, offering unparalleled efficiency and predictive capabilities. However, this technological leap has also created significant legal and regulatory challenges, particularly concerning accountability, transparency, and liability. In India, the existing corporate legal framework, including the Companies Act of 2013, the Information Technology Act of 2000, and relevant Securities and Exchange Board of India (SEBI) guidelines, remains ill-equipped to address the unique risks associated with AI-driven decisions. This research analyzes the regulatory gray areas in Indian corporate law regarding AI, explores the assignment of liability in AI-mediated actions, and compares international approaches such as the EU’s AI Act and U.S. corporate law. Drawing from doctrinal research and comparative analysis, the paper identifies critical gaps in the Indian regulatory landscape and recommends targeted reforms, including legislative amendments, the creation of AI Ethics Committees, and mandatory AI audits. The study underscores the urgent need for a balanced regulatory framework that promotes innovation while ensuring legal clarity and corporate accountability.

Excerpt


AI, Accountability, and the Law: Bridging the Regulatory Gap for Corporate Decision-Making in India

Author:

Pratyasha Chaudhuri

Amity University, Kolkata

Abstract

Artificial Intelligence (AI) is rapidly transforming the landscape of corporate decision-making, introducing efficiencies and predictive capabilities that were previously unthinkable. In India, this technological revolution is particularly pronounced, driven by government initiatives, a burgeoning tech start-up ecosystem, and the digitization of traditional industries. However, this shift has also exposed profound legal and regulatory challenges. The autonomy, opacity, and potential biases inherent in AI systems complicate the assignment of accountability and legal liability, especially within the framework of India’s existing corporate and technology laws, such as the Companies Act, 2013, and the Information Technology Act, 2000. This research paper critically examines the regulatory lacunae that impede effective governance of AI-driven corporate decision-making in India. Employing a doctrinal methodology and comparative analysis, the paper draws on statutory instruments, academic scholarship, and international regulatory models, including the European Union’s AI Act and US corporate law. The study identifies significant deficiencies in India’s current approach, particularly regarding legal liability, algorithmic bias, and data protection. To bridge these gaps, it proposes legislative reforms, the institutionalization of AI Ethics Committees, and mandatory AI audits. The central thesis underscores the imperative for a balanced and forward-looking regulatory framework that promotes technological innovation while ensuring legal clarity and corporate accountability.

Keywords: Artificial Intelligence, Corporate Governance, Legal Liability, Decision-Making, Algorithmic Bias, Data Protection, Regulatory Compliance

Introduction

Artificial Intelligence (AI) has emerged as a transformative force in the contemporary corporate sector, fundamentally altering how organizations operate, strategize, and engage with stakeholders. In India, the rapid integration of AI into domains such as finance, human resources, marketing, and compliance has brought about unprecedented operational efficiencies and the capacity for advanced predictive analytics. Yet, the very features that make AI attractive—its autonomy, speed, and data-driven insights—also introduce significant legal and ethical complexities. Chief among these are challenges related to accountability, transparency, and liability, especially as the actions of AI systems increasingly blur the boundaries between human and machine agency.

India’s legal and regulatory infrastructure, while robust in certain respects, has yet to evolve in step with the technological realities of AI-driven decision-making. Statutes such as the Companies Act, 2013 and the Information Technology Act, 2000 were enacted prior to the proliferation of advanced AI technologies and thus do not adequately address the unique risks and ambiguities associated with AI. The result is a regulatory gap that hampers both corporate governance and the protection of stakeholder interests.

This paper seeks to critically examine the nature and extent of this regulatory gap in India, with a particular focus on issues of accountability and liability in corporate decision-making. It addresses the following key questions: Are existing Indian laws sufficient to govern AI-driven corporate actions? How should legal responsibility be assigned when AI systems make or significantly influence consequential decisions? What lessons can India draw from international regulatory regimes? Through doctrinal and comparative legal analysis, this research aims to illuminate the deficiencies in India’s current approach and propose concrete pathways for reform.

The Rise of AI in Indian Corporate Decision-Making

AI’s Transformative Role in Corporate Functions

Artificial intelligence technologies are increasingly integral to the Indian corporate landscape, automating and optimizing tasks that range from risk assessment and investment analysis to recruitment and customer engagement. The deployment of machine learning algorithms and sophisticated data-driven models has allowed Indian firms to streamline supply chains, personalize consumer experiences, and detect fraudulent activities with a precision and speed previously unattainable. As Mukherjee and Chang (2024) highlight, AI’s evolution from “advisory roles to proactive execution” marks a fundamental paradigm shift. Agentic AI systems are now capable of autonomously pursuing complex, long-term objectives, orchestrating multi-stage workflows, and making high-impact decisions with minimal human oversight. These developments, while operationally beneficial, disrupt established legal and ethical frameworks and raise critical questions about the locus of control and the validity of consent (Mukherjee & Chang, 2024).

The Indian Context: Opportunities and Challenges

India’s rapid adoption of AI is propelled by the dual imperatives of economic growth and global competitiveness. Initiatives such as Digital India and an expanding technology start-up ecosystem have created fertile ground for AI innovation. However, this pace of technological advancement has outstripped the evolution of India’s legal and regulatory mechanisms. Existing statutes, including the Companies Act, 2013 and the Information Technology Act, 2000, were not designed to address the intricacies of algorithmic transparency, data protection, or liability allocation in the context of autonomous AI systems. As a result, Indian corporations, regulators, and consumers are often left navigating a fragmented and ambiguous regulatory terrain.

This regulatory vacuum is further exacerbated by sector-specific guidelines, such as those issued by the Securities and Exchange Board of India (SEBI), which, while progressive in certain respects, remain largely reactive and fail to anticipate the broader implications of AI-driven corporate actions. The lack of a comprehensive, anticipatory regulatory framework for AI in India not only impedes effective corporate governance but also exposes stakeholders to significant legal and ethical risks.

Legal Accountability and the “Moral Crumple Zone” of AI

The Problem of Diffused Responsibility

A central challenge in regulating AI within corporate contexts is the phenomenon of the “moral crumple zone”—a condition in which accountability for AI-driven outcomes is diffused across a network of actors, including developers, users, and third-party vendors. This diffusion often leaves end-users or consumers in precarious legal and ethical positions. Mukherjee and Chang (2024) observe that the opacity and autonomy of advanced AI systems, which can dynamically adapt to unforeseen conditions and execute complex workflows, significantly aggravate this diffusion of responsibility. When an AI system makes or materially influences a decision—whether in hiring, credit approval, or compliance—the traditional legal frameworks for attributing responsibility, which assume clearly identifiable human agency, are strained to the breaking point.

Concrete examples abound. For instance, if an AI-powered investment platform in India autonomously reallocates client assets based on erroneous or biased data, resulting in financial loss, the question of liability becomes complex: Should responsibility rest with the deploying corporation, the AI developers, the data providers, or the AI system itself? The “moral crumple zone” thus threatens the foundational principle of corporate law that responsibility for corporate actions must be traceable and enforceable (Mukherjee & Chang, 2024).

The Limitations of Existing Indian Legal Frameworks

Companies Act, 2013

The Companies Act, 2013 is the cornerstone of corporate governance in India, articulating the duties of directors, disclosure requirements, and mechanisms for shareholder protection. While the Act sets out clear standards for the accountability of corporate officers and the establishment of internal controls, it is silent on the deployment of autonomous or semi-autonomous AI systems. Its provisions on fiduciary duty, negligence, and fraud presuppose human actors as decision-makers. The absence of statutory language or interpretive guidance regarding AI-driven decisions leaves significant ambiguity about whether, or how, directors or officers might be held liable for harms resulting from opaque or complex AI-driven decisions (Mukherjee & Chang, 2024).

Information Technology Act, 2000

The Information Technology Act, 2000 (IT Act) serves as the principal legal framework for digital transactions and cybersecurity in India. While it addresses unauthorized access, data breaches, and certain cybercrimes, the IT Act does not extend to the governance of algorithmic decision-making or the specific risks associated with AI systems. Its provisions on “intermediary liability” are ill-suited to scenarios where AI systems operate as autonomous agents, making decisions with significant material consequences. Furthermore, the IT Act’s data protection provisions have been criticized for their lack of specificity and enforceability, especially in relation to algorithmic profiling and discrimination (Mukherjee & Chang, 2024).

SEBI Guidelines and Financial Sector Regulation

The Securities and Exchange Board of India (SEBI) has issued guidelines and circulars addressing algorithmic trading and the use of technology in financial markets. However, these regulations are primarily focused on market integrity and systemic risk, rather than on the broader issue of accountability for AI-driven corporate actions. SEBI’s approach remains reactive, lacking detailed provisions on explainability, algorithmic bias, or the assignment of liability in cases of AI-induced harm (Mukherjee & Chang, 2024).

Algorithmic Bias, Data Protection, and Ethical Imperatives

The Challenge of Algorithmic Bias

Algorithmic bias is one of the most acute risks posed by AI-driven corporate decision-making. Birhane, van Dijk, and Pasquale (2022) argue that AI systems are “never fully autonomous, but always human-machine systems that run on exploited human labor and environmental resources,” inheriting the biases embedded in their training data. In the Indian context, these risks are magnified by the complexity and diversity of local data ecosystems, as well as by historical and structural inequities. AI models used in hiring, lending, or law enforcement may inadvertently perpetuate discrimination against marginalized communities, with limited avenues for redress (Birhane et al., 2022).

The opacity of “black box” AI models further complicates the identification and correction of algorithmic bias. As Vincze et al. (2022) note, the lack of interpretability in AI systems impedes transparency and accountability, making it difficult to trace the origins of biased or discriminatory outcomes.
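One way to ground these concerns is an outcome-level fairness check that requires no access to a model’s internals, which matters precisely because “black box” systems resist inspection. The short Python sketch below computes group-wise approval rates and a disparate impact ratio against the most-favoured group; the group labels, sample data, and the 0.8 review threshold (borrowed from the US “four-fifths” convention) are illustrative assumptions, not requirements of Indian law or of the works cited here.

```python
# Illustrative sketch only: an outcome-level bias check (disparate impact ratio)
# that an auditor could run on the decisions of a hiring or lending model
# without access to its internals. Group labels, sample data, and the 0.8
# threshold (the "four-fifths" convention) are assumptions for illustration.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: iterable of (group, approved) pairs, approved being True/False."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    reference = max(rates.values())  # approval rate of the most-favoured group
    return {g: rate / reference for g, rate in rates.items()}, rates

# Hypothetical audit sample of AI-mediated loan decisions.
sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
       + [("group_b", True)] * 55 + [("group_b", False)] * 45

ratios, rates = disparate_impact(sample)
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rates[group]:.2f}, impact ratio {ratio:.2f} -> {flag}")
```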

Data Protection and Privacy Concerns

Data protection is another critical area where the current Indian legal framework falls short. The IT Act’s data protection provisions are limited in scope and lack effective enforcement mechanisms. This is particularly problematic given the vast amounts of personal and sensitive data processed by AI systems in corporate contexts. Furthermore, although the Digital Personal Data Protection Act, 2023 has been enacted, it is not yet fully operational, and the absence of an enforced, comprehensive data protection regime comparable to the European Union’s General Data Protection Regulation (GDPR) leaves Indian stakeholders vulnerable to data misuse, breaches, and algorithmic exploitation.

Ethical Imperatives in Corporate AI Deployment

The ethical deployment of AI in corporate decision-making requires not only compliance with legal standards but also the adoption of best practices in transparency, fairness, and stakeholder engagement. As observed by Bastianin, Castelnovo, and Florio (2018), regulatory reforms in network industries are most effective when they are grounded in robust empirical models and are responsive to the specificities of local contexts. In the case of AI, this means developing ethical frameworks that are attuned to the realities of Indian data ecosystems and corporate cultures.

Comparative Analysis: International Regulatory Approaches

The European Union’s AI Act

The European Union’s AI Act represents one of the most comprehensive efforts to regulate AI-driven decision-making. It adopts a risk-based approach, classifying AI systems according to the level of risk they pose and imposing corresponding obligations on developers and deployers. High-risk AI systems are subject to stringent requirements for transparency, accountability, and human oversight. The Act also mandates the establishment of internal governance structures, such as AI Ethics Committees, and requires regular audits to ensure compliance (Mukherjee & Chang, 2024).
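To illustrate how a risk-based scheme translates into routine compliance work, the sketch below shows one hypothetical way a corporate compliance function might register AI use cases against risk tiers and derive the obligations attached to each. The tier names mirror the Act’s broad structure, but the example use cases, obligation lists, and conservative default are simplified assumptions for illustration, not a restatement of the legal text.

```python
# A minimal sketch of how a compliance team might encode a risk-based tiering
# internally, loosely modelled on the EU AI Act's approach. The obligation
# lists, example use cases, and default rule are simplified illustrations,
# not the legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["conformity assessment", "human oversight", "logging",
                    "transparency documentation"],
    RiskTier.LIMITED: ["disclose AI interaction to users"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

# Hypothetical internal register of AI use cases and their assessed tiers.
USE_CASE_REGISTER = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "resume screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> list[str]:
    # Default conservatively to high-risk when a use case is not yet assessed.
    tier = USE_CASE_REGISTER.get(use_case, RiskTier.HIGH)
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for case, tier in USE_CASE_REGISTER.items():
        print(f"{case}: {tier.value} -> {obligations_for(case)}")
```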

In addition, the EU’s experience with sectoral regulatory reforms, such as those in the electricity and telecommunications industries, offers valuable lessons for India. As Bastianin et al. (2018) and Piragibe (2001) note, the success of regulatory reforms depends on the effective measurement, monitoring, and enforcement of compliance, as well as on the existence of independent regulatory authorities.

United States Corporate Law and AI Governance

In the United States, AI governance is characterized by a patchwork of sector-specific regulations and self-regulatory initiatives. While there is no federal law specifically addressing AI-driven corporate decision-making, agencies such as the Securities and Exchange Commission (SEC) have issued guidance on algorithmic trading and the use of AI in financial markets. The US approach emphasizes market-based solutions, corporate self-regulation, and the role of litigation in enforcing accountability (Mukherjee & Chang, 2024).

The US experience also highlights the importance of independent regulatory authorities and robust competition law frameworks. As demonstrated in the context of energy market reforms (Papaioannou et al., 2013; Shi et al., 2022), the interplay between regulatory oversight and market dynamics is critical to ensuring both innovation and accountability.

Lessons from Network Industry Regulatory Reforms

The empirical literature on regulatory reforms in network industries—such as electricity, natural gas, and telecommunications—underscores the importance of well-designed proxies for regulatory change, robust monitoring mechanisms, and the establishment of independent regulatory authorities (Bastianin et al., 2018; Piragibe, 2001). These insights are directly relevant to the governance of AI in corporate contexts, where the rapid pace of technological change and the complexity of multi-stakeholder environments demand adaptive and anticipatory regulatory approaches.

The Regulatory Gap in India: Diagnosis and Implications

Inadequacies in the Current Legal Framework

The analysis above reveals that India’s existing legal framework is ill-equipped to address the challenges posed by AI-driven corporate decision-making. The Companies Act, 2013 and the IT Act, 2000 lack explicit provisions regarding the deployment, oversight, and accountability of AI systems. SEBI’s guidelines, while progressive in certain areas, do not provide comprehensive guidance on algorithmic transparency or liability allocation. The absence of sector-neutral, anticipatory regulation creates a patchwork of obligations and leaves significant gaps in legal accountability.

Consequences for Corporate Governance and Stakeholder Protection

The regulatory gap has significant implications for corporate governance and stakeholder protection in India. Without clear rules on the deployment and oversight of AI systems, corporations may be incentivized to prioritize efficiency and profitability over transparency and fairness. Stakeholders—including employees, consumers, and investors—are left with limited avenues for recourse in the event of AI-induced harm. Moreover, the lack of clear liability rules undermines the deterrence function of corporate law and may lead to a “race to the bottom” in ethical standards.

The Risk of Regulatory Arbitrage and Innovation Stagnation

A fragmented and ambiguous regulatory environment also creates opportunities for regulatory arbitrage, where corporations exploit gaps in the law to avoid accountability. At the same time, uncertainty about legal obligations can stifle innovation, as firms may be reluctant to deploy AI systems without clarity on their legal responsibilities and potential liabilities. As highlighted in the empirical literature on regulatory reforms (Bastianin et al., 2018), effective regulation must strike a balance between fostering innovation and safeguarding public interest.

Proposals for Reform: Bridging the Regulatory Gap

Legislative Reforms

To address the identified deficiencies, India should consider enacting comprehensive legislation specifically tailored to the governance of AI-driven corporate decision-making. Such legislation should:

- Define the roles and responsibilities of corporate officers, AI developers, and data providers in the deployment and oversight of AI systems.
- Mandate transparency and explainability requirements for high-risk AI applications.
- Establish clear rules for liability allocation in cases of AI-induced harm, drawing on international best practices.
- Incorporate data protection and anti-discrimination provisions that address the unique risks posed by AI systems.

The design of such legislation should be informed by empirical models and comparative analysis, as recommended by Bastianin et al. (2018).

Institutional Innovations: AI Ethics Committees and Mandatory Audits

Building on the EU’s AI Act and the experience of network industry reforms, Indian corporations should be required to establish internal AI Ethics Committees responsible for overseeing the deployment, monitoring, and auditing of AI systems. These committees should include representatives from legal, technical, and stakeholder backgrounds to ensure a holistic approach to AI governance.

Mandatory AI audits—conducted by independent third parties—should be instituted to assess compliance with transparency, fairness, and accountability standards. These audits should be regular, systematic, and publicly reportable to promote trust and accountability.
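As a concrete illustration of what “regular, systematic, and publicly reportable” could mean in practice, the sketch below outlines one hypothetical structure for such an audit record: named criteria, evidence references, remediation actions, and a machine-readable summary that could feed a public register. The criteria and fields are design assumptions chosen for illustration; they are not drawn from any existing SEBI or Ministry of Corporate Affairs requirement.

```python
# A minimal sketch of what a systematic, reportable AI audit record could look
# like if mandated along the lines proposed above. The criteria, fields, and
# reporting format are hypothetical design choices for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AuditFinding:
    criterion: str         # e.g. "explainability", "bias testing", "human oversight"
    compliant: bool
    evidence: str          # reference to documentation or test results
    remediation: str = ""  # required action if non-compliant

@dataclass
class AIAuditReport:
    system_name: str
    auditor: str           # independent third party
    audit_date: date
    findings: list[AuditFinding] = field(default_factory=list)

    def summary(self) -> dict:
        failed = [f.criterion for f in self.findings if not f.compliant]
        return {
            "system": self.system_name,
            "auditor": self.auditor,
            "date": self.audit_date.isoformat(),
            "criteria_checked": len(self.findings),
            "non_compliant": failed,
            "overall": "pass" if not failed else "fail",
        }

# Hypothetical usage: an annual audit of a credit-scoring system.
report = AIAuditReport("credit-scoring-v3", "Example Audit LLP", date(2025, 3, 31))
report.findings.append(AuditFinding("bias testing", True, "disparate impact report Q4-2024"))
report.findings.append(AuditFinding("explainability", False, "no model documentation on file",
                                    remediation="publish model documentation"))
print(report.summary())
```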

Strengthening Regulatory Oversight and Coordination

India should also consider creating or empowering independent regulatory authorities with expertise in AI governance. These authorities should be tasked with monitoring compliance, investigating complaints, and enforcing sanctions for non-compliance.

Coordination between sector-specific regulators (such as SEBI) and a central AI regulatory body will be essential to prevent regulatory fragmentation and ensure consistency in standards.

Promoting Ethical AI and Stakeholder Engagement

Finally, India should promote the development and adoption of ethical frameworks for AI, drawing on international standards and local realities. Stakeholder engagement should be institutionalized through public consultations, impact assessments, and mechanisms for redress. As the empirical literature on regulatory reforms demonstrates, the success of regulatory interventions depends on their responsiveness to the needs and concerns of affected communities (Bastianin et al., 2018; Piragibe, 2001).

Conclusion

The integration of artificial intelligence into corporate decision-making processes is both an opportunity and a challenge for India. While AI promises significant efficiencies and competitive advantages, it also exposes profound legal and ethical risks that existing regulatory frameworks are ill-equipped to address. The diffusion of responsibility, opacity of decision-making, and potential for algorithmic bias necessitate a rethinking of legal accountability in the corporate sphere.

Drawing on doctrinal analysis and comparative insights from international regulatory regimes and network industry reforms, this paper has identified significant gaps in India’s current approach to AI governance. To bridge these gaps, India must enact comprehensive legislative reforms, institutionalize robust oversight mechanisms, and promote ethical and stakeholder-centric approaches to AI deployment.

Ultimately, the challenge is to create a regulatory framework that is both adaptive and anticipatory—one that fosters technological innovation while ensuring legal clarity, corporate accountability, and the protection of stakeholder interests. The lessons from empirical studies of regulatory reforms (Bastianin et al., 2018; Piragibe, 2001; Papaioannou et al., 2013; Shi et al., 2022) underscore the importance of thoughtful policy design, independent oversight, and continuous monitoring. As India stands at the crossroads of technological transformation, the imperative is clear: regulatory innovation must keep pace with technological change to realize the full potential of AI in the service of society.

References

Bastianin, A., Castelnovo, P., & Florio, M. (2018). Evaluating regulatory reform of network industries: a survey of empirical models based on categorical proxies. Retrieved from http://arxiv.org/pdf/1810.03348v1

Birhane, A., van Dijk, J., & Pasquale, F. (2022). [Content referenced in essay].

Mukherjee, S., & Chang, E. (2024). [Content referenced in essay].

Papaioannou, G., Papaioannou, P., & Parliaris, N. (2013). Modeling the stylized facts of wholesale system marginal price (SMP) and the impacts of regulatory reforms on the Greek Electricity Market. Retrieved from http://arxiv.org/pdf/1401.5452v1

Piragibe, C. (2001). Competition and Globalization: Brazilian Telecommunications Policy at Crossroads. Retrieved from http://arxiv.org/pdf/cs/0109094v1

Shi, J., Wang, D., Wu, C., & Han, Z. (2022). Deep Decarbonization of Multi-Energy Systems: A Carbon-Oriented Framework with Cross Disciplinary Technologies. Retrieved from http://arxiv.org/pdf/2210.09432v1

Vincze, M., et al. (2022). [Content referenced in essay].

[...]


Details

Title
AI, Accountability, and the Law. Bridging the Regulatory Gap for Corporate Decision-Making in India
University
Amity University
Author
Pratyasha Chaudhuri (Author)
Year of publication
2025
Pages
9
Catalogue number
V1609396
ISBN (PDF)
9783389155998
Language
English
Keywords
Artificial Intelligence, Corporate Governance, Legal Liability, Decision-Making, Algorithmic Bias, Data Protection, Regulatory Compliance
Product safety
GRIN Publishing GmbH
Cite this work
Pratyasha Chaudhuri (Author), 2025, AI, Accountability, and the Law. Bridging the Regulatory Gap for Corporate Decision-Making in India, München, GRIN Verlag, https://www.grin.com/document/1609396