Towards A Rights-Based AI Framework In India: Bridging Global Models And Constitutional Duties
Shreya Tiwari
26 July 2025 2:38 PM IST

Artificial Intelligence systems, when trained on biased data, risk institutionalizing discrimination against caste, class, and gender minorities. In India, where welfare schemes are lifelines for millions, algorithmic exclusion can have severe real-world consequences. A stark international parallel is Australia's Robodebt scandal[1], where an automated system wrongly accused over 400,000 welfare recipients of income fraud—resulting in fines, distress, and eventual judicial condemnation in 2019.
Closer home, predictive policing systems like Delhi's CMAPS (Crime Mapping Analytics and Predictive System)[2] replicate the biases of human police officers. Areas dominated by caste and religious minorities are disproportionately targeted, creating a self-reinforcing surveillance loop. In EdTech too, students from marginalized communities were disproportionately flagged during AI-based proctoring in the COVID-19 lockdowns, where the system mistook nervousness or discomfort with cameras for suspicious behavior.
These examples point to a dangerous gap: India lacks a legal or policy framework that provides enforceable individual rights against AI-driven decisions. Who is accountable when AI discriminates? What redressal exists when public algorithms malfunction? There is an urgent need for a rights-based AI regulation that guarantees transparency, explanation, appeal, redressal, and non-discrimination—principles rooted in the constitutional values upheld by the Supreme Court in K.S. Puttaswamy v. Union of India[3].
The Legal Vacuum In Existing Frameworks
India currently lacks a comprehensive legal framework specifically regulating AI. Despite this, AI is being deployed across critical sectors like healthcare, banking, education, and governance. This raises serious concerns when AI systems make decisions that affect fundamental rights.
The Digital Personal Data Protection (DPDP) Act, 2023[4] is a step forward in data governance. It includes provisions on user consent and purpose limitation, aiming to prevent misuse of personal data. However, it fails to address critical AI-specific challenges like black-box decision-making, algorithmic discrimination, and the absence of human oversight.
Consider, for example, a creditworthy loan applicant from a rural or economically weaker background who is fully eligible for a loan, but whose request is denied by an opaque AI system used by a public sector bank. The rejection may stem from biased patterns in the algorithm's training data — such as the applicant's residential pin code, education level, or socio-economic status. Who is responsible when such systems violate individual rights — the developer who designed the model, or the institution that deployed it without adequate oversight? How can the individual understand or contest a decision made by a system they cannot interpret? Such unchecked technological control could set society back by decades, introducing a modern form of discrimination that India's legal system is currently ill-equipped to address.
This highlights the need for a standalone AI regulation or Digital Rights Charter to ensure algorithmic decisions are subject to constitutional scrutiny, especially in high-impact public sectors.
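The kind of proxy discrimination described above can be surfaced by a simple statistical audit. The sketch below is purely illustrative — the groups, data, and the "80% rule" threshold (a widely used disparate-impact heuristic, not an Indian legal standard) are assumptions, not drawn from any actual bank's system:

```python
# Hypothetical audit: compare loan-approval rates across groups derived
# from a proxy attribute (e.g. pin code mapped to urban/rural) to detect
# disparate impact. All data below is invented for illustration.

def disparate_impact(decisions):
    """decisions: list of (group, approved) pairs.
    Returns per-group approval rates and the ratio of the lowest rate
    to the highest (values below ~0.8 suggest adverse impact under
    the common "80% rule" heuristic)."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Invented sample: urban applicants approved far more often than rural ones
sample = ([("urban", True)] * 8 + [("urban", False)] * 2
          + [("rural", True)] * 3 + [("rural", False)] * 7)
rates, ratio = disparate_impact(sample)
print(rates)  # {'urban': 0.8, 'rural': 0.3}
print(ratio)  # ≈ 0.375, well below the 0.8 threshold
```

An audit of this kind only detects disparity; deciding whether the disparity is unlawful discrimination remains a legal question — which is precisely why a statutory framework is needed.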
What Is A Rights-Based AI Framework?
A rights-based AI framework places individual rights and constitutional values at the center of AI regulation. It aims to balance technological advancement with justice, accountability, and human dignity. Such a framework must be grounded in enforceable rights like transparency, consent, fairness, and redressal.
Right To Explanation
Article 19(1)(a) of the Indian Constitution includes the right to information as a fundamental right, as interpreted by the Supreme Court in Union of India v. Association for Democratic Reforms[5]. Accordingly, there should be a statutory obligation on AI developers to design systems that are interpretable and transparent.
The European Union's AI Act[6] offers a relevant model — it classifies AI systems based on risk levels and, under Article 86, grants affected individuals the right to request clear and meaningful explanations regarding an AI system's role in decision-making.
In India, it must be mandatory for AI systems used in high-stakes sectors such as healthcare, banking, education, and law enforcement to provide clear and understandable explanations for their decisions. These explanations must be timely, accessible, and in language or formats understandable to the affected person — ensuring accountability beyond technical outputs.
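To make the idea of a "meaningful explanation" concrete, the sketch below shows one minimal approach: a transparent linear scoring model whose per-feature contributions are translated into a plain-language reason. The feature names, weights, and threshold are all invented for illustration; real credit models are far more complex, which is exactly why opaque ones resist this kind of explanation:

```python
# Hypothetical "right to explanation" output for a toy linear
# credit-scoring model. Weights and threshold are invented.

WEIGHTS = {"income": 0.5, "repayment_history": 0.4, "existing_debt": -0.6}
THRESHOLD = 1.0

def explain_decision(applicant):
    """Return a plain-language explanation of an approve/deny decision,
    naming the feature that most strongly drove the outcome."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    # Rank features by signed contribution (most negative first)
    reasons = sorted(contributions.items(), key=lambda kv: kv[1])
    verdict = "approved" if approved else "denied"
    top = reasons[-1] if approved else reasons[0]
    return (f"Loan {verdict} (score {score:.2f} vs threshold {THRESHOLD}). "
            f"Main factor: '{top[0]}' contributed {top[1]:+.2f}.")

# Invented applicant: high existing debt drags the score below threshold
msg = explain_decision(
    {"income": 1.0, "repayment_history": 0.5, "existing_debt": 1.5})
print(msg)  # explains the denial, citing 'existing_debt'
```

The point is not this particular technique but the obligation it illustrates: whatever model is used, the affected person receives a reason they can read, verify, and contest.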
Right To Appeal And Contest
A rights-based AI framework must include the right to human review of automated decisions. This encompasses transparency, timely notice, access to personal data used, and the ability to contest decisions made solely by AI.
To operationalize this, India must establish dedicated tribunals or courts for AI-related grievances. An AI Governance Board of India could oversee these institutions. The Board may include:
- One AI technology expert,
- One retired judge,
- One representative from MeitY (Ministry of Electronics and Information Technology),
- One member nominated by the Central Government.
This Board would establish standards and procedures for redressal authorities, much like the Bar Council of India's (BCI) role in regulating the legal profession and legal education. Appeal mechanisms must serve both individual grievances and broader structural accountability. Current systems like RTI or consumer courts are not equipped to manage the technical nuance of algorithmic harm. A specialized AI tribunal would fill this gap.
Right To Non-Discrimination: Algorithmic Fairness
Algorithms must uphold the constitutional values of equality, justice, and fairness, as enshrined in Articles 14, 19, and 21. Opaque decision-making can reinforce systemic bias. Given that AI has become part of our social infrastructure, it must reflect the values guiding public institutions. The goal is to ensure that equals are treated equally and that AI does not exacerbate social inequities.
Right To Consent And Notification
Individuals must know when AI is being used in decisions affecting them. Consent must be informed, voluntary, and uncoerced. The U.S. AI Bill of Rights blueprint[7], released in 2022, includes this right as foundational — mandating transparent communication in accessible language. India should adopt similar safeguards in its forthcoming AI legislation.
Right To Redressal
There must be accessible forums for grievance redressal and fair compensation. Whether it is a loan denial, welfare exclusion, or educational penalty, victims of algorithmic harm must receive timely, effective, and enforceable remedies. Without this, trust in AI systems will erode, and constitutional due process will be compromised.
India must ensure that AI development is grounded in ethics, rights, and constitutional values. Responsibility must rest not only with the technology but also with its developers and the institutions deploying it. Just as India mandates Environmental Impact Assessments (EIA), it should introduce AI Impact Assessments (AIA) for high-risk systems. These must be transparent, participatory, and stakeholder-inclusive.
The proposed AI Governance Board of India could oversee these assessments and guide grievance redressal structures. The regulatory framework must define AI broadly, covering any automated system operating without human oversight.
India has the opportunity to create a hybrid model that combines rights-based protections with the EU's risk-based classification approach. Such a framework will promote innovation while safeguarding justice and dignity for all.
A strong AI rights framework is not anti-innovation; it is pro-democracy. In a country with deep social inequities and strong constitutional protections, ensuring AI respects fundamental rights is not just desirable — it is essential.
The author is a final-year law student at Campus Law Centre, Delhi University. Views are personal.
References
1. Royal Commission into the Robodebt Scheme, Final Report, July 2023.
2. Marda, Vidushi and Narayan, Divij. 'Data in New Delhi's Predictive Policing System.' (2020), https://datajusticeproject.net/wp-content/uploads/sites/30/2020/07/New-Delhi-Predictive-Policing.pdf
3. K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
4. Digital Personal Data Protection Act, 2023 (Act No. 22 of 2023).
5. Union of India v. Association for Democratic Reforms, (2002) 5 SCC 294.
6. Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), Article 86.
7. White House Office of Science and Technology Policy, 'Blueprint for an AI Bill of Rights,' October 2022.