Governance Of Artificial Intelligence (AI) In India

Update: 2025-07-15 04:45 GMT

The word 'intelligence' has its etymological roots in the word 'intelligere', which means “to understand”. When an individual has the ability to comprehend and engage with a particular situation, he is said to possess the required degree of 'intelligence'. This ability to deal with a given scenario could be based on a variety of factors, which may include inherent proclivities, training, tendency to deviate from established norms in light of practical difficulties, etc.

The National Strategy for Artificial Intelligence (2018) defines 'Artificial Intelligence' (AI) as “a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act.” As every innovative idea is capable of being misused, abused, and spiralling out of control, it is regulated and, if the need arises, restricted. In the case of AI, there is often a fear of 'AI gone rogue', wherein AI itself purports to determine what the best interests of humans are.

The Global Partnership on Artificial Intelligence (GPAI) conducted its Sixth Ministerial Council Meeting in 2024, chaired by India. The resulting GPAI New Delhi Declaration, inter alia, acknowledged 'the emerging risks and challenges posed by AI systems, particularly advanced AI systems'. Against this backdrop, it is essential to analyse the steps undertaken by India, through the Government and other bodies, to regulate the usage of AI.

In cases involving the usage of AI for false and fraudulent purposes, including generating pictures resembling individuals (popularly called 'deepfakes'), the existing legal framework punishes such acts under conventional laws like the Indian Penal Code (now the Bharatiya Nyaya Sanhita), the IT Act, etc. However, these are piecemeal measures that offer little more than a band-aid. There is a real gap in the law because such criminal usage of AI is, at present, not specifically regulated by any statute.

Government's approach towards AI Regulation:

  1. Digital India Act: In 2022, the Government of India announced that a new Act, entitled the 'Digital India Act', would replace the existing Information Technology Act (IT Act). One of the principal reasons behind this was that the degree of development in the sectors governed by the IT Act was so high that mere amendments to the existing framework would have proved inadequate. Accordingly, in 2023, the Union Minister of State held consultations with stakeholders, and a Presentation was put forth by the Ministry. However, the Presentation has since been removed from the Ministry's website.
  2. DPDP Act: After nearly five years of drafting and redrafting, the Digital Personal Data Protection Act (DPDP Act) was finally passed in 2023. However, the Rules necessary to implement certain provisions of the Act are still in the making. A draft of the Rules was published for public comments on 03 January, 2025, but the final Rules are yet to be notified.

The definitions given in the Act point to the fact that processing of data by AI would be covered by the DPDP Act. 'Processing' covers 'wholly or partly automated operation or set of operations performed on digital personal data'. Similarly, the term 'Automated' is defined as 'any digital process capable of operating automatically in response to instructions given or otherwise for the purpose of processing data'.

Until the Rules are made, personal data would be governed by the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. These Rules provide a very rudimentary framework for the protection of personal data.

  3. National Strategy for AI: The NITI Aayog published the National Strategy for Artificial Intelligence in 2018. It advised that regulations should be framed separately for each sector connected with AI. While dealing with the relationship between privacy and AI, it encouraged 'self-regulation' through the use of 'Data Privacy Impact Assessment Tools'. Another suggested approach was the implementation of a negligence test with a safe harbour provision to limit liability where the necessary steps have been taken, instead of 'strict liability' for damages caused by AI software.
  4. Working Groups: The Government of India constituted seven working groups to collectively give recommendations on seven sectors called IndiaAI pillars. The combined Report of these working groups was published in October 2023.
  5. MeitY sub-committee's recommendations: The Ministry of Electronics and Information Technology (MeitY) constituted a sub-committee to analyse the gaps and offer recommendations for the development of a comprehensive framework for the governance of Artificial Intelligence. The sub-committee sought feedback from the public on its proposed recommendations, which it released in the form of a report on 06 January, 2025.

The Report proposes certain principles for AI Governance:

  • Transparency;
  • Accountability;
  • Safety, Reliability & Robustness;
  • Privacy & Security;
  • Fairness & Non-Discrimination;
  • Human-centred values & 'do no harm';
  • Inclusive & sustainable innovation;
  • Digital by design governance.

It then lays down certain considerations for operationalising these principles.

This Report considers 'AI-led bias and discrimination'. It highlights the distinction between discrimination in a non-AI setting and discrimination in the context of AI, and hints at the necessity of regulating AI so that it is not biased in its operations.

Among the recommendations contained in this Report, the sub-committee stresses the necessity of implementing a 'whole-of-Government' approach. This would entail bringing together the various authorities that would deal with the governance of AI at the national level, since a sustained, collaborative and coordinated approach by these bodies would ensure better regulation of AI.

Thus, it can be seen that the Government has been laying the foundation for a comprehensive framework for the specialised regulation of cyberspace. Yet, no concrete legislative steps have been taken in this regard.

Approach of other bodies and authorities:

1. Reserve Bank of India (RBI)

On 26 December 2024, the Reserve Bank of India (RBI) set up an eight-member committee to develop a 'framework for responsible and ethical enablement of artificial intelligence (AI) in the financial sector'. The Committee's terms of reference envisage a study of AI in the financial space both domestically and internationally and a recommendation of a framework for the Indian Financial Sector. The Committee's report and recommendations are awaited.

2. Securities and Exchange Board of India (SEBI)

On 20 June 2025, the Securities and Exchange Board of India (SEBI) published a Consultation Paper to obtain the views of the public and stakeholders on proposed 'guiding principles' for the responsible usage of AI and Machine Learning applications/models in the securities markets. The proposals emerged from a working group constituted by SEBI and take into account NITI Aayog's Principles for Responsible AI (2021) and the existing guidelines of the International Organisation of Securities Commissions (IOSCO).

It is clear that the Government and various other authorities have begun the process of enacting comprehensive regulations around the usage of AI in their respective spheres. While the planning and policy-making aim to leave no gaps in regulation, the real test will be the enforcement of such future regulations. Whether they will be followed in letter and spirit, only time will tell.


Author: Aastha Abhya, Founder And Managing Partner, Atreus Law Firm. Views are personal.

