AI In Policing: Framing Issue Of Regulation

Dr. Pupul Dutta Prasad

4 Oct 2025 9:30 AM IST


Several police forces in India are embracing artificial intelligence (AI) tools to enhance their capabilities in areas such as preventing, detecting and investigating crime, apprehending offenders, and managing traffic operations. For example, Visakhapatnam Police has recently announced plans to install AI-powered automatic number plate recognition cameras, along with facial recognition cameras, at key traffic junctions. These technologies are intended to improve the identification and enforcement of traffic rule violations and to provide real-time surveillance to assist in catching criminal suspects. Goa Police has launched an AI-driven investigative tool ('Deep Trace') that uses publicly accessible data associated with mobile numbers, PAN cards, vehicle registrations, and other identifiers to track the digital footprints of suspects. Delhi Police reports that it has been using a satellite-linked Crime Mapping Analytics and Predictive System (CMAPS) to identify potential 'crime hotspots' and prevent criminal activity.

The promise and the peril

Given the persistent challenge of limited human resources within the police, AI's potential to support a range of policing functions is likely to attract growing interest. This is understandable from a utilitarian point of view. Take the recent hit-and-run case in Nagpur (Maharashtra), where the police traced an unidentified vehicle involved in the fatal incident through an AI-assisted review of four hours of CCTV footage. By turning the investigation into a swift, data-driven process, AI algorithms saved the police hours of manual work and increased the likelihood of detection. Where the police have hundreds of hours of CCTV footage to examine in relation to a serious crime, leveraging AI to maximise the chances of a breakthrough might even be considered an obligation.

AI is especially valuable in criminal investigations when traditional methods reach a dead end. In Delhi, the police used AI-based face reconstruction to identify an unrecognisable murder victim, which ultimately led to the arrest of the perpetrator. Similarly, in Kerala, AI-generated age-progressed images helped solve a 19-year-old cold case involving a woman and her twin infants, enabling the capture of long-absconding suspects.

It is probably just as well, then, that the police already seem convinced of AI's effectiveness. At the same time, partly because of the uncritical enthusiasm the police have shown towards AI, there is a perception that the technology is being deployed in a manner that is too police-friendly and that tends to expand policing powers beyond established limits. The premise is that AI tools are being implemented 'without much information or transparency' and within a framework that prioritises operational efficiency and crime control over civil liberties. This perspective underlines the risks of excessive surveillance, over-policing, and bias, raising a rights-based counterpoint to the utilitarian argument and bringing crucial issues of fairness, accountability, and democratic oversight into sharp focus.

The idea of an AI-specific law

To address concerns that police use of AI may undermine citizens' rights, experts and scholars have advocated regulation through legislation (see, for example, Mohanty and Sahu (2024) and Murugesan (2021)). However, such calls often frame the problem as purely legal, without sufficiently examining the assumption that enacting a specialised law alone could ensure that AI in policing is used responsibly. In the process, they also tend to sidestep difficult questions, including whether it is even possible to regulate AI technologies effectively in practice, let alone whether AI in policing might be a particularly slippery target for regulation.

While AI initiatives introduced or planned in policing are variously described as 'AI-enabled', 'AI-led', and 'AI-driven', it is important to recognise their essential nature. Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, asserts that today's AI remains firmly in the realm of additive technology: tools that extend and augment existing human capabilities rather than replace general human intelligence. Put differently, it is far removed from artificial general intelligence (AGI), which would be self-directing and capable of human-like cognition. However, Hassabis predicts that paradigm-shifting AGI could arrive in 5–10 years and suggests that it will necessitate entirely new governance models, not least because of the potential existential threat associated with it.

Acknowledging the fundamental distinction between AI and AGI has two significant implications for the idea of legally regulating contemporary AI-based interventions. First, AGI will render a law designed for current or more advanced versions of AI outdated simply because of its transformational character; it is a matter of when, not if. Second, and relatedly, if AI's role is to remain assistive rather than autonomous, existing legal boundaries (constitutional protections, criminal procedure, and privacy laws) should already cover AI-based actions. To demand a specific statute limiting how and for what purposes the police can use AI is, arguably, to impliedly admit an AI exceptionalism: as if adding AI makes certain police practices permissible when they otherwise would not be. This can detract from enforcing the foundational principles of criminal law, such as the presumption of innocence, proportionality, and due process, when the priority should be upholding them even in the face of new risks.

Learning from the past

Meanwhile, as the police in India increasingly adopt AI tools as force multipliers, experience with a previous-generation assistive technology offers valuable insight into what is necessary to safeguard against unjustified police actions. CompStat (short for 'computer statistics'), developed by the New York Police Department (NYPD) in the 1990s, is noted for pioneering data-driven decision-making in policing. The programme relies on historical crime data to identify hotspots and direct police resources with the aim of improving crime management. In doing so, it has also been found to incentivise the disproportionate targeting of historically overpoliced groups in order to boost performance metrics such as arrest, summons, and stop-and-frisk statistics. The lesson is that a policing strategy not grounded in the values of justice, accountability, and transparency can reproduce or deepen systemic bias under the illusion of objectivity and efficiency. This remains true regardless of the technology harnessed for policing. Significantly, one study has found that the use of facial recognition technology (FRT) by the police in Delhi is likely to result in a surveillance bias against certain sections of society.

No doubt, AI differs from older technologies in powerful and unsettling ways, and its technical risks should not be underestimated. AI in policing may not autonomously 'decide', but it can shift who holds real influence over decisions. It can process massive datasets in real time, and the authoritative format of its outputs can lead to automation bias: overreliance on AI outputs at the expense of human judgment. As a result, responsibility can become fragmented among police officers, technocratic managers, and technology vendors.

The inherent opacity of machine learning systems, combined with the discretionary aspects of police work, makes regulation especially challenging. Moreover, for any regulation to be effective, it will have to be nimble enough to keep pace with evolving knowledge of AI. It will also require international collaboration, because digital tools have a global reach. Without addressing these complex issues, any bespoke legal framework for AI regulation at the domestic level risks becoming more about formal compliance than about checking AI's misuse.

Facilitation or regulation?

Notably, Maharashtra, the state referenced earlier in connection with the police's use of AI to solve a hit-and-run case, has created a legal entity to facilitate more effective law enforcement through AI technologies. In March 2024, the state government entered into an agreement with the Indian Institute of Management, Nagpur, and Chennai-based M/s Pinaka Technologies Private Limited to set up a special purpose vehicle (SPV) named 'Maharashtra Advanced Research and Vigilance for Enhanced Law Enforcement' (MARVEL), registered as a private limited company. While MARVEL is fully owned by the government, it is reported that a standard operating procedure (SOP) is in place to enable data sharing between the police department and the company. This approach needs to be studied carefully to see whether it addresses some of the concerns discussed above.

Ultimately, policing is not about efficiency at any cost; it must be constrained by rights. Ensuring that the police employ AI to serve the public good requires a force that operates within a framework of democracy and accountability. A policing ideology that balances efficiency with equity, and surveillance with respect for rights, is equally indispensable. While police leadership has an understandable interest in using AI to improve efficiency, a comparable engagement with broader democratic concerns will help keep the public interest in the foreground.

(The writer is an IPS officer and holds a PhD in Social Policy from the London School of Economics and Political Science. He is currently working as Professor of Practice, Lloyd Law College, Greater Noida, on deputation. Views are personal.)

