The Mole That AI Revealed: Hallucinations And Biometric Privacy Risks
Harshali Chowdhary
2 Oct 2025 5:00 PM IST

A recent viral post shook Instagram's vintage saree trend. A user generated an AI version of her photo through Gemini and was startled to see a mole appear on her left arm, a detail true in real life but hidden in the original, full-sleeved picture she had uploaded[1]. The uncanny addition sparked questions: did the AI somehow “know,” or was it simply inventing features? And, more importantly, what does this mean for our privacy when we share images with AI tools?
At the outset, the real question is: what exactly happened here? The explanation lies not in mystery but in method, in the way AI systems generate images. The mole was not an act of revelation, but a product of mathematics.
One of the most influential techniques behind such image generation is the Generative Adversarial Network (GAN)[2]. First introduced by Ian Goodfellow and his team in 2014, it revolutionized how machines create new and realistic data. Unlike traditional models that only recognize or classify existing information, GANs take a generative approach: they produce entirely new content, whether images, videos, or music, that closely resembles real-world data. When parts of an image are obscured, such as a covered arm, the model does not 'see through' the clothing. Instead, it predicts and fills in the missing details using patterns it has learned from millions of training examples. Sometimes these predictions are harmless, like adding folds to fabric, but occasionally they align uncannily with reality, as in the case of the mole.
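For readers curious about the mechanics, the following is a minimal sketch of a GAN in Python (using the PyTorch library), trained on a toy one-dimensional distribution rather than photographs. Every detail in it, the layer sizes, learning rates and data, is an illustrative assumption, not the design of Gemini or any commercial tool; the point is only to show the adversarial loop in which a generator learns to produce data that a discriminator cannot tell apart from the real thing.

# Minimal, illustrative GAN (PyTorch). Toy example only: the generator learns to
# mimic a simple 1-D distribution, not to produce photographs. All architecture
# choices and hyperparameters here are assumptions made for illustration.
import torch
import torch.nn as nn

def real_data(n):                      # "real" samples drawn from N(4, 1.25)
    return torch.randn(n, 1) * 1.25 + 4.0

def noise(n):                          # random input fed to the generator
    return torch.randn(n, 8)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())    # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("mean of generated samples:", G(noise(1000)).mean().item())  # drifts toward ~4

The crucial takeaway is that the generator never inspects hidden information; it only learns to output whatever is statistically plausible given the data it was trained on.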
This brings us to an important distinction: when AI generates something that was never there in the original image, is it a simple hallucination or a deliberate deepfake[3]? These two terms are often used interchangeably in popular debate, but they mean very different things in the world of artificial intelligence. An AI hallucination happens when the system invents details that do not exist, such as adding a mole or scar, simply because it has seen similar patterns in its training data. A deepfake, by contrast, is a deliberate manipulation of real images or videos to make them appear genuine, such as swapping faces or fabricating events. In simple terms, hallucinations are accidental creations, while deepfakes are intentional deceptions.
So what happened in this mole case? It was not a deepfake but an AI hallucination. The system did not 'know' the user's hidden mole or scan beneath her clothing. Instead, trained on countless images of women in sarees, the AI statistically 'guessed' and added a mole where such features often appear. By coincidence, this fabricated detail overlapped with the user's real mole, making it seem as though the AI had uncovered something private. In reality, it was mathematics masquerading as insight. And this is not an isolated incident. Today, countless websites and apps, from tools like Booth.ai to popular social media filters, let users generate AI-modified images with a single prompt. While they promise creativity and convenience, they also raise serious concerns about privacy, consent, and the misuse of biometric-like features.
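To make the idea of a 'statistical guess' concrete, here is a deliberately crude sketch in Python (NumPy) with made-up numbers: if enough training examples happen to contain a dark spot at a particular position, a model that fills a covered region from training statistics will reproduce that spot, without ever having seen what lies beneath the covering.

# Toy illustration of "filling in" a hidden region from training statistics.
# Purely hypothetical numbers: real image models are far more complex, but the
# principle is the same; the fill comes from patterns in the training set, not
# from anything hidden in the user's photo.
import numpy as np

rng = np.random.default_rng(0)

# 1000 "training arms", each a strip of 10 pixels (1.0 = plain skin).
training_arms = np.ones((1000, 10))
# Suppose 60% of training examples happen to have a dark spot (a "mole") at pixel 6.
has_mole = rng.random(1000) < 0.6
training_arms[has_mole, 6] = 0.2

# A new user's arm is covered: pixels 4-9 are unknown (masked).
user_arm = np.ones(10)
mask = np.zeros(10, dtype=bool)
mask[4:] = True

# A crude "generator": fill the masked pixels with the training-set average.
fill = training_arms.mean(axis=0)
reconstruction = np.where(mask, fill, user_arm)

print(np.round(reconstruction, 2))
# Pixel 6 comes out darker than its neighbours purely because of training
# statistics, even though the model never saw under the user's sleeve.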
This is where the legal and ethical questions arise: if AI can fabricate features that resemble real biometric traits, how should the law treat such images, as harmless creativity, or as potential threats to privacy and evidentiary integrity?
Even if the mole was only a hallucination, it matters because it resembled a biometric marker: something unique and identifying, like a scar, tattoo, or birthmark. The danger is that such fabricated details can mislead in sensitive contexts, from digital evidence in courts to identity verification systems, or even profiling by surveillance technologies.
Globally, regulators have only begun to respond to the risks of AI-generated media. Since 2018, the EU General Data Protection Regulation (GDPR) has governed the processing of biometric data as a form of personal data and, when it is used to uniquely identify individuals, as “special category data.” The processing of such data is generally prohibited unless the individual provides explicit consent or another legal ground under Article 9(2) applies. Building on this foundation, the EU AI Act introduces a new layer of regulation that targets four types of biometric applications and classifies them by risk, ranging from prohibited, to high risk, to limited risk, depending on their purpose and context of use[4].
China has gone beyond mere guidelines: it now legally mandates that any AI-generated media be clearly labeled, both via visible watermarking and metadata tags. This dual requirement ensures that even if visible labels are removed, there is still an “invisible trace” of the synthetic origin of the image[5].
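By way of a simplified illustration of the 'invisible trace' idea, the following Python sketch (using the Pillow library) attaches a provenance tag to an image's metadata. The tag names are hypothetical, and real labelling regimes rely on standardised provenance metadata and robust watermarks that survive editing, which a plain, easily stripped text field does not; the sketch only shows the basic mechanism.

# Toy illustration of metadata-based provenance labelling (not a compliant
# implementation of any country's rules). Tag names below are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (256, 256), "white")     # stand-in for an AI-generated image

meta = PngInfo()
meta.add_text("ai_generated", "true")           # hypothetical provenance tag
meta.add_text("generator", "example-model-v1")  # hypothetical value

img.save("labelled.png", pnginfo=meta)

# Reading the tag back from the saved file:
print(Image.open("labelled.png").text)          # {'ai_generated': 'true', 'generator': ...}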
In January 2025, the UK created a new offence criminalising the creation of sexually explicit deepfakes without consent. It covers artificial images showing someone nude or engaged in sexual acts, whether made to humiliate or for gratification. Offenders face unlimited fines. The law complements the Online Safety Act, which already penalises the sharing of intimate images without consent[6].
In the United States, many states are now enacting laws to regulate the use of political deepfakes in election campaigns. For example, sixteen states passed laws in 2024 banning campaign ads or political communications that include deepfakes without proper labeling. Other states require disclosure if a campaign message includes manipulated media. These laws are aimed at preventing misinformation, impersonation, and misleading content in the lead-up to elections, though questions remain about enforcement and free speech trade-offs[7].
India, meanwhile, has taken some steps. The Bharatiya Sakshya Adhiniyam, 2023 attempts to modernize Indian evidentiary law by recognizing “computer outputs”, that is, any data stored or produced by a computer or digital device, as admissible in court. Section 63 explicitly allows such outputs, including those “stored, recorded or copied in any electronic form,” to be treated as documents, even without production of the original. This is a technologically expansive move, aimed at easing the evidentiary process in the digital age. However, Section 63 is not synthetically aware: it treats all electronic records as neutral and factual, without distinguishing between authentic digital records (like CCTV footage) and synthetically generated content, such as AI-created faces, hallucinated biometric features, or altered images. In the era of deepfakes, this distinction becomes critical. An AI-generated image of a person with a distinctive mole, for example, may appear convincingly real yet have no basis in reality. The law, as it stands, offers no procedural safeguards, disclosure norms, or tests of provenance to assess whether such evidence is synthetic or manipulated. This creates a loophole through which fabricated visuals may pass legal muster under the broad label of “computer output.”
Other recent laws offer only partial coverage. The Digital Personal Data Protection Act, 2023 defines 'personal data' broadly under Section 2(t) as any data that can identify an individual. However, it does not specifically address biometric data, and certainly not synthetic biometrics: digitally invented identifiers that mimic real ones, such as faces, irises, or fingerprints. Similarly, the Criminal Procedure (Identification) Act, 2022 expands the scope for collection and long-term retention of biometric and behavioural data from prisoners and suspects, but it remains silent on how to distinguish or treat AI-generated features that imitate such data. Even the Information Technology Act, 2000, while offering some help through Section 66D (which punishes impersonation via computer resources), does not grapple with the evidentiary status of AI-manipulated or AI-generated media. In a future courtroom, would a deepfake used to frame someone qualify as a valid “computer output” under Section 63? Or would it be dismissed as unauthenticated manipulation? As of now, there is no settled answer.
Nor can we ignore the unresolved question of copyright. Indian law ties authorship to a human creator, leaving AI-generated outputs in a grey zone. If a GAN produces an altered image, the individual's control over both the original and its derivatives remains unclear. This compounds privacy risks, especially when copyrighted works are fed into training datasets without consent.
In short, the law is still at a nascent stage when it comes to building a comprehensive framework for AI-generated features that mimic real-world identifiers, whether a digitally hallucinated mole, a generated voice clip, or a fake iris scan. These features may be synthetic, but they are visually or audibly indistinguishable from the real. The absence of clear rules poses a serious threat not only to fair adjudication and evidentiary integrity, but also to individual identity and privacy.
Lawmakers can hardly be blamed: the pace of technological development, especially in generative AI, is far outstripping the law's ability to keep up. But the mole incident is more than a curiosity. It is a wake-up call. Across jurisdictions, legal frameworks have yet to address synthetic biometrics, evidentiary standards for AI-altered imagery, or disclosure norms for generative content. Whether and how such reforms should be introduced is a question that lawmakers, regulators, and courts will soon be forced to confront.
Nevertheless, beyond the legal vacuum lies a social reminder: we hand over our images to AI systems far too casually in the name of viral trends. Whether it is a vintage saree look or a Ghibli-style art filter, every generated image is built on vast pre-trained datasets and, in turn, may contribute to future training. The picture we share today may vanish from a timeline tomorrow, but the data behind it does not. The AI models themselves are hardly to blame here; they are doing exactly what they were trained to do: generate outputs based on learned patterns. What really matters is human awareness: choosing carefully what we share, where we share it, and understanding the implications. Not every trend is harmless, and not every AI-generated output reflects reality. Yet there is also room for optimism: people are beginning to notice, to ask questions, and to have conversations about how their data is being used. These early discussions, whether in courtrooms, classrooms, or on social media, are the first step towards a more mindful relationship with AI. In the end, the choice of what we share, and what we don't, remains the strongest form of control in our hands.
The author is an Additional District and Sessions Judge, Haryana. Views are personal.
[1] Gemini AI vintage saree trend shocks woman with 'creepy' result. Here's what happened (https://www.hindustantimes.com/india-news/gemini-ai-vintage-saree-trend-shocks-woman-creepy-result-mole-jhalak-bhawnani-instagram-what-happened-101757921488425.html)
[2] Generative Adversarial Network (GAN) (https://www.geeksforgeeks.org/deep-learning/generative-adversarial-network-gan/)
[3] AI Hallucinations by Joanna Pena-Bickley (https://joannapenabickley.medium.com/ai-hallucinations-e79e20d1aebb)
[4] Biometrics in the EU: Navigating the GDPR and AI (https://iapp.org/news/a/biometrics-in-the-eu-navigating-the-gdpr-ai-act)
[5] China bans AI-generated media without watermarks (https://arstechnica.com/information-technology/2022/12/china-bans-ai-generated-media-without-watermarks/)
[6] Better protection for victims thanks to new law on sexually explicit deepfakes (https://www.gov.uk/government/news/better-protection-for-victims-thanks-to-new-law-on-sexually-explicit-deepfakes)
[7] To Craft Effective State Laws on Deepfakes and Elections, Mind the Details (https://www.techpolicy.press/to-craft-effective-state-laws-on-deepfakes-and-elections-mind-the-details/)