India’s Proposal for Age-verification Is a Blunt Response to a Complex Problem
Amber Sinha / Mar 27, 2026
Amber Sinha is a contributing editor at Tech Policy Press.

A girl looks at a mobile phone in Mumbai, India, on December 1, 2022. (Photo by Indranil Aditya/NurPhoto via AP)
The Grok non-consensual undressing incident earlier this year appears to have accelerated the conversation around implementing age-based restrictions on access to online content, in particular social media platforms. As of this writing, Australia and Malaysia have laws on the books that impose age-based restrictions on social media use. Several other countries, including Germany, France, Denmark, Pakistan, Turkey, the UK, Spain, and New Zealand, are preparing or considering similar legislation.
Although India has moved relatively slowly on the issue, recent developments suggest increasing regulatory momentum. Within the last month, the state governments of Andhra Pradesh and Karnataka have announced that they will ban social media access for children under the ages of 13 and 16, respectively, though no legislation has yet been introduced in either state legislature. Perhaps more significantly, recent statements by Union IT Minister Ashwini Vaishnaw, including at the India AI Impact Summit 2026, signal a shift toward stricter, nationwide age-based restrictions for social media and online platforms.
According to a report in the Indian Express, the government is considering a graded approach with different sets of restrictions for those under 12 years of age, those between 12 and 16, and those between 16 and 18. Under such a framework, regulatory oversight would follow a sliding scale, systematically easing as a minor matures. According to sources familiar with the discussions, officials are weighing several specific interventions to enforce these age-banded controls, including nightly “curfews” to block late-night logins, hard caps on daily screen time akin to models seen in other jurisdictions, and mandatory maximum-privacy defaults for younger users that could only be overridden via authenticated parental consent. Additionally, the ministry is mulling prohibitions on platform design elements deemed addictive, such as autoplay, infinite scrolling, and targeted push notifications for younger cohorts.
The privacy risks inherent to age verification
The proposals above lay out how access to online content may be regulated, but they all depend fundamentally on how an internet user’s age is determined. Age verification is the process of predicting or confirming an individual’s age on the internet. Typically, this relies on digital identification systems that establish a user’s identity to determine their age and then control access to online services based on eligibility.
In theory, age verification can be implemented in relatively privacy-preserving ways that provide unlinkability and unobservability. Such a digital age-assurance model would use advanced cryptographic techniques, such as zero-knowledge proofs, to allow individuals to prove their age eligibility without disclosing their birth date or any identifying information. By generating a unique mathematical proof for every interaction, the system prevents service providers from tracking user behavior or correlating different sessions.
Critically, the architecture is designed so that even the system operators or government entities are technically barred from seeing which websites a user visits, ensuring total browsing privacy. This "blind" verification model protects users even when interacting with high-risk or malicious platforms, as no retrievable personal data is ever shared with the website, leaving nothing for hackers to exploit.
However, this theoretical privacy-preserving model is rarely implemented in its entirety. Most age verification systems rely on intrusive methods because they are cost-effective and allow service providers to collect valuable personal data. In the absence of strict regulation, companies—particularly social media platforms with poor privacy track records—tend to adopt the simplest, most data-intensive implementations. A position paper by my colleagues at the EDRi network provides an analysis of the key methods deployed for age verification and the associated risks they pose.
Perhaps more significantly, even the most privacy-preserving systems present a slippery slope. Their protections are inherently fragile: a minor technical change in the future could easily bridge the gap between anonymous age verification and a user’s true identity.
Traditional offline age checks are naturally privacy-preserving, requiring only a brief in-person verification with no lasting data trail, and they are usually limited to very specific age-barred activities such as gambling or purchasing alcohol. In contrast, digital age-verification systems, such as those under the UK’s Online Safety Act, seek to mandate verification for a very wide range of online activities.
This shift enables an unprecedented level of surveillance that lacks the inherent privacy of physical ID checks. Such mandates threaten to strip users of the essential benefits of the internet—such as breaking social isolation and finding communities—without providing any reliable proof that these measures actually solve the harms they aim to address.
Aadhaar and the downsides of age verification
As with much of India’s digital policymaking, proposals for age verification ultimately converge on Aadhaar, India’s centralized and leaky biometric digital identity system. The Indian Express report mentions that a central part of the strategy involves the Unique Identification Authority of India (UIDAI). To address privacy concerns, the government is proposing a system of disposable virtual tokens. Under this approach, users would not share their Aadhaar number directly with websites; instead, the UIDAI would issue a temporary, one-time token confirming that the user meets the age requirement.
The reports suggest that the token may be destroyed after the transaction, ostensibly preventing the government from tracking which specific apps or websites a user accesses. However, this assurance rests entirely on current technical design choices that could be changed in the future or easily amended by regulation, rather than on making such protections mandatory in governing legislation like the Digital Personal Data Protection Act.
Even if such design safeguards are implemented as described, they do not fully address a deeper concern: once the underlying infrastructure is in place, its use will only expand.
While the immediate discussion in India appears to be limited to access to social media platforms, India has a long history of mission creep once a digital technology is introduced for one purpose. Already, as this discourse grows, a parliamentary panel has recommended introducing KYC-based user identification and age-verification mechanisms for social media, dating apps, and gaming apps to enhance online safety for women and children. The discussion around using Aadhaar demonstrates this mission creep better than any other example: a digital identification solution originally introduced and justified on the narrow mandate of subsidy distribution is now likely to be used to gate online access, effectively ending online anonymity in India.
This expanding scope raises not only technical and policy concerns but also constitutional ones. In the 2020 case of Anuradha Bhasin v. Union of India, the Supreme Court of India ruled that the right to use the internet is a fundamental component of the freedom of speech and expression guaranteed by Article 19(1)(a) of the Constitution. Although the ruling primarily addressed government-imposed internet blackouts, it established a critical legal precedent: any measures that limit online access must be necessary, legitimate, and proportionate to the intended goal.
Moreover, policy discussions around technical solutions often overlook how these systems interact with everyday realities. For instance, using mobile phone data to profile a user can be misleading in India, where phones are frequently shared or handed down among family members. Age-gating implemented at the device level would risk locking out adults or requiring repeated verification.
Finally, designing solutions around parental consent ignores entrenched social dynamics within many South Asian households. Being able to freely access online spaces goes a long way toward addressing the gender digital divide. In a reoriented system where repeated parental consent is needed, families might react by taking smartphones and computers away from girls entirely. Rather than narrowing harms, age verification measures may then deepen the gender digital divide, effectively limiting young women’s access to education, economic opportunity, and social advancement.