
When Wonderlands Become Rabbit Holes: The Urgent Need for Online Child Safety Regulation in India

Aditi Pillai / Jul 10, 2024

What if Alice following the White Rabbit down the rabbit hole in Alice in Wonderland had not been an accident, but had in fact been orchestrated by the White Rabbit? Replace the Rabbit with social media platforms and a troubling question emerges – are social media platforms incentivized to lead children into rabbit holes of harmful content in the interest of profit? And what implications would this have for the children of a country like India, where accountability concerning online child safety is stunted by lumbering legal machinery and culturally misplaced ideas of privacy and consent?

Through an exploratory study conducted as part of Citizen Digital Foundation’s work, I documented the nature and volume of harmful content on YouTube and YouTube Kids – particularly harmful content in vernacular languages. The study revealed that, in the absence of parental intervention and supervision, children are indeed exposed to direct and indirect harms on these platforms. This exposure is more pronounced in countries like India, where regulatory oversight is limited. While my research focused on harmful content on YouTube and YouTube Kids, I would like to take its findings forward and examine if and how policy and platforms in India fall short of facilitating a safe learning and entertainment experience for children online.

The consequences of the lack of a robust children’s data protection law – one that clearly delineates, defines, and classifies harms to children – can be particularly dire in the Indian context. Further, siloed approaches to policymaking that lack interdisciplinary stakeholder input have reduced the effectiveness of many policy measures. While existing regulations like the Protection of Children from Sexual Offences Act (POCSO) 2012 and the Digital Personal Data Protection Act (DPDPA) 2023 address certain aspects of child safety, platforms exploit loopholes in these Acts to deflect criticism levied against them. Sometimes these loopholes undermine the laws’ intended purpose. For instance, Section 9 of the DPDPA 2023 attempts to protect minors and their data, but it largely places the onus on parents to authorize their children’s online activities through ‘verifiable parental consent.’ However, this overlooks practical challenges, such as parents needing to provide consent multiple times.

There is also a need for India-specific platform regulations that account for lower digital literacy rates and a wider gendered digital divide. A 2023 study by Space2Grow, a non-profit organization that has done significant work on online child safety issues in India, revealed the following: (1) only 30% of children, 35% of parents, and 26% of educators are aware of digital safety; and (2) only 3% of parents in India use techniques to mediate safe online experiences for their children. While platforms like Instagram and YouTube have implemented child safety features and parental control mechanisms globally, their effectiveness in India remains to be seen. In my paper, I explore how content moderation often fails to account for India’s linguistic diversity and sociocultural nuances. Further, existing age-verification systems in India are easily circumvented. Similarly, another 2023 study by The Quantum Hub, a public policy think tank, revealed that 82% of children reported that their parents sought their help to navigate online platforms, and that 80% of children in India use shared devices, which increases the likelihood of exposure to inappropriate content. Research such as this must be factored into policymaking processes to ensure that regulations benefit end-users.

The advent of the attention economy has also had unprecedented impacts on online safety in India. In developed economies, platforms focus primarily on sophisticated targeting mechanisms and on maximizing engagement in markets already saturated with similar technologies. However, in India – a populous country with a rapidly advancing digital infrastructure – large international corporations are incentivized to tap into the vast market potential. While this may align with India’s aspirations for technological advancement and economic growth, it often comes at a significant cost: the push for user acquisition and engagement can lead to the extraction of resources from Indian citizens and can exacerbate existing concerns around data privacy and misinformation. For children, the effect is distinct: engagement-driven algorithms, recommendation systems, and feeds amplify harmful content precisely because it attracts more engagement. Some of the consequences of this engagement-driven model have already taken root among children around the world.

In 2023, Meta came under intense public scrutiny when an unredacted lawsuit revealed that CEO Mark Zuckerberg ignored executives who “called for bolder actions and more resources to protect users, especially kids and teens.” The same lawsuit brought to the fore internal documents on long-term retention which stated, “The young ones are the best ones. You want to bring people to your service young and early.” The documents reveal that Meta conducted extensive research into how young people’s brains function in order to increase usage of its platforms. As noted by Nicki Reisberg, a board member of the Alexander Neville Foundation and host of the podcast “Scrolling 2 Death,” what was most troubling was evidence of the researchers’ apparent intent when they noted that “the teenage brain happens to be pretty easy to stimulate.” Combine this with Meta’s June 2024 report that Facebook has been attracting its highest number of young adults in three years, and the threat such platforms pose to children becomes apparent.

This pattern extends beyond Meta to platforms like Snapchat and a plethora of Indian social media platforms that face far less scrutiny than their global counterparts. In India, children and other vulnerable groups bear a much higher cost for regulatory lapses in privacy protection – or for an overly strict approach – because platforms are neither incentivized to safeguard children nor penalized for neglecting to do so. Furthermore, when platforms do implement online safety or privacy measures in India – usually as a result of intense public and regulatory pressure in developed economies – these actions are often lauded as proactive self-governance and are not questioned further.

The power to implement effective safeguards lies with platforms, policymakers and researchers. However, the Founder Director of Pacta, a law firm and policy think tank that engages with social sector organizations, highlighted at a recent panel on online child safety that funding for research in this space is lacking in India. She said that, unlike the market-driven imperative for EdTech research during the pandemic, there has been little incentive for studies on online child safety in India, which are often resource-intensive. Consequently, this dearth of India-specific interdisciplinary research also impacts policy development.

Ultimately, online child safety must transcend market incentives and regulatory pressure and become a fundamental, non-negotiable priority for platforms. As India continues to formulate its digital regulations, the time is ripe for the country to prioritize children’s well-being and deploy regulatory measures grounded in interdisciplinary stakeholder collaboration across sectors. By placing child safety at the forefront of technological progress, India can set a new global standard for responsible innovation.

Authors

Aditi Pillai
Aditi Pillai is currently a researcher at Citizen Digital Foundation, a non-profit working at the crossroads of technology and society in India. Her work focuses on the intersection of policy with emerging challenges in online child safety and AI governance in the Global South.
