India and Taiwan's Divergent Paths to Combat Online Disinformation

Amber Sinha / Aug 8, 2024

Amber Sinha is a Tech Policy Press fellow.

Background

Since 2018, the Indian government has pursued a series of policy measures to exercise more control over internet intermediaries, particularly large social networking platforms. The primary push has been to question the safe harbor protections currently available to intermediaries and to make those protections conditional on a set of new obligations.

In July 2018, Ravi Shankar Prasad, the IT Minister of India, in a speech in the Rajya Sabha, warned that social media platforms could not “evade their responsibility, accountability and larger commitment to ensure that their platforms were not misused on a large scale to spread incorrect facts projected as news and designed to instigate people to commit crime.” More ominously, he said that if “they do not take adequate and prompt action, then the law of abetment also applies to them.” The minister was speaking in response to the rising incidence of mob lynchings in India, ostensibly triggered by misinformation inciting violence spread on social media and messaging services.

The culmination of this expression of regulatory intent was the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (IT Rules), which significantly re-orient the relationship between internet intermediaries operating in India and the Indian regulatory state. Shortly after the notification of the IT Rules, I worked with colleagues at the Centre for Internet and Society to publish a detailed note examining the constitutional validity of these rules, and we found them wanting on several fronts. Over the last three years, the legality of the IT Rules has been contested in Indian courts, currently through a set of petitions, one of which is before the Bombay High Court.

The litigation surrounding India’s Fact Check Unit

As legal challenges to different aspects of the IT Rules play out, a narrower but more urgent question has emerged, with significant implications in an election year. Last year’s amendments to the IT Rules authorized the creation of a Fact-Check Unit (FCU). The issue assumed greater urgency in the run-up to this year’s general elections in India, raising fears that the FCU could give the government the power to put its thumb on the scale of information available online in ways that unfairly benefit the ruling political party. The Supreme Court stayed the operation of the FCU just in time, less than a month before the elections commenced, and it remains on hold until the Bombay High Court rules on its constitutionality.

The elections are now over, but arguments in the case are ongoing. This is a good time to examine the FCU, not only in terms of what it means for India, but also in terms of how to view such measures in a digital public sphere where regulatory interventions against disinformation are increasingly being commandeered by governments to censor or restrict speech.

The 2023 amendments to the IT Rules established the FCU to identify “fake or false or misleading” online content “related to the business of the Central Government” and to require intermediaries to remove it. This was done through Rule 3(1)(b)(v), which imposes due diligence requirements on intermediaries. The earlier version of the rule required them to inform users of their obligation not to share any “patently false or misleading information.” The amendment expanded this obligation: intermediaries must make “reasonable efforts” to ensure that their users do not upload or transmit any content that the FCU has identified as “fake or false or misleading.” Once the FCU flagged content as fake, false, or misleading, intermediaries were required to take it down. As with other obligations under the new IT Rules, failure to comply carried the penalty of losing safe harbor, the legal immunity intermediaries enjoy for third-party content that violates the law.

The rise of online disinformation and its impact on deliberative democracies is among the more consequential socio-political and technological problems of our times. However, a government-controlled body acting as the primary arbiter of online news about the government itself is prima facie an act of bad faith: the government can unilaterally determine what online content about itself should remain available. The measure inspires further distrust because there are no safeguards to ensure independent, unbiased decision-making; no redressal mechanism allowing intermediaries or users to challenge the FCU’s decisions; and a scope left intentionally broad, with no attempt to define the phrases ‘fake, false, or misleading’ and ‘business of the government.’

A comparison with Taiwan’s ‘whole of society’ approach

To understand where this regulatory intervention falls within the global landscape of responses to online disinformation, it is instructive to compare it with the Taiwanese government’s measures over the last few years. In Taiwan, a key techno-political challenge of the last decade has been responding to Chinese propaganda. Both the public and the government now view disinformation as an existential threat from China, and the government has responded with a ‘whole of society’ approach.

One prong of this strategy is to increase transparency through a host of initiatives, including the participation of the innovative g0v (gov-zero) project. Members of this community joined the government to create the Public Digital Innovation Space (PDIS) and launched the consensus-building project vTaiwan, which uses the open-source opinion-mapping tool Polis to give public forum discussions a structured way to inform policy decisions. This digital infrastructure proved vital in responding to disinformation: an unconventional team inside the government, including graphic designers and comedy writers, created memes that directly rebutted fake news.

The Taiwanese response to disinformation also involves an active role for the state, and it raises questions about the appropriate role of governments. One may ask why the Taiwanese approach was hailed by the liberal media while the FCU in India has come in for criticism, when both involve state actors responding to online disinformation. Three differences are critical here. First, the Taiwanese model of rebutting fake news and creating counter-narratives stands in direct contrast to India’s more authoritarian response of censorship and content takedowns. Second, the Taiwanese response evolved bottom-up, with a citizen and civil society campaign being adopted and further empowered by the government. Such concerted efforts can work only in an environment of high public trust, and by involving more people, they further reinforce that trust. Third, this coordination between the public and the state must also be viewed within the broader context of Taiwan’s peculiar geopolitical position and the constant existential threat it faces from China.

Conclusion

These competing regulatory models illustrate a divide in technological governance as governments evolve strategies to deal with online harms. An important meta-question that regulators constantly grapple with is what level of intervention state actors should exercise in pursuit of their public policy objectives.

The Taiwan example shows how state actors can be important stakeholders in directly combating online disinformation, particularly in the face of information warfare campaigns by an adversarial foreign state. Its bottom-up approach and inclusive practices of active engagement with civil society offer a useful template for other countries, especially those of comparable size dealing with information warfare.

For countries with much larger and more complex information ecosystems, like India, adopting a ‘whole of society’ approach is far more challenging. But an FCU-style response, in which a state-controlled body is the sole arbiter of truth for a society’s public discourse, is deeply misguided for a nation of any size. One can only hope that the judiciary finds it an unreasonable restriction on the right to freedom of speech and expression.
