India is the world’s most populous democracy, and also one that is facing challenges. This week we focus on the Indian government’s efforts to create a bureaucratic apparatus to enforce an ever-growing number of requests for social media platforms to remove content deemed inappropriate for one reason or another.
And for this week’s episode, I’m joined by the author of a recent piece on this subject, Angrej Singh, who is interning with Tech Policy Press this summer. Angrej helped to pull together the panel of experts– all based in India– that you’ll hear from today, including:
- Neeti Biyani, Policy and Advocacy Manager, Internet Society
- Tejasi Panjiar, Associate Policy Counsel, Internet Freedom Foundation
- Apar Gupta, Executive Director, Internet Freedom Foundation
Note: The Internet Freedom Foundation recently submitted an analysis of the IT rules and a set of recommendations on platform governance towards establishing a regulatory framework that follows democratic and constitutional principles.
This podcast discussion touches on a range of issues, including:
- The political context and regulatory precedent for the 2021 rules implemented by the Ministry of Electronics and Information Technology and the recent proposed amendments to them.
- The dangers of weakening encryption with the goal of allowing identification of the ‘first originator’ of messages on end-to-end encrypted platforms.
- The challenges and dangers of establishing government bureaucracies to manage social media content moderation and ‘grievances’ that arise.
What follows is a lightly edited transcript.
So, I want to put a broad question to the group and each of you can think about how you might appropriately like to answer it. You know we’re going to talk today about questions around free expression, social media, India, these new IT rules, the amendments that have been proposed to them. But I wanted to start just with a bit of context. A lot of my listeners, of course, are in the US or in Europe; they may not be so intimately familiar with Indian affairs. How would you characterize this moment in Indian politics? What is the broader context that we’re working within when we look at these questions around Internet freedoms in India?
Thanks, Justin, I’ll take this up. So, as to where India is right now, I think a good empirical basis for people to understand it is just looking at the various indices, which are being released quite often, which are pointing towards a rapid decline in media freedom and democratic rights. And this is matched by a growing amount of digitalization. So, over a decade, there’s been a tenfold increase in the number of Indians connected to the Internet. There are close to 600 million Internet users, a large number of them on online platforms, but at the same time, you notice a worrying trend in terms of a decrease in the freedoms and liberty which are very essential to a functioning democracy such as India.
I can supplement that with maybe a broader socio-political context as well. India has seen a very rich history of where it used to be and where it’s headed. And we’re one of the most populated countries in the world, right? But that also means that we have a very varied, diverse demographic. And we have a lot of cultures and people from all walks of life; we have one of the highest numbers of spoken languages in the world. So, it’s a very interesting fabric that India represents. And it’s also one of the most dynamic, geopolitically strategic regions of the world. And until quite recently, India had a strong sense of upholding democratic values and freedoms, but very much like Apar said, we are moving closer and closer to our security and our privacy being questioned, especially as the lines between our online lives and our lives off the Internet are being blurred.
And along with that, we do see a stepping up of state surveillance and the motives behind why all of this is happening, maybe we can get into as we talk further, but completely agree with Apar, we’ve been through quite a big transition in terms of where we started a few years ago and where we are right now, with more ubiquitous use of the Internet, and people getting online, and information and data getting digitized.
It’s been now more than a year since the ministry put forward these new IT rules. And I might just ask a contextualizing question about those as well, that they didn’t come out of nowhere. What do you think was the mindset of the government in crafting these rules? I mean, I’m sure they didn’t set out to say, we want to create problems for free expression, we want to cause issues with digital rights. There’s a broader background. I mean, I felt in reading them a year ago, that there was a sense that they mimicked in some ways, some of the rules that had been around media ownership in India for some decades, that kind of thing. So, I don’t know if anybody can characterize where the IT rules came from, the stew of ideas and influences that led to those.
Thanks, Justin. The IT rules in their present avatar in 2021 do build on past mistakes, failures, and missteps. And I think India’s not alone. This larger question of where regulation lies on the Internet is still being worked out, I think better by others than by us. Now, having said that, the IT rules have a long and rich history in terms of being a point of contention, even being litigated. I was one of the lawyers in the case in which the rules were challenged in their previous version, made in 2011 — that’s called the Shreya Singhal case.
The Supreme Court actually gave a judgment on it and said that the safe harbor framework under Section 79 of the IT Act, under which the rules have been made and which only provides a process for notice and take-down, is activated only when an intermediary has actual knowledge — which means a government notice or a judicial order, rather than a user complaining to a platform, because that would be a content moderation practice. A user complaint wouldn’t constitute actual knowledge of the kind that strips immunity if the platform refuses a take-down demand from the government; those are distinct use cases.
Now, what’s happened over a period of time is that technology has become much more attached to our daily lives, the number of Internet users has grown in India, and the social impact and the individual rights of people are affected by online social media — that’s primarily the focus of the IT rules even now. And there’s been a reactionary attempt, looking at various issues, which emanates from a techlash and from a certain chauvinistic nationalism — these are Silicon Valley platforms, after all. And you find that each iteration which comes through makes the IT rules, within this notice and take-down framework, more and more severe.
Now, what happens is that in 2019 — I think towards the end of 2018 — they’re put up for consultation, and there’s a lull for about two years. Then, in February 2021, there’s a show on an online streaming platform dealing with political and religious themes. And certain people feel that online video streaming platforms such as Netflix or Amazon Prime are being given too much liberty, that there’s no regulatory framework. And they send a series of emails to the Ministry of Electronics and IT, which makes these rules under the notice and take-down framework for intermediaries.
Now, acting under that, the Ministry of Electronics and IT with the Ministry of Information and Broadcasting makes these rules, which have three parts. And the first is definitions. The second is on social media companies and intermediaries, but also now applies to messaging platforms, specifically weakening encryption. And the third part now applies to online digital news media as well as online video streaming. And it is open to question how, within a notice and take-down mechanism, you are regulating the last category.
I’ll conclude this with one sentence. It’s that the IT rules today, even if you don’t look at them through a strictly regulatory or legal lens, are essentially the principal regulatory instrument governing the experience of most Internet users in India.
Could I add to that, maybe? I want to take a couple of steps back from where Apar started. I think he’s added the meat that the bones needed, especially the context in which these rules came about, but I just want to take a couple of steps back and talk about exactly what the intention of this government is. It’s very clear from the various conversations stakeholders have had with the government that the government wants to regulate content available on the Internet.
Now, the first thing to understand about how the Internet functions is that it is a layered infrastructure. The infrastructure and layers of the Internet have nothing to do with the content available on the Internet. So, a really good analogy that I like to use is plumbing. If you need water in your house, it’s important to have piping, and taps, and all of that groundwork ready for you to receive fresh, clean water when you demand. I like to think about the layers of the Internet like plumbing and those faucets available in your home.
Sure, there are conversations happening across the world about whether big tech should be regulated, whether big tech should be broken up, the ills that big tech is perpetuating, but I want to say two things to that. Most of the things that we see amplified on a lot of social media platforms that governments across the world, including India, want to regulate — these are socio-political issues that existed way before the Internet. It’s not like child sexual abuse did not exist before the Internet, right? It’s just become an issue for which technology and tech platforms become an easy scapegoat.
The second thing I want to say is: if you want to regulate content, firstly, what that means for free expression and freedom of speech is a whole different conversation, and I don’t want to get into it right now. But what that means for content moderation is that very often governments, in their endeavor to moderate content online, end up knowingly or unknowingly impacting the infrastructural layers of the Internet. And that’s exactly what Apar talked about.
The moment you try and regulate, say end-to-end encrypted platforms, you are giving up not just individual security, but you’re also risking national security as the encryption that’s available on our smartphone is linked to the encryption that keeps all of us safe and that keeps our critical infrastructure safe. You’re targeting the privacy and rights of millions of users on the Internet. And there is absolutely no assessment that’s been conducted before these rules were notified. Has there been an impact assessment conducted? Or has there been an open multi-stakeholder consultation process? No.
And I think these are the things we need to think about, that the government needs to talk to multiple stakeholders who can inform these processes, can feed into these processes, and make them robust, and ensure that they don’t have effects like risking the privacy and security of millions of people. And with extraterritorial effects, it’s not just limited to Indian borders.
Just to add to that, the IT rules were brought in, in a very rushed manner, I would say. The one event that really ignited all of this was the backlash against the Amazon Prime Video web series that Apar was talking about. Because a scene in the show was labeled as religiously offensive, the makers of the show did consequently modify it, but what came after that was a lot of requests to the ministry to regulate online content.
And again, this also came from widespread requests to regulate hate speech on the Internet and its harmful effects. But like Neeti was saying, a lot of these rules came without public consultation and didn’t follow the transparent process they needed to. As a result, many of these concerns — which may well have been the intent behind releasing these rules — are not reflected in practice. For instance, the compliance reports released by these social media platforms don’t include numbers on the hate speech content they’ve taken down. This is clearly a loophole — a reflection of a process that was not transparent and not well thought out.
And I suppose, instead of the multi-stakeholder process you would have liked to see occur, you’re instead seeing these amendments put forward, which would appear to give the government more powers and create even more of a bureaucracy around this question. Can someone — I don’t know who would like to do it — explain, just for the sake of my listener, these amendments and this grievance committee they create? What is happening here? What can we expect to see occur? What type of bureaucracy gets invented in India to manage questions of social media content and its appropriateness?
Justin, I think a good way to look into the future is to see how the government has regulated other forms of media, because they’re viewing the Internet as just another conventional medium, such as radio or broadcast television. And here India has had government committees which have been established. Firstly, these government committees lack independence and autonomy: they’re composed of bureaucrats doing the day-to-day work, directed by the ministers who head these specific ministerial departments. And the committees themselves — it’s important to note — are not staffed by people who may be trained in law, in aspects of media freedom, or in assessing the merits of a claim.
Another thing which will happen is that, given that they will be appointing people without training, they will not have proper procedures; they won’t sit or decide cases on a day-to-day basis. So, you take this existing institutional apparatus and simply transplant it into hearing appeals of people who are dissatisfied with the content moderation decisions of a platform. That not only includes notice and take-down requests — such as a user filing a complaint asking that somebody else’s content be taken down — it may also include the platform itself removing a user’s content, where the user will be able to appeal to the government. But again, the government not as a judicial body, but essentially as an extension of a ministerial office.
The sheer number of Internet users in India who access social media platforms is nearing 300 to 400 million for a platform such as Meta’s Facebook; for WhatsApp it’ll also be applicable — in cases of disabled accounts, the numbers will be somewhat similar. In terms of YouTube, I think a lot of people in India use it as a search engine, but some people may still have logins. But if you just take Facebook as a representative example — we were looking at the transparency reports — it’s about 30 million pieces of content removed every month, about three crore, something like that, according to my calculation.
But even if a small fraction of those are appealed by users to Facebook, and they’re not happy with the content moderation decision, and then file an appeal further before this grievance appellate committee — which is what is proposed under these new rules — what ends up happening is that the ministry, with a lack of capacity, without training, without clear rules for transparency on whether it will publish its orders, gets close to thousands of appeals on a conservative estimate on a monthly basis from just one platform. Think about all social media.
Now, in a situation like that, of course it won’t be able to adjudicate everything. There won’t be any practice guidance within the rules as to how it picks and chooses cases — unlike a body such as the Supreme Court in the United States, which does pick cases on the basis of their impact on the law — so it’ll then arbitrarily pick and choose cases. So, you’re seeing failure at multiple levels, even in terms of ensuring the promised outcomes under these regulations. The government wants to ensure there’s a free and fair digital ecosystem — one of the claims is that this will ensure the arbitrary decisions of platforms are checked, right? I don’t think that’s possible, given how it’s been done in the past for the television or the radio sector in India, and how it’s proposed under these rules.
Adding to what Apar has already said, yes, the GAC raises multiple concerns, and at multiple levels. He already spoke about the infeasibility and impracticality of creating such a body, given the large volumes of appeals filed against content moderation decisions taken by intermediaries. And we also have to remember that there’s now a very contracted timeline for intermediaries to address these grievances — so that’s the impracticality of such a body.
Now, coming to the constitutionality of such a body: it essentially does not have any legal basis, because such adjudicatory bodies can only be constituted by the legislature. And if these bodies are constituted through subordinate legislation, it can only be done if the legislature permits the executive to do so. The IT Act does not contemplate or permit the union government to appoint an adjudicatory body to decide on permissible content. What this does, essentially, is that through the creation of the GAC, bureaucrats become the arbiters of our online free speech.
And also, with this executive-constituted committee, basically what will happen once they start, like Apar said, is that they can choose content — including content that has not been flagged by users, right? They can do so on their own as well. What this will lead to is an incentive for social media platforms to suppress any speech that may not be palatable to the government — for instance, controversial political content, right? So, that is another way it can really significantly harm users’ rights.
You fear that it’ll simply be picking and choosing its cases, not based on a structured effort at creating precedent, but rather at pursuing its own ends. Is that right?
We can go back to the IT rules prior to these amendments, right? The Bombay High Court and Madras High Court had stayed particular provisions of the IT rules — specifically rule 9(1) and rule 9(3), contained in Part III of the IT rules — which subjected any content published by publishers of digital news media or OTT platforms to government oversight. Now, these rules were stayed for the very same reason we’ve critiqued the proposed creation of the GAC: they make a government-appointed committee the arbiter of permissible speech, which could censor content on grounds extraneous to Article 19 of the Constitution, without providing any procedural safeguards to protect the fundamental rights of citizens. So, there is a very real possibility of ambiguity in enforcement and an absence of regulatory clarity emerging from court challenges to the constitution of the GAC.
Also, the draft amendments to the IT rules only define the GAC and the functions it will undertake. The rules that will say what the composition of the GAC will be, what the qualifications to be a member of the GAC will be — all the rules that will define the factors that have a significant impact on how independent or autonomous the GAC actually is — are yet to be released. So that is another reason we’ve critiqued the functioning of the GAC: it can lead to arbitrary enforcement, also because of the assured volume of appeals arising from the millions and millions of social media users in India.
So, I’m curious to know whether there is any public concern over these IT rules, and how they impact the public, right? What are the implications for regular folks? And what’s at stake for them?
So, I think for me, the most concerning clause contained in these IT rules is the mandate to identify the first originator on end-to-end encrypted messaging services. As I said, the lines between our lives online and offline are blurred; we don’t lead binary lives. And everything that we do is intertwined with the devices we use and the ways we connect with other people over the Internet. So, right from pictures of my two-week-old nephew to my very sensitive, personally identifiable data — from my national identification documents to my transcripts from school — I have all of this information online. There’s very little about me that I don’t have online, either in a cloud space or on a messaging platform, information that I may have shared with my friends and family.
Now, anytime that someone wants to talk to me, exchange personal information, get some information about me or vice versa, we are going to turn to an end-to-end encrypted messaging service. The few end-to-end encrypted messaging services that are used most commonly in India include Meta’s WhatsApp, Signal, and Telegram. And the functionality, or rather the technology that they use, basically means that nobody except the sender and the receiver can access the information shared on a chat. Or if it’s a group chat, then it’s extended to those people in that group.
What the government suggests is that an end-to-end encrypted messaging platform needs to either use some way to tag the information of the person sending a particular message on the platform and include that with their identity, or use some hashing mechanism against which they should be able to match the message sent. So, basically, if I were to send Apar a message saying “Hi there!”, that is going to generate a particular hash that will be very different from the hash generated if Apar sends me a message saying “Hey there!”, even though the meaning of these messages is the same.
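To make the hashing point concrete — the rules don’t prescribe any particular scheme, so the use of SHA-256 below is purely illustrative — here is a small sketch of how two nearly identical messages produce completely unrelated digests:

```python
import hashlib

def message_hash(text: str) -> str:
    """Hex digest of a message under one illustrative hashing scheme (SHA-256)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

h1 = message_hash("Hi there!")
h2 = message_hash("Hey there!")

# The two messages mean the same thing, but their digests share no structure,
# so a hash-matching system cannot recognize them as related.
print(h1)
print(h2)
print(h1 == h2)  # False
```

This also illustrates the brittleness of the approach: any trivial change to a message — an added space, a different greeting — yields a wholly new hash, so matching against a database of hashes cannot catch paraphrases, while still obliging the platform to compute and retain fingerprints of private messages.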
So, before we get into the technicality of how this happens, let’s just say that if this identification of the first originator of information on end-to-end encrypted platforms is enabled — which is in any case not feasible; it’s not feasible to use any of the methods I mentioned to enable this identification — it’s going to be devastating for security reasons and for privacy reasons, and it’s going to break the single most important technology that keeps us safe online, which is encryption, especially end-to-end encryption. So, I’m going to stop there and turn to Tejasi and Apar.
You asked whether these rules were critiqued in India. I would say that these rules were critiqued internationally. Before I go into encryption — definitely one of the most concerning provisions — I’ll speak about one more concerning provision. But first, I’d like to quote a statement from three Special Rapporteurs of the United Nations, who wrote to the union government after these rules were notified. And I quote: “these rules do not meet the requirements of international law and standards related to the right of privacy and to freedom of opinion and expression.”
Also, they’ve been condemned by international digital rights organizations with experience in global platform regulation policies and practices. Access Now has stated, and I quote: “new rules expand on alarming human rights infringing measures”. And the Electronic Frontier Foundation has said, and I quote: “these rules threaten the idea of a free and open Internet built on a bedrock of international human rights standards”. So, when these rules were notified, they obviously received a lot of criticism. And one particular, very prominent feature that was new here, I would say, was the fresh classifications — mainly social media intermediary and significant social media intermediary.
These were two new classifications, and the threshold for a social media intermediary to be considered a significant SMI was 50 lakh, or 5 million, registered users. Now, these categories bring a high level of government discretion in determining which platforms need to comply with what regulations, and at what level. Also within that provision, there’s rule six. Under rule six, the government may, by order, require any intermediary to comply with the obligations imposed on an SSMI if it satisfies the threshold of a “material risk of harm”. Now, this threshold of a material risk of harm is very, very broad, and it enables the central union government to enforce discriminatory compliances.
I’d just like to add to Tejasi that in addition to the parts of the rules which have been condemned in international forums, they have also been challenged in close to 17 cases in different High Courts, which the Supreme Court is now being asked to club and transfer. Now, within these 17 cases, the primary part which has been challenged has been Part III of the rules, which deals with the regulation of online media platforms — which may be something like Scroll.in or The Wire, or something like the Daily Beast or HuffPost in the United States — as well as online streaming websites, such as Netflix and Amazon.
Interestingly, the rules have not been challenged by any online streaming platform, but the challenges by themselves have resulted in court orders in which different High Courts have in fact issued injunctive relief against the application of the rules — not only for a specific party, because we have a thing called public interest litigation, under which parts of the rules have been stayed for publishers and digital news media platforms generally, specifically on Part III. And in addition to that, I think one other High Court — the Madras High Court — has indicated that the power created in Part II of the rules, with respect to certain social media regulation, can cause a chilling effect.
Now, all of this is important to consider, given that these rules — which have been criticized internationally, litigated, and injuncted in a limited manner by courts — are now being sought to be amended. One would imagine that the amendments would cure these legal deficiencies. However, none of the amendments seek to address them. In fact — and the civil society response seems to be this — the amendments further undermine digital rights. So, it’s actually doubling down on injuries to privacy and freedom of speech and expression. And I think that’s a very worrying sign in terms of the intent of rule-making on platforms in India.
Could I also jump in very quickly to say, completely agree with Apar. I think when I saw that there had been amendments made to the IT rules, I was quite hopeful. And then, there was this one moment when, as an organization, we saw that the amendments had been published and then, half an hour or 40 minutes later, the amendments had been pulled down. And then the government came back with those amendments and, as Apar said, none of the concerns raised by civil society, technologists, and digital rights organizations had been addressed. In fact, what the amendments do is add more ambiguity to the IT rules, because very, very broad, unspecific provisions have been added.
And what they essentially do is make private actors such as social media intermediaries liable for a lot of things, including respecting the fundamental rights of Indian citizens. And in my mind, that’s the same as asking private spaces to conduct surveillance for the state, which is technically not legal or appropriate. And it is not the responsibility of a private actor to comply with something of this nature. So, the amendments don’t do anything to allay any doubts around the IT rules.
So, now I’m curious about what you have observed about these platforms in response to these new rules. Has there been any backlash, or any pushback from platforms?
As to the platforms’ behavior in India — I think they’re not a homogeneous block, but largely they act as one. They essentially conduct most of their work in India through industry bodies and file their submissions through them. Secondly, the ministry itself doesn’t make the submissions public, so even though some public consultation has been concluded, we don’t know what their official response on record has been. However, at the same point in time, we do know that platforms are not happy with the IT rules and the changes being proposed, though any press reportage has focused more on the compliance window or the criminal liability provisions.
So, it’s more in terms of: yes, we’ll comply, but we have a problem with it — because ultimately they are businesses, right? And in India, if you just go by the Wall Street Journal’s reporting on what resulted in a certain partisan bias in Facebook’s moderation practices, India Public Policy head Ankhi Das’s email actually says, sorry, we can’t do it because it would impact our business operations and the safety of our employees in India.
And I think that gets to the root of the matter right there. So, I want people to really think about it, especially those in Europe and the United States: platforms in India may not take the same policy positions they take in those countries. For instance, Facebook may have supported net neutrality in the United States, but actually wanted to roll out Free Basics, a zero-rated platform, in India. And it’ll be the same with other companies which operate large businesses in India, because India, to some degree, is not the country in which they are headquartered — where they’re open to higher degrees of public scrutiny, accountability, pressure from their engineers, etcetera, etcetera.
Maybe we can make that question more specific, in terms of Twitter’s lawsuit yesterday. Clearly this is, as far as I’m aware, the first big pushback from a platform against some of the recent behavior — some of the take-downs and the trouble they’ve gotten into with different civil society groups and journalists. Do you expect more of these legal challenges from the platforms? Are you observing them beginning to take — at least in the case of Twitter, what do you make of it — a slightly more aggressive stance?
So, well, in Twitter’s case, the context is that Twitter recently moved the Karnataka High Court, challenging the legality of the government’s content take-down orders. Now, this follows Twitter’s actions last month, where it withheld several tweets and accounts in response to legal requests made by the government. And again, if we go back to early 2021, this is in the context of the farmers’ protest. Twitter was ordered to block a large number of accounts that were posting content about the farmers’ protest. Twitter had blocked, I think, these large numbers of accounts on February 1st, 2021, and by evening they restored those accounts, following which Twitter received warnings from the government of criminal proceedings in case of non-compliance. So, a lot of these platforms are complying with these rules because there are consequences to non-compliance: they could lose their safe harbour immunity, and criminal proceedings could be initiated against them.
But we have to see to what extent they’re complying. For instance, if you look at Facebook, we know that in its compliance report, it talks about taking down over 90% of identified hate speech content, whereas the Frances Haugen disclosures clearly reveal that Facebook internally admitted to only taking down 3 to 5% of hate speech content. Similarly, in the case of Twitter: while it has in past years been complying with these take-down orders, it has now pushed back and is basically seeking clarity on these government take-down orders.
So, because of the criminal proceedings and the loss of Safe Harbour immunity that come as consequences of non-compliance, platforms have been complying with these rules. But it remains to be seen, especially after Twitter bringing this case against the government, what follows. And I mean, it's difficult to say if such an aggressive stance, if you can classify it as aggressive, will be followed by other platforms.
Just two observations. I think, in my mind, the first big platform to move to court against the government was WhatsApp, because it filed the first lawsuit against the traceability mandate, and soon after, Facebook filed its preemptive lawsuit against the government citing the same things, even though at that moment, and even now, Facebook does not have universal end-to-end encryption on its Messenger, so that was a preemptive lawsuit.
These two lawsuits came very close to each other, in terms of chronology, but India is a huge market, so it is difficult to predict the behavior of large platforms, especially as they try to negotiate with the government and see what flies and what doesn't. And I completely agree with Tejasi that it's going to be one day at a time as we see how platforms respond to the government's take-down notices or to developments in the IT rules, because these rules are nowhere close to being finalized. There are still, as Apar mentioned, a bunch of lawsuits against the IT rules pending before the Supreme Court now, so we'll just have to see, I think.
If there are policy leads or senior executives from these platforms listening to this podcast, what would you want them to understand about the current situation? What would you want them to do? What would you hope that they would do in this context?
Justin, thank you so much for posing that question. I think within these platforms, and among people who may be in government, there's a larger community of engineers who are deeply passionate about the values of Internet freedom, about the promise of technology, not only within the United States but globally, to improve the lives of people. They see digitalization as something which helps bring financial inclusion and supports citizens' political rights more broadly. I think there is a conscience to the people there.
And to a lot of these people, I would say: pay close attention to India right now. There's a lot of comparison between India and China, where we've been pitted as neighbors and rivals, and much more recently, the Chinese have possibly been an inspiration for India in terms of it wanting to control the Internet more. But I think we are moving more towards a regulatory climate of threat, intimidation, control, and centralization of power, which we've seen in Russia and Turkey. And that's what I would want people to do: just pay closer attention to what's happening in India. I think by itself that may lead to a lot of positive action, criticism, and thoughtfulness around what products are being built and what policy proposals are being placed in international forums.
So, Justin, on behalf of the organization I represent, we believe in the power of community. I think we are stronger together and I think there is a real opportunity to amplify the messages that we’ve heard today. The Internet has made wonders happen. It’s changed our lives and it’s such a great force for good. And the Internet should not be confused with Facebook. I’m not saying Facebook is not an essential service for a lot of small businesses and individuals, what I’m trying to say is that the Internet is essentially an infrastructure. It facilitates so much more than just conversations. It facilitates business, health services, critical infrastructure. There’s no end to what the Internet facilitates.
And going back to what I said about those layers of the Internet, I think what we need to be extremely sure of, within this community that I'm mentioning, is that those layers of the Internet should not be manipulated, they should not be put at risk, and they should not be interfered with. The Internet is a great, decentralized, permission-less space that helps us innovate and helps us all come together. And therefore, the content layer of the Internet must be separated in people's minds from the infrastructure layer. And while every government is free to regulate big tech and large social media platforms, intentional or unintentional consequences for the Internet as an infrastructure must be guarded against.
Just adding to what Apar and Neeti have said, I'm directing my suggestion to the people sitting in these social media platforms, the policymakers: a lot of the conversation around these rules, and about social media platforms and intermediaries in general, has been around censorship or content take-downs. So, a step that promotes transparency will go a long way. And a very specific example of that is that, so far, there has not been a lot of openness when it comes to the processes or the algorithms these platforms follow for proactive take-down of content.
So, some platforms like Facebook and Instagram use machine learning technology that automatically identifies content, whereas Google relies on automated detection processes. WhatsApp has been the only messaging platform, I would say, that has released a white paper discussing its abuse detection process in detail and disclosing how it uses machine learning. So, while WhatsApp has made an attempt at transparency about how it proactively takes down some content, I think the lack of human intervention in how these kinds of content are monitored and taken down is problematic. So, a further step towards transparency will go a long way.
Thank you to each of you for joining us today.
Thank you for having us, Justin. It was a great conversation.
Thanks. Thank you for having us.
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.