
Secure Messaging Apps and Election Integrity

Justin Hendrix / Oct 20, 2024

Audio of this conversation is available via your favorite podcast service.

Along with Sam Woolley, Mariana Olaizola Rosenblat and Inga K. Trauthig are authors of a new report from the NYU Stern Center for Business and Human Rights and the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin titled "Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation." On the eve of Global Encryption Day, I caught up with them to learn more about how political propagandists are exploiting the features of encrypted messaging platforms to manipulate voters, and what can be done about it without breaking the promise of encryption for all users.

What follows is a lightly edited transcript of the discussion.

Inga Trauthig:

I'm Inga Trauthig, Head of Research at the Propaganda Research Lab at the Center for Media Engagement here at the University of Texas at Austin.

Mariana Olaizola Rosenblat:

I'm Mariana Olaizola Rosenblat. I am a policy advisor on technology and law at the NYU Stern Center for Business and Human Rights.

Justin Hendrix:

And the Stern Center for Business and Human Rights I've had the opportunity to work with in the past, and I am a graduate of the University of Texas at Austin, so I'm feeling some general allegiance to both your institutions at the moment. I'm looking forward to discussing this report, Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation, just released this month. I want to get into some of the details of what you've found in both your qualitative research and the survey work you've done, and also how you've thought through some of the policy questions around encrypted messaging applications.

I feel like in my own study of these issues, this is just one of those topics that comes around again and again: concerns over encryption, and what to do about the types of harms that occur across encrypted communications applications. You can think about it a hundred different ways. At the end of the day, there are always trade-offs, and it seems like you've discovered some of those here. But I want to ask first, and maybe this is a question for you, Inga, since I know you led some of the interview work that went into this report: what are the types of abuse and the types of nefarious uses of encrypted messaging apps in an election context that you've observed in this cycle?

Inga Trauthig:

What we have witnessed, and what the propagandists we interviewed have told us about, is a broad variety of different types of abuses, and some of them "just rely" on the features the apps already have. So something like the forwarding feature is still one of the basic tools the propagandists exploit, and it works really well on messaging apps. Other features are part of the apps, but they are exploited in specific ways by the propagandists. What Mariana and I discovered when we were working together, as I conveyed the main insights from the interviews to her, is that what really defines 2024, and how propagandists use encrypted messaging apps, is what one propagandist I interviewed in India earlier this year called the broadcasting toolkit for messaging apps. At first I thought he meant the broadcast feature, where on WhatsApp and other messaging apps you can send one message to up to 256 contacts.

But what he actually meant was a system that is not static, but one that propagandists and other consultants pitch to politicians or use themselves for manipulative purposes, and that they constantly refine. And it has two levels. The first is that you need a solid network of distribution. So you need to be in groups, you need to create new groups, you need to create channels, all of this to constantly send information into the messaging app ecosystem.

And then the second layer, on top of this network of distribution, is that you use these features. I've already mentioned the forwarding feature. There's also the story feature, WhatsApp calls, status updates, bots, and other things in messaging apps that work really well for propagandists. They can capitalize on that sense of intimacy, getting directly into people's phones and having those messages pop up on their phone screens, while using features that can also promote virality, which is something we more typically associate with open social media platforms like Facebook, Instagram, and X.

Justin Hendrix:

What are you learning about how some of these tactics have changed over time? India is a perfect example. The BJP's use of WhatsApp in particular has been relatively well documented by academics and journalists, including yourself. What have we learned just recently about how various messaging apps are being used in an election context? Are there new phenomena that we're seeing?

Inga Trauthig:

So the new phenomena go partly hand in hand with what we call the feature bloat of messaging apps. So for instance, using WhatsApp Status as stories to get a certain topic or issue trending, not just coming from the accounts of propagandists and their groups, but also getting other users across the WhatsApp platform to repost something, creating a sense of urgency. That's relatively new simply because the feature is relatively new.

It's similar with WhatsApp Channels, a relatively new feature that has been exploited, for instance, in Mexico during the election this year, where some propagandists managed to get verified channels for reputable news outlets and could then use those channels to distribute manipulative information. So some of it naturally comes with new features on messaging apps. But in general, one thing I would say as well is that one of the propagandists I interviewed stated, "The more people you can reach with one message, the better, always," which seems like basic wisdom, but it is true, and it helps a lot of propagandists that so many of the apps, like Telegram, can have groups of up to 200,000 people, with channels basically unlimited.

And then other messaging apps used to have tighter confines: a couple of years ago, WhatsApp groups only allowed 256 people. Now they allow 1,024. So these are some of the new things we are seeing, which go hand in hand with how the apps develop, but also with how the propagandists are adapting. And there's something interesting here in terms of some of the apps being better than others at reining in some of the manipulation. For instance, some of the automated attempts by propagandists on WhatsApp to create groups have been clamped down on. Not on other apps: Telegram doesn't have similarly strict or vigorous efforts at clamping down, so you can still create these massive groups. You just need to wait a bit longer to add group members. Instead of adding a new group member every second, you wait ten seconds or so to add a new group member. So over the last year we've seen this ongoing cat-and-mouse game, and it's still ongoing.

Justin Hendrix:

One of the interesting things about this report is that you've combined those qualitative interviews in different parts of the world with survey data. Mariana, I want to ask you about what you've learned in the survey in particular. One thing that I know you found is something that we found in qualitative work that we did for a report at Tech Policy Press with my colleagues Cooper Quintin, Caroline Sinders, and others: this sort of disparity between people's perceptions of security in encrypted messaging apps versus the actual function of those applications. What else did you find that is germane specifically to the context of elections and election mis- and disinformation?

Mariana Olaizola Rosenblat:

Right. So Inga discussed the supply side, which is what propagandists are doing to try to pump out misleading and often harmful content related to elections, trying to manipulate people. And the survey looked at the demand side, or the recipient side: how do users receive these communications, and what effect does that have on their opinions? So we asked users, first of all, how many of them had received political content on messaging apps. We looked at four messaging apps that offer some level of encryption: WhatsApp, Viber, Telegram, and Signal. And we also allowed them to state whether they used other messaging apps. A huge portion of them, 62% across nine countries, said that they received political content frequently on encrypted chat apps.

And then we asked, "Okay, of those who received political content, did you receive that content from people and accounts that you chose to follow and be in touch with, or from strangers, or both?" And over half of those who regularly received political content, 55%, said that the political content came from people or accounts that they do not know and did not choose to follow. So this belies the purpose of messaging apps as services for communicating with acquaintances and loved ones, and shows their transformation into more of a social media-type service.

And then further we asked, "Okay, of those of you who received political content from strangers, how many found that content influential?" And 52%, so more than half of them, said yes, they found that political content significantly or somewhat influential. This series of questions shows that not everyone is swayed by political content spread by a propagandist, because again, these are people the recipients don't know. But they are also open to that information, and I think that shows that this is an important topic. It's not just that there's a supply issue; on the demand side, users are also paying attention, and it has an impact on elections.

Justin Hendrix:

This supply-demand kind of equation always seems to be part of the discussion when we talk about political manipulation, misinformation, disinformation, any of those things. I want to dig into that a little bit and maybe figure out a way to get both of you to respond to a kind of half-formed question about this, which is around user autonomy and what we think of as supply and demand. How do the applications facilitate people getting what they want versus potentially being exposed to manipulation, which are different ways of thinking about what could in effect be, in some cases, the same set of events? I may have a real appetite for certain forms of political information that arrive to me via encrypted messaging apps. On the other hand, someone might have quite a vested interest in making sure that I receive those messages. How do you separate out what is manipulation versus what is consumption?

Inga Trauthig:

Yeah, I really like the question, and I'm going to address it by tying in the topics and issues that are closest to my heart. So hopefully, Justin, that goes towards what you would like to have answered here. The first thing that comes to mind is the interviews we did over the last two years specifically for this report, though at our lab we've done studies on encrypted messaging apps for the last several years.

And one thread that runs through all of these studies, when we asked interviewees, in these cases not propagandists who want to produce propaganda and manipulate folks via encrypted messaging apps, but journalists who use encrypted messaging apps for their communication and to reach readers, in Lebanon for instance, or diaspora community members like different Latino communities in the US or Indian Americans who rely on encrypted messaging apps for political discussions with their friends, is that several of our interviewees, in one form or another, would say something along the lines of, "Facebook has been bad enough, but I only check Facebook once a day now, or maybe even once a week, or maybe I even deleted the app. But that is not an option for me with the messaging app. I cannot delete WhatsApp or Telegram from my phone. I need it because that's how my grandma messages me from Mexico. That's how my local community group organizes our Hindu holy festival celebrations, et cetera. That's not an option for me."

So that is the first thing: it is an impediment, because, for instance, Lebanese journalists who are regularly harassed via WhatsApp need to be blocking numbers for two hours every day or something. But the way the apps currently are, there are ways users can protect themselves, because the apps are still not algorithmic. One Indian student in Houston explained to me how he curates his own WhatsApp: he does a lot of muting and archiving of chats, unfollowing, and immediate blocking as well. So that is something where a lot of users actually have a bit more autonomy than they have with other apps, and I see that as a positive thing ultimately.

And then another point that you draw on, and I think this is something in the American context that Mariana and I have been talking about with regard to this report, and I think I've mentioned it to you in the past, Justin, and in previous publications, is that in the US in particular there is an added sensitivity, because WhatsApp, or Telegram for some communities, is vital to communication inside the community, and that includes politics. So people feel safer discussing politics on those apps than they do on other social media platforms. And that's something that's just close to my heart, because if this is a really important platform for talking about sensitive political issues, I do not want the propagandists infiltrating it and manipulating people.

So that's where we need to find the right compromise, or just good solutions, and I think that's what some of our recommendations speak to directly. We talk a lot about bottom-up approaches and cooperation between different sectors: the companies thinking about safe design but also supporting things like media literacy initiatives, and keeping encryption to keep the safe spaces. But at the same time, where not everything is end-to-end encrypted, there are actually additional safety measures the companies could adopt.

Justin Hendrix:

There's a sidebar here in particular about extremist use of messaging apps to mobilize for violence, which is a particular concern, I think, when it comes to questions around manipulation, of course. Instances of concern around this with WhatsApp emerged from India with the so-called WhatsApp lynchings a few years ago. Since then, of course, we've seen various phenomena across the world, including here in the United States, where encrypted messaging apps were implicated in conversations and the facilitation of extremist violence. Is there anything new here to say? And it seems like you focused in particular on Telegram, Mariana.

Mariana Olaizola Rosenblat:

We know, almost intuitively, that bad actors, including extremists trying to mobilize for violence, are drawn to spaces that provide safe havens, or at least the privacy that good actors, like pro-democracy activists, also benefit from and need. Encryption in this sense is a double-edged sword. It's absolutely necessary for activists, for example in my native country, Venezuela, trying to mobilize in the face of regime repression. But because of the privacy it affords, it also enables bad actors of all kinds to organize and mobilize there as well.

This is actually the dilemma that I think brought Inga and me to this subject. One of the motivations for this report is that encryption really presents a dilemma for those who care about and want to preserve privacy, but who also see the need to mitigate abuse. And extremism is not the main focus of our report, almost on purpose, because it's actually a harder subject, maybe a subject for a future report from us. But in our recommendations we really try to chart a way forward, at least when it comes to addressing disinformation, by targeting design aspects and features in ways that won't undermine encryption but will mitigate this type of electoral abuse.

Now, when it comes to extremism, first of all, relatively little is known, because when it comes to encrypted spaces, it's very hard to do research. One of our recommendations for researchers is that they need to work on developing ethical methodologies for studying those spaces, but it's very hard to know what's going on, because by design, encryption makes content accessible only to the senders and recipients of messages. This is why Telegram is often the platform we can talk most about: for one, the CEO's open posture of, for so long, not caring about bad actors organizing on his platform, but also, it's largely public. Telegram, as you've pointed out in your report, Justin, is the encrypted messaging app that's least encrypted and least a messaging app. And so most of it is public and easier to study.

By contrast, an app like Signal, which is basically a pure messaging app with very robust encryption that covers metadata as well, is very hard to see into. It's basically a black box. Even the platform itself told us that if there were some criminal activity happening, they just wouldn't know, and wouldn't be able to do anything about it. So researchers really don't know much about what's going on inside. But this raises the question: if it is the case, which is likely, that bad actors are drawn to opaque spaces that provide privacy, and if they are organizing on these platforms, what do we do?

The sidebar that we have mostly poses the challenge, and we don't offer solutions, because again, our recommendations are targeted towards addressing the problem of electoral manipulation, actually an easier task than addressing extremism and child sexual exploitation. But those are problems that also require examination. And I think the most I can say is that Inga and I may do a project on this in the future and try to come up with some sensible and actionable recommendations that, again, won't undermine encryption, because encryption is just valuable and necessary for the exercise of human rights in many parts of the world.

Inga Trauthig:

Justin, you asked whether there is something new when it comes to extremism and messaging apps, and especially Telegram, that we came across in our report. And I actually think for me there was definitely a learning curve, some insights I gathered from our work, which was more focused on voter manipulation and the manipulation of public opinion, that apply to the extremism side as well. To start with, one of my motivations for doing this report was that I've done a lot of work on terrorism and radicalization, and that discussion is very binary when it comes to encrypted messaging apps, because you focus on the really bad actors, whose ultimate aims are political violence and terrorist acts. You do not want them on any platforms; you don't want them to communicate propaganda or anything. So I actually found those issues easier to address, Mariana, than what we are discussing here, where we have such a strong clash between freedom of speech and folks who want to manipulate it.

But what I learned when writing this report and researching our topics, and then thinking back to when we were looking at extremists: the Islamic State migrated to Telegram after being pushed off Twitter, and this is several years ago now, around 2015, and they're still very active on Telegram. And what I realized when writing our report is that it is just so helpful to put encrypted messaging apps and their political roles into a broader context, because there are so many granular recommendations, from the policymakers' side and from the platforms' side, for clamping down, regulating, or safeguarding that would also make life much harder for extremists and terrorists. Because yes, they want safe platforms, but they also want a lot of audience reach, because in the end you can only create a bigger movement or start something like January 6th if you reach a lot of people.

So some of the things that we suggest, like clamping down on virality and other safeguards you could implement, would definitely also make it much harder for extremists to exploit the platforms, in addition to propagandists. And I think that is really important to keep in mind for people from that space as well, because it isn't black and white or binary, and if you put something into a broader context, you can actually find ways of reining in different bad actors.

Justin Hendrix:

I do want to turn to some of those recommendations. You've already given away this one: the recommendation to policymakers that encrypted messaging platforms should be included in the scope of online platform regulation, but that policymakers should ensure that compliance with that regulation does not entail breaking encryption. I'm sure many of the defenders of encryption, and of privacy more generally, will be happy to see that among your first recommendations. But you do suggest that policymakers should, on the flip side, really try to focus on transparency, making it possible for researchers to study encrypted messaging apps in more depth, as you've mentioned here, and supporting effective bottom-up media literacy initiatives, that sort of thing. But I want to focus in particular on the recommendations to the operators of messaging apps. I assume that some of them care. I'm not entirely sure about all of them, but let's assume that one or more of them might be listening to this podcast. What's the number one recommendation that you'd make to all messaging app operators that you think would help counter political manipulation with regard to elections on these services?

Mariana Olaizola Rosenblat:

Number one, probably the most far-reaching, in my opinion the most necessary, and the least likely to be adopted, I don't know if that makes it number one on many fronts, is bifurcating the platforms. So these platforms, we call them here and in the report encrypted messaging platforms, but that's a misnomer in certain ways, because they're encrypted to different degrees. Really the only one that's fully encrypted is Signal. Telegram can hardly be described as encrypted; it's mostly an everything app, or a social media-plus-messaging app. WhatsApp is approaching something similar. Viber as well. So what we believe is that these platforms, rather than trying to do everything, should really bifurcate their services. When they offer a private messaging service, that should be clear to users, and it should be protected with end-to-end encryption. It makes sense, no, that one-to-one or small group conversations be private? But when the platforms offer social media-type features, broadcasting, communities, super groups, in the case of Telegram, as Inga mentioned, groups of up to 200,000 participants, those should not be protected with end-to-end encryption.

I mean, in the real world, it's hard to think of spaces where 200,000 people congregate with absolute privacy, without anyone noticing. So we think platforms should be clear about the services they're offering and also be transparent with users about when they're really afforded privacy, when they can count on end-to-end encryption, and when they cannot. We see it as a problem that these initially messaging-focused services have expanded dramatically into services for all sorts of communication and, in the process, undermined users' privacy expectations. So I think that would be my number one recommendation. It would solve a lot of issues and allow us to then implement the other recommendations, which are in a way more targeted towards the types of abuse tactics that we've seen in the field. But again, I think this one would be the most far-reaching.

Inga Trauthig:

I completely want to underline everything that Mariana said, and just as a very brief summary, I think the number one recommendation would be not to put revenue-making ahead of user safety, which is a recommendation we very often make for social media platforms as well. Because as we've seen in our report and our data, a lot of the ways that messaging apps have started making more money over the last years, with premium features, with business APIs and business features, those are the things that are most handy for propagandists, and they really, really try to exploit them. So don't put revenue-making above trust and safety for the users.

And I think if you do that, then there are also a couple of other features that you would think twice about for a messaging app, for example, increasing the group size limit and having those super groups, because in my opinion that is a way for companies to get users to spend even more time on a specific app. But I don't need a 1,024-person group, and a lot of folks we spoke to said they also wouldn't need that. That feature bloat, status updates, stories, bigger groups, et cetera, I think, is a way the companies are trying to get users to spend even more time on the app and make it a super app or something like that.

Mariana Olaizola Rosenblat:

Yeah. And to follow that, two other features or aspects of the platforms that are in the money-making interest of the platforms but not in users' interest: one is the ability to create unlimited accounts on a single device. That's the case with Telegram. Signal, by contrast, only allows one account per device; it's a very strict limit. WhatsApp and Viber are somewhere in the middle, but there are loopholes that propagandists exploit, and companies may not have a monetary interest in curbing account creation and the pace of account creation, because they want to report high user numbers. But from the point of view of mitigating fake accounts, phone farms, and the market for professional trolls, it makes sense to limit accounts to one, or at maximum two, per device. Another example is the business features that Inga mentioned. Some platforms charge for business and premium features. On WhatsApp, by paying for the business API, the WhatsApp Business platform, businesses can reach unlimited contacts who haven't necessarily consented to that communication. Again, something that gives WhatsApp revenue but is not in the best interest of users when they start receiving unwanted communication.

Justin Hendrix:

I want to talk a little bit also about this recommendation around tip lines, and I assume in-app reporting generally. It's complicated in the context of encrypted messaging apps; there are privacy issues here too, with possibly having to break the promise of encryption in order to report certain phenomena on these applications in some cases. But what do you recommend here when it comes to users making platforms aware of problems that they're seeing in the wild?

Mariana Olaizola Rosenblat:

Because again, we're talking about somewhat encrypted spaces, not completely encrypted, but somewhat encrypted, we think that users are really in the best position to exercise agency over the types of communications they want to receive. But they can only do so if they're given the proper features and tools by the platforms. So one type of user-driven fact-checking tool is tip lines, and tip lines don't break encryption in any way. Tip lines are basically accounts created by civil society organizations and fact-checking institutions that say, "Hey, if you receive anything that you have doubts about on WhatsApp or Telegram or Viber, just send it to us and we'll tell you if this content has been fact-checked." And it's up to the user to send that content. They don't have to reveal who sent it or when, and the tip line just answers. So it's completely user-driven and, again, doesn't break encryption.

There's another possible functionality that one of our expert interviewees suggested, which is one-click reverse image search. Especially now, in the age of generative AI-produced images, it would be very useful if users could click on an image they receive on WhatsApp, for example, and in a series of very simple steps see where the image comes from and what its history of use is. Google offers something like this, for example, and wouldn't it be great if it could be integrated into these apps? Right now, tip lines are cumbersome to use, which is why users report barely using them. In our survey, only 7% of users across the nine countries said they had ever contacted a tip line. And actually, when we looked into which tip lines, because we asked them, "Okay, which tip line did you contact?", only a fraction of them had contacted a recognizable tip line. So some don't even know what a tip line is.

But when we asked them, "Would you find useful an option to check the veracity of a piece of content with a fact-checker of your choice?", 83% said they would find such an option useful. So very lopsided. These are just two features that platforms can enable through technical means, neither of which would break encryption. User reporting compromises privacy, but it doesn't break encryption. It compromises privacy to the extent that it's up to users to report something they receive that they consider illicit, abusive, false, or violative of platform policies to the platform. And some platforms receive such reports; others don't. Signal doesn't have a user reporting function. And again, most users in our survey said they would find that useful.

Justin Hendrix:

Is there another recommendation that you think is crucial?

Inga Trauthig:

Mariana has covered almost everything but the one big one, which is also in our recommendations, and where I see the biggest potential for future abuse, because we have already seen it in this election year, even if some of the propagandists I spoke to are more hesitant than others. It is to keep watch on, and think carefully about, the expansion of business features, what you allow individual users to do, and how everything that might be useful for reaching more consumers will also reach potential voters in the future. So there's a lot in the vetting processes, and in the follow-up after an account is verified or vetted, that companies should invest in. Just because you verified an account once or approved a business once doesn't mean it's going to be a legitimate business for the foreseeable future. There needs to be more follow-up and more resources going into that, so that the business features on WhatsApp, and also Viber for instance, and the premium features on Telegram aren't completely taken over by manipulative actors.

Justin Hendrix:

My last question is really about the future. A lot of folks are worried about generative AI and the possibility that we'll see extremely large-scale networks of AI chatbots that may be very difficult to discern. Maybe they have their own counter-forensics capabilities built in; they're able to set up their own accounts and mimic human behavior and do all the things that would be necessary. What in your report addresses this? Because I know to some extent you've looked at certain platforms, like Viber, that are literally letting you set up chatbots within the application. What do you think we have to expect when it comes to thinking about this particular problem?

Inga Trauthig:

In the short to medium term, my biggest concern when it comes to AI and the crossover with messaging apps is simply the speed and ease of additional content creation, and how that hateful gen AI content, for instance in India, gen AI images and videos bullying Muslims that could lead to additional WhatsApp lynchings, is then spread on WhatsApp. And this is exactly where the encryption, and the barely existing or nonexistent content and platform moderation that we have on these messaging apps, lets this very quickly created generative AI content spread through the encrypted messaging app ecosystem.

More in the mid- to long-term future, something Mariana and I saw some evidence for, but are wondering how it will play out, is how AI chatbots are being incorporated into all of these apps. And I think there are two concerns I have here. The first is how much that actually circumvents some of the existing end-to-end encryption, if messages and chats are now partially being shared because AI chatbots, which actually come from the company, become part of the conversation. And the second is, obviously: what type of content can these chatbots then create, be prompted to create, or be programmed to create on some of the apps?

Justin Hendrix:

An enormous amount of work has gone into this report, and I appreciate all the work you do, both at NYU, of course, which is home for me as well, and at the University of Texas. I look forward to hopefully seeing another collaboration between these two institutions, and hopefully we'll have the opportunity to talk about these things in the future when you have new results to share. Inga, Mariana, thank you so much for joining me today.

Inga Trauthig:

Thank you so much for having us.

Mariana Olaizola Rosenblat:

Thank you, Justin.
