Assessing Platform Preparedness for the 2024 US Election

Justin Hendrix / Sep 25, 2024

Audio of this conversation is available via your favorite podcast service.

The Institute for Strategic Dialogue (ISD) recently assessed social media platforms’ policies, public commitments, and product interventions related to election integrity across six major issue areas: platform integrity, violent extremism and hate speech, internal and external resourcing, transparency, political advertising, and state-affiliated media. ISD's assessment included Snap, Facebook, Instagram, TikTok, YouTube, and X.

I spoke to two of the report's authors: ISD's Director of Technology & Society, Isabelle Frances-Wright, and its Senior US Digital Policy Manager, Ellen Jacobs.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

I'm grateful to the two of you for joining me today to talk about elections and platform preparedness. I wanted to just step back for a second, and for any listeners who are not familiar with ISD, tell us a little bit about the organization and how your review of platform policies and preparedness around the US election fits into your overall agenda.

Isabelle Frances-Wright:

ISD is a global think tank with offices in London, Germany, the US, and Jordan. Our focus has historically been on hate and extremism and the proliferation of those harms online, but in more recent years it has expanded to threats to democracy and election integrity, and we also focus very heavily on foreign influence operations. Something that we have been very focused on in the last year is understanding how we can really move the needle on legislation, particularly in the US, given that a lot of other countries around the world have made significant strides when it comes to regulating technology platforms. It still feels like, in the US, we're a little bit behind on that front, so something we endeavor to do is robustly evidence some of the harms that we're seeing across platforms and how platforms often fail to respond appropriately.

Justin Hendrix:

So in this new report, you say you looked across six major issue areas: platform integrity, violent extremism and hate speech, internal and external resourcing, transparency, political advertising, and state-affiliated media. We'll get into a little bit of the detail on each of these, but I wanted to just ask a big-picture question to start. I think the narrative that's perhaps common these days around the role of platforms in elections is that their posture is not as aggressive as it had been, for instance, in 2020. So, we had the 2016 cycle, which of course resulted in huge growth in the trust and safety industry, lots of Congressional hearings, lots of scrutiny, and, as you say, a huge regulatory movement which occurred largely outside of the United States, or at least was effective largely outside of the United States. When you think about the posture of the platforms that you covered in this report, do you think that narrative holds? Do you regard the platforms as taking a slightly less aggressive posture this go-round?

Isabelle Frances-Wright:

I think in 2020, a lot of the platforms were caught off guard by some of the narratives that emerged and gained significant traction, specifically false information around mail-in voting and misinformation around election results and certification. Almost in response to that, you see a lot more hedging around certain policies, where the platforms are giving themselves more room to make decisions as they see fit based on what comes up throughout the election, which I think is interesting. We're also seeing some backsliding on certain policies that were in place in 2020. And then, broadly, you are often seeing layoffs within trust and safety, and specifically within election integrity teams, if not the disbanding of election integrity teams entirely. And I think part of that is because this issue has become even more politicized than we have ever seen it before.

Ellen Jacobs:

Many of the platforms we spoke to said that they have election working groups, which tells us that they aren't fully staffing trust and safety or elections experts on dedicated elections teams. So, those aren't perennial efforts that they're trying to shore up to ensure that they're ready when the election is coming. And I think that just adds to us not having a comprehensive picture, and to them being able to push off any culpability by saying, "We have some infrastructure," while we don't really have good insight into how many people are working on that, how often they are meeting, or what purview they have in enforcing any of these policies or doing any work to ensure that their platforms are set up responsibly ahead of the elections.

Justin Hendrix:

Another key thing that you get into in this report is the impact of the platforms and their choices on election legitimacy. Of course, in the last cycle we saw a huge push to de-legitimize the outcome of the election, which resulted in violence at the Capitol on January 6th. So, this is a key problem, a key question that everyone is concerned about. What impact do you think this divergent approach on election legitimacy has? What should we expect?

Isabelle Frances-Wright:

I think the first impact we will see is a very fractured information ecosystem, where a false claim may be permissible on one platform but not another. And I think that will likely confuse voters, and maybe push them more towards one platform than another if they feel like one platform is giving them what they consider to be the uncensored truth. Often, social media users are aware of the number of misinformation policies that platforms have, so not taking something down or not actioning a piece of false information is itself a decision. That could lead users to believe, "Okay, maybe this information is correct, given that a platform has a vast number of misinformation policies and a huge fact-checking infrastructure, yet this information is proliferating widely."

So, I think that fractured information ecosystem is really important to think about and take into consideration, and I think it will likely have an impact not just on what voters believe, but also on what platforms they go to for their information. In a number of different places you have divergences across the platforms: in what misinformation can be propagated about the 2020 election, in what forward-looking false information can spread regarding this upcoming election, and also in how they treat premature claims of victory, which had, I think, an impact that the platforms didn't expect in 2020. So, I think there's a lot of vagueness across all of the platforms we looked at that will likely lead to some confusion and a very chaotic approach to enforcement as we come closer to election day and in the days following.

Justin Hendrix:

Just to make sure the listeners are aware, you looked at TikTok, Snap, the Meta platforms, YouTube, and X.

Isabelle Frances-Wright:

Yes, and for Meta specifically, Facebook and Instagram, so the most used platforms by US voters right now.

Justin Hendrix:

So, I want to ask a question about another topic that you got into, which is concerns around AI-generated content, and I see this as maybe connected to the election denialism question. There's been a lot of focus on how AI-generated content might be used to try to manipulate the outcome of the vote, and perhaps less focus on how it might be deployed in a context where a large number of people dispute the outcome, in states like Georgia or Pennsylvania, where we know the result may not be known for days or weeks. It'll be interesting to see the extent to which AI-generated content is deployed there. So what did you find when you looked at the measures the platforms have in place to deal with this? This is probably the newest area where the platforms are being asked to take action, often on difficult technical grounds.

Isabelle Frances-Wright:

Something that has been interesting is watching the evolution of platforms' policies on AI-generated content over the last few years. Specifically in the last year, what we've seen is a move towards a reliance on labeling, and specifically self-disclosed user labeling. When you look at the majority of the platforms' public communications around AI, they really focus on this piece and lean into it heavily. And I think what it obscures, or distracts from, is that when it comes to scaled detection of AI-generated content, we really haven't seen any indication that the platforms are anywhere close to being able to embed that within their moderation processes. They're really relying either on users to say themselves that they have posted a piece of AI-generated content, or on waiting for external sources like fact-checkers or the media to debunk pieces of content and then searching for that content themselves on the platforms.

And in a piece of research that we put out alongside this election assessment, we looked at a number of AI-generated pieces of content that had been widely debunked by the media and had gotten a lot of traction. What we found was that even with those pieces of content, which, again, had received wide media coverage, the platforms were still failing to identify them and take them down. And it is generally easier to debunk a piece of content featuring a prominent political figure, say, Biden wearing a MAGA hat, which may look suspicious, than one featuring a wholly synthetically created voter whom someone wouldn't immediately identify and say, "This looks suspicious." So, if the platforms aren't catching the really easy things, it really calls into question how well they're doing on the pieces of content that have gone undetected and that are more nefarious.

The Wall Street Journal and Graphika did a great investigation looking at a huge number of videos that had been proliferating on TikTok for a number of months and had gone unnoticed. What I'll also say on this is that AI-generated content becomes most impactful during a time-bound event, when information cannot be quickly assessed and debunked. And, as you mentioned, that is election day itself and the days following the election. I think we really haven't seen the full scale of AI unleashed yet, and that is when we will see it. Given that the platforms are struggling right now, it's definitely a cause for concern as we get closer to election day.

Ellen Jacobs:

Echoing everything that Isabelle said, that is just them identifying the content. I think they're still very much working through what removal or actioning would look like on their end too. In our analysis, the policies that the platforms have put out around AI-generated content are really confusing, and that's coming from experts who often work in this technical language and know what to look for in patterns from past policies that the platforms have implemented. What you see here is, whether unintentional or intentional, the layering of multiple sets of policies: many use election-specific policies, many use AI-generated or manipulated media policies, and these often overlap with misinformation or false information policies that the platforms may have.

And then on top of that, they're also using standards that aren't necessarily clear, such as "egregious harm" or "significant harm." We respect the fact that these are very difficult issues to parse, and I think we're all grappling with what AI-generated content at mass scale means for all of us in these spaces, but even within the policies themselves, where they should be actioning certain things, they haven't quite figured out what those standards look like, and I think we're really concerned about that in the context of the upcoming elections as well.

Justin Hendrix:

There's one particular heading in the report that really stood out to me: "preventing violence and hate does not seem to be a key focus area of the platforms in the context of the election." I myself, along with various co-authors, have written about the need for the platforms to do more to prevent their use to organize hateful or violent events. Talk to me about what you found here. What's changed since 2020, and why do you regard the platforms as still not having the right posture when it comes to violence?

Isabelle Frances-Wright:

Taking a step back beyond the US, 2024 has obviously been a year of critical elections around the world, and looking at this on a global scale, we've seen an incredibly alarming rise in political violence. In Ireland earlier this year, looking at the local elections and the EU elections, we found that 50% of violent attacks on or harassment of candidates had an online element. Also this year in Slovakia, there was an assassination attempt on the prime minister, the first attempt on the life of a European head of government since 2003. So, we are seeing this rise in political violence all over the world throughout elections, and of course we have already seen two assassination attempts on former President Trump.

So, putting all that into perspective, when you look at the platforms' external communications around elections, generally the blog posts they put out explaining what measures they are taking to safeguard the platform and the voters who use it, political violence is almost never mentioned. The focus is generally on election misinformation, fact-checking, and media literacy partnerships to support their fact-checking initiatives, and it's almost like the incitement to violence and extremism piece is, in their minds, placed in a different bucket.

Whereas what we are seeing now is that they are unfortunately very intertwined, and there are a lot of ways in which I think platforms could proactively seek to turn the temperature down and depolarize using product features, steps they don't necessarily take. Generally it's pushed to the side when you look at their public communications on election efforts, and I think that needs to change; it really needs to be brought front and center.

Justin Hendrix:

I do just want to distinguish: you point out in your comparative review that all the platforms have policies around violence and incitement to violence. So, maybe just to press you a little more on what's missing, you point out, for instance, a lack of clarity around which external experts they're partnering with, how they're doing threat analyses, the extent to which those types of partnerships are publicly disclosed, and whether we know what types of signals they're drawing on. What else gives you reason to say, essentially, that you don't feel the platforms have the right posture on these issues?

Isabelle Frances-Wright:

Beyond the issues that they highlight in their external communications, what we see time and time again is that the problems with the platforms are often not with the policies themselves. If you read the policies, particularly on issues around violence and extremism, the policies seem reasonable. The issue generally tends to be detection and enforcement. And, again, this is a shift from 2020 and prior, when the platforms were generally a lot more transparent about the partnerships they had with external experts, what those looked like, and how they were actually being integrated to detect signals.

Now I would say we really don't know who is being utilized to help the platforms combat these issues and how they're being utilized. Part of that may be because the platforms have fewer of these partnerships, or the partnerships are less integrated, and that's something they maybe don't want to draw attention to. But given the number of platform failures that we've seen in this issue area, relying on external experts and using external expertise is absolutely critical in order to be able to combat these types of threats.

Justin Hendrix:

You've already mentioned transparency, and perhaps the lack thereof. That's one of the places in your comparative review where I see a lot of "nos" to questions like, will the platform be releasing findings on identified disinformation networks on an ongoing basis throughout the election cycle? Clearly this is a big issue. A lot of researchers have been raising the alarm for months and months that we don't have enough data access, we don't have enough visibility into these platforms. Is there anything that you'd say there?

Ellen Jacobs:

Just across the board, there is not enough transparency and there is not enough data access for researchers like those at ISD or other organizations who want to be able to look into the black box of what's happening at the platforms and identify what decisions are being made that lead to some of the harms we're seeing emanate from the platforms. That's especially true compared to our peers in the EU, who have the DSA, which as part of that regulation mandates transparency that allows them to see key metrics and decisions and to access the data themselves, so they can do that research on their own and better inform policy decisions.

We've been tracking the unfortunate closure of CrowdTangle over the past few months, and that's just the latest in a long line of tools that the platforms are either restricting access to or putting more barriers around, so that most researchers can't normally access them, even if they're now eligible for tools such as the Meta Content Library. So, we believe that the US definitely needs some sort of regulation or legislation that would mandate this transparency, and we're really supportive of Senator Coons' Platform Accountability and Transparency Act, which would set up a similar scheme for certified researchers to gain access to data from the platforms. We think that would be a really great step in beginning to close this huge gap we're facing in terms of data and transparency from the platforms.

Isabelle Frances-Wright:

We in civil society and academia talk a lot about researcher access and researcher transparency, but something that we note in this report is the lack of transparency to users, and specifically voters, in the context of an election. Many of the platforms' disclosures when it comes to disinformation operations are very detailed, but when they come out three months after an election, it really doesn't help voters understand the information ecosystem they're dealing with in the run-up to the election. And when it comes to how platforms are interpreting many policies that are often subjective, we really don't have any insight there. Prior to the election, during the campaign season, they'll often produce numbers on how much content they've taken down under different misinformation policies, but we don't really know what content they're taking down, or what narratives they have decided fall under an election misinformation policy versus an incitement to violence policy. And I think that's something voters really deserve to know.

Justin Hendrix:

I myself have recommended in different documents in the past that the platforms need to be operating almost in real time, maybe running some kind of press-conference war room where they announce especially important decisions they've taken, take-downs or content moderation decisions. That seems very important to preserving trust. Think about, for instance, the decision to limit the proliferation of the New York Post's Hunter Biden laptop story, which Meta's Nick Clegg had to answer for again in the Senate Intelligence hearing last week. These decisions can reverberate and take on a life of their own; in that case it has become a kind of conspiracy theory among some on the right, even evidence that the election was somehow stolen or rigged and that the platforms played a role in that. So, I agree, I think there needs to be a great deal more transparency in that regard.

Isabelle Frances-Wright:

Yeah, and that's the problem when you're making these very subjective decisions behind closed doors: it often leads to people feeling as if they are being censored one way or another, and you are also then unaware of the precedents being set within the platforms around certain policies. It also makes things more confusing for users when you have a critical, high-visibility moment where a decision needs to be made, and platforms, instead of relying on their own existing policies, are essentially writing policy on the fly. Senior executives are the ones making the final policy call, when in any other instance it would be trust and safety experts with deep expertise in a specific area like elections or extremism. It just leads to a lot of confusion and to people questioning whether they are being censored by a platform for a certain reason or political ideology.

Justin Hendrix:

I assume that transparency also includes another category around political advertising. Even though we've got some ad libraries and some data available from some of them, you point to many deficiencies there as well.

Ellen Jacobs:

Yeah, absolutely. It's all part of the picture too. We were talking about election denialism earlier and AI-generated content, and ads are another area where there are overlapping and often confusing policies about what is restricted or prohibited. And in many cases, AI-generated content may not be restricted to the same degree in ads as it is in regular user-generated content, which is a considerable problem if you're worried about the harms that misleading or deceptive information can cause for users.

Justin Hendrix:

I just want to point out to the listener that one of the documents in this package of reports is a comparative analysis, a set of tables with questions about how the platforms perform on different measures, things like information integrity, political ads, hate speech, or the extent to which they have the appropriate resources in place. And this is not a scientific judgment, but scanning this document, it looks to me like one platform stands out from the others as taking a more proactive approach, and that platform is TikTok. Would you agree with that assessment, that TikTok appears to be taking the most aggressive approach going into the 2024 election?

Isabelle Frances-Wright:

Yes, and I think that Accountable Tech also did a piece of analysis earlier this year which likewise had TikTok taking the most aggressive approach. Something else that was interesting: when Yoel Roth was asked which platform is leading when it comes to election integrity, he responded with TikTok. I think they have certainly placed a lot of resources, within trust and safety and more broadly within the company, into elections, and part of this is likely due to the immense amount of scrutiny that they face around the world, but specifically in the US, and feeling the need to maybe outpace some of their competitors in these areas. But with all that being said, there is still certainly a lot of room for improvement.

Justin Hendrix:

I want to ask one last question around ephemeral content: livestreaming and other content that's posted and then disappears, things of that nature. What did you learn from your inquiry into ephemeral content, its role in election communications, and whether the platforms are prepared to police it?

Isabelle Frances-Wright:

So, again, this is an area that almost seems like an afterthought when you look at many of the platforms' policies and public communications around safety. Often there are no specific call-outs for livestreams or ephemeral content like Stories, and you're simply relying on the legalese within terms of service agreements where platforms state, "Our community guidelines apply to all product features." We're seeing, certainly when it comes to livestreams, an increase in their use as a key tool for voter communication. Very specifically on TikTok, you've seen a rise in the popularity of political debates that often stream for hours and hours. On X, there's been a rise in the popularity of X Spaces, their live audio streams.

And what research has generally shown time and time again is that the platforms do not police these product vectors particularly well. One of the benefits to platforms is that this is a very difficult area to research, given the time intensity of conducting this type of research and the fact that Stories, for example, are ephemeral, so you can't capture them more than 24 hours later. In a way, I think a lot of their enforcement failures have flown under the radar, and there's probably a lot more egregious content in these spaces than is being reported. That's something that, as we see the rise of these product vectors, we need to be paying attention to.

Justin Hendrix:

You mentioned that you operate in many geographies and you've been looking at these questions across multiple jurisdictions. The report does reflect on what's changed in Europe, in particular under the Digital Services Act and its election integrity guidelines, which were released just a couple of months prior to the elections there. Do you think there is a meaningful difference right now in how the platforms respond in the US versus in Europe, or is it still too early to tell?

Isabelle Frances-Wright:

I think we have already seen signs of a difference in response, one example being ad libraries. Now that ad libraries are mandated within the EU, you have certain platforms that have ad libraries there but not in the US. And having those ad libraries in the EU has shed light on a number of platform failures: ad disclosure failures, foreign state influence operations utilizing ad products. When you see how regulated transparency there is shining a light on issues, which presumably the platforms then have to go and mitigate, you are already starting to see this divergence in approach.

Ellen Jacobs:

I'll add that another really striking example is the stress tests that regulators in the EU are empowered to run, mandating the platforms' participation to ensure readiness ahead of elections. Compare that to the Senate Intelligence hearing last week, where several of the platform CEOs, those that chose to show up, because they didn't have to, came, and what you saw was lawmakers asking for information or evidence from the platforms, saying, "We would like to get this from you." But we all know that the platforms don't really have to send them that information and are not motivated or incentivized to send it, especially if they know they're under-regulated or not putting enough resources into those areas. Whereas there's so much more empowerment and ability to enforce some of those policies in other jurisdictions such as the EU.

Justin Hendrix:

These elections, I've come to regard them as live experiments in the relationship between tech platforms and democratic processes. I suppose we've got another live experiment underway now. I appreciate you going to such trouble to lay out the parameters of that experiment and try to pre-register the extent to which the platforms are ready and perhaps we can come back together for a debrief when this thing's all said and done and see how they did. Ellen, Isabelle, thank you so much for joining me.

Ellen Jacobs:

Thanks so much, Justin.

Isabelle Frances-Wright:

Thank you, Justin.
