Platforms and Election Management Bodies in 2024-25: A Tale of Dramatically Mixed Results
Amber Sinha / Apr 9, 2025

Amber Sinha is a contributing editor at Tech Policy Press.

Pro-democracy march in Berlin, Germany, before the election of the European Parliament (June 8, 2024, Leonhard Lenz, CC0, via Wikimedia Commons).
Last month, large platforms such as TikTok, Meta, Google, and X were invited to take part in an election stress test organized by the Romanian regulator Ancom ahead of the Romanian national election set for May. These elections are a reprise of last year’s election, the result of which was canceled by a court amid evidence of campaign finance violations, social media manipulation, and alleged Russian interference. Ancom told Euractiv that the purpose of the stress test was “to test the capacity of those involved in the electoral process and to address potential online challenges that may arise during the election period.”
The stress test in Romania follows a similar evaluation carried out in late January in the run-up to the German elections. Germany’s Federal Network Agency announced it was “well prepared, with tasks assigned to the national authorities and communication paths to all relevant players in place” against potential interference in the elections. That assessment was based on a stress test carried out by the agency involving Big Tech giants such as Google, Microsoft, Meta, X, LinkedIn, Snapchat, and TikTok, the first such pre-election exercise under the Digital Services Act for a national election (the European Commission conducted a similar test before the 2024 EU elections).
These exercises are exemplars of multi-stakeholder collaboration for ensuring election integrity, made possible by involving domestic election management bodies (EMBs), digital technology regulators, civil society actors, and social media platforms. EMBs can take a variety of different forms, but in many nations they are the entities responsible for stewarding elections and thus a key democratic institution.
Seen in isolation, these successful stress tests, coming at the end of a long election megacycle spanning elections and referenda in over 80 countries since the beginning of 2024, suggest an optimistic outlook for how large platforms may have worked with EMBs to analyze, assess, and mitigate systemic risks emerging from the use of their platforms. However, a more thorough look at the depth and breadth of platforms’ engagement with election management bodies and other local regulators in this megacycle suggests a different story.
EMBs and Big Tech
Starting from the last major global election cycle in 2019-20, ad hoc, voluntary arrangements between large platforms and local EMBs emerged as a mode of co-regulation for hate speech, disinformation, and other forms of harmful online content aimed at compromising election integrity. Along with Digital Action and the Global Coalition for Tech Justice, I conducted a review of elections worldwide in 2024, which suggests that agreements between EMBs and platforms, establishing a direct line of communication between the two and imposing some positive obligations on platforms, emerged as the primary, and in many cases only, form of election-related online content regulation.
These agreements largely lacked legal force, leaving substantial portions of platform operations outside of regulatory control, including online political ads. While platforms did introduce overarching election policies, they failed to develop adequate localized procedures, notably in addressing linguistic diversity and regional vulnerabilities.
In India, while the Election Commission of India's (ECI) Voluntary Code of Ethics requires political parties and candidates to disclose social media advertising spending, it fails to adequately address ads purchased by supporters. In some small measure, platforms have attempted to fill this gap by enforcing their terms of use, requiring disclaimers on paid ads, removing non-compliant ads, and maintaining public ad archives. However, these platform-driven measures are inconsistently applied and still insufficient for identifying all political content and actors. Critically, these self-imposed rules completely overlook surrogate advertising, where funding comes from sources other than the political party or candidate.
In some instances, such as in South Africa, the outcomes were mixed. While the Electoral Commission of South Africa (IEC) was able to secure commitments from Meta to combat misinformation, freedom of information requests by the country’s data protection regulator to obtain records from Google, Meta, and X were unsuccessful. These records pertained to the classification of elections, risk assessments of South Africa's electoral integrity, and the application of global policies within the country. The regulator's requests were denied, leaving limited regulatory options. Likewise, when the LRC, a law firm, submitted information requests to Meta, Google, and TikTok regarding their election action plans, seeking details on content moderation and emergency tools relevant to the South African elections, all three platforms refused to provide information, claiming that South African access to information laws do not apply to them. In Indonesia, KPU, the country's EMB, was similarly unable to secure formal agreements with Big Tech companies.
In Africa, the Association of African Electoral Authorities formulated the Principles and Guidelines for the Use of Digital and Social Media in Elections, but these have yet to meaningfully inform how EMBs manage elections. During Nigeria's elections, the electoral body, INEC, sought help from social media companies to fight false election information. While Meta offered some support, including local language moderators, those efforts could not keep pace with the massive spread of misinformation. False posts about polling changes on Facebook and fake AI-generated videos showing fabricated results on TikTok damaged public trust. Encrypted messaging apps like WhatsApp and Telegram offered no assistance, leaving INEC vulnerable to misinformation spread on those platforms.
Brazil’s arrangements with platforms were among the most robust (and controversial) examples of active collaboration. The Superior Electoral Court (TSE) led one of the most extensive initiatives by signing memorandums of understanding with platforms including Meta, TikTok, and Google. These agreements established specific procedures for identifying and removing harmful content, including disinformation, hate speech, and electoral manipulation. A key feature was the TSE’s 24-hour takedown policy, which mandated the swift removal of flagged content. Meta took additional steps by bolstering its local moderation teams and partnering with fact-checking organizations, resulting in the removal of large volumes of disinformation related to political figures and electoral processes. Google also reinforced YouTube’s regulations, enhancing its capacity to address election-related misinformation. YouTube adjusted its search algorithms to prioritize credible sources, helping users access accurate information during the election period.
The obligations for platforms toward election integrity under the EU's Digital Services Act are an exception to the ad hoc and largely non-binding arrangements discussed above. The stress tests during the EU and German elections and the investigations into the role played by TikTok in the Romanian elections represent early examples of how these binding obligations and punitive measures against platforms, particularly Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), may play out.
Uneven protections dictated by market size and political intent
The situation highlights the significant geopolitical influence held by a few Big Tech companies, whose global operations must navigate the regulatory and enforcement frameworks of many individual nations. The ability of national institutions to enforce their laws against these platforms appears increasingly tied to the size of their domestic markets. This is most evident in how easily some election management bodies were able to establish co-regulatory agreements with platforms during elections, while others struggled to secure any engagement at all.
Beyond the US and EU, other major social media markets like India and Brazil found it relatively straightforward to engage platforms and secure their involvement in frameworks designed to regulate online content related to elections, including content removal, moderation, transparency, and accountability. Yet even where governments enjoyed a strong negotiating position, as in India, a lack of political intent was a clear hindrance in translating that leverage into an effective regulatory response.