Senate Intel Hearing: Key Questions on Election Protection for Meta, Alphabet, Microsoft, and Adobe

Liana Keesing, Jamie Neikrie, Justin Hendrix, Ben Lennett / Sep 16, 2024

WASHINGTON - JULY 12, 2023: Chairman Mark Warner (D-VA), left, and Vice Chair Marco Rubio (R-FL), right, talk before the start of a confirmation hearing. (Bill Clark/CQ-Roll Call, Inc via Getty Images)

The 2016 and 2020 US elections saw unprecedented levels of foreign influence operations targeting American voters, candidates, and election discourse. On September 18th, the Senate Intelligence Committee will hold a hearing with executives from Meta, Alphabet, Microsoft, and Adobe to discuss their preparedness to address these threats in the upcoming election. The hearing comes on the heels of recent hacking operations by Iranian actors targeting the Trump, Harris, and Biden campaigns, as well as Russian influence operations uncovered by the US Justice Department. In a national survey conducted by Issue One (N=1,500; September 3-9, 2024), 54% of Americans said they are extremely or very concerned about foreign groups from places like Iran, China, and Russia using social media to influence the election (see further results in the appendix).

In advance of the hearing, Issue One and Tech Policy Press organized a virtual forum with a group of experts on national security, tech policy, and election administration — most of them members of Issue One’s Council for Responsible Social Media — to discuss potential questions for lawmakers to pose to the executives. The forum included the following experts:

  • Jamie Neikrie (moderator), Legislative Manager for Technology Reform, Issue One
  • Justin Hendrix (moderator), CEO and Editor, Tech Policy Press
  • Alix Fraser, Director of the Council for Responsible Social Media, Issue One
  • Ben Lennett, Managing Editor, Tech Policy Press
  • Carah Ong Whaley, Director of Election Protection, Issue One
  • Francisco “Cisco” Aguilar, Secretary of State, Nevada
  • Dean Jackson, Principal, Public Circle Research and Consulting
  • Farah Pandith, Adjunct Senior Fellow, Council on Foreign Relations
  • Isabelle Wright, Director of Technology and Society, Institute for Strategic Dialogue
  • Jiore Craig, Director of Digital Integrity, Institute for Strategic Dialogue
  • Liana Keesing, Campaigns Manager for Technology Reform, Issue One
  • Megan Shahi, Director of Technology Policy, American Progress
  • Michael Rogers, Former Director, National Security Agency and US Navy Admiral
  • Nicole Tisdale, Former Director, National Security Council and US Congress
  • Nora Benavidez, Senior Counsel and Director of Digital Justice & Civil Rights, Free Press Action
  • Yaël Eisenstat, Senior Policy Fellow, Cybersecurity for Democracy at New York University

Establishing the Role of These Companies

Big Tech companies, including their executives testifying before the Intelligence Committee at this hearing, regularly tout their commitments to election protection and civic integrity. Ahead of the 2020 election, Meta emphasized the company’s “responsibility to stop abuse and election interference on our platform,” while ahead of this year’s election, the company said that “no tech company does more or invests more to protect elections online than Meta.” Google promised that it would dedicate significant resources to developing new tools and technology to help “identify, track and stop” malign operations by government-backed or state-sponsored groups.

Congress has held over 40 hearings with tech companies to better ascertain the role these companies play in our everyday lives and the ways their actions shape our interactions on- and offline. Despite these inquiries – and although the vast majority of Americans want social media companies to be more accountable to their users – tech and social media companies still share relatively little information with outside stakeholders about how they run their businesses and uphold their commitments to consumers.

Everyone:

  • What do you see as [Meta/Google/Microsoft/Adobe]’s role in securing American elections? What do you consider to be your responsibilities?
  • How will you balance the need for rapid public notification of threats with the potential risk of inadvertently amplifying foreign disinformation campaigns?

Microsoft:

  • As part of the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, Microsoft has made organizational commitments to safeguard elections by countering harmful AI-generated content, but reports indicate that only half of these commitments have been initiated. Can you explain why certain commitments have been delayed or neglected, and what steps your organization is taking to ensure that the remaining commitments will be implemented before this year’s election?
  • In 2023, Microsoft announced the creation of a Campaign Success Team designed to help political campaigns navigate cybersecurity challenges and AI-related risks. How many campaigns have engaged with this team, and what specific results or feedback have you received from their involvement?
  • Microsoft also committed to launching an Election Communications Hub to support election authorities in the lead-up to elections. Can you describe the current status of this hub’s implementation and how many authorities have utilized it thus far? How has the Election Communications Hub helped election authorities address major security challenges?

Adobe:

  • How are you monitoring the use of your products in the context of elections, particularly for malicious activities like deepfakes or manipulated media aimed at disinformation? Are there mechanisms in place to track and mitigate the misuse of Adobe’s tools for election-related interference?

Coordination with Intelligence Community and Federal Agencies

In the Supreme Court case Murthy v. Missouri, the Court affirmed the importance, as well as the constitutionality, of information-sharing relationships between tech companies, the American intelligence community, federal officials, and election administrators. In the Court’s opinion, Justice Amy Coney Barrett went so far as to indicate that platforms should be free to regularly communicate with outside experts and officials on content-moderation issues.

Despite this ruling, platform communications with external actors, particularly government actors, continue to face heightened scrutiny. Platforms have increasingly cordoned off their communication with outside stakeholders, a retreat that flies in the face of years of prior coordination and encouraged dialogue during fast-moving events such as elections, outbreaks of violence, and other crises.

Everyone:

  • Are you regularly communicating with federal agencies regarding threats to election integrity? How has the frequency of those conversations changed since 2020? How has your organizational and staffing approach to maintaining these relationships changed? How important are those relationships to protecting our national security?
  • How are you working with other key stakeholders, including government agencies, civil society groups, and international partners, to ensure coordination on election security? Can you describe your "whole of society" approach to securing the election?
  • What do you see as your role in the ODNI notification process, particularly when it comes to alerting the public, and especially vulnerable communities, about foreign interference operations throughout the election cycle?
  • What support do you need from federal agencies and the intelligence community to adequately prepare your teams and platforms for the election? Are there tools that you need that you do not have?

Meta:

  • Meta’s transparency reports indicated that communication with federal agencies about foreign interference was curtailed due to legal challenges. Since the Supreme Court ruling in Murthy v. Missouri, has Meta fully resumed contact with these agencies? What kind of information is being shared now and how regularly?

Specific Threat Scenarios and Post-Election Plans

As we approach the 2024 US elections, it is important that tech platforms are prepared to address specific threats not only in the lead-up to the election, but also in the period between Election Day and Inauguration Day, when election officials are working hard to deliver accurate and timely results and when our country is most vulnerable to political violence (as we saw on January 6, 2021). In responding to specific threat scenarios, tech companies can also draw lessons from the elections that have already taken place around the world in this historic election year.

Everyone:

  • If state actors disseminate hacked materials on your services in order to damage a presidential campaign, how will you respond? Will you label such content? Will you remove it?
  • Days before the 2023 Slovak parliamentary election, a deepfaked audio clip — allegedly featuring a conversation between a journalist and a leading anti-Kremlin politician — spread rapidly on social media and may have benefited the pro-Russian populist party. How will you prevent our adversaries from proliferating viral deepfakes immediately before November 5th?
  • In the event of potential civil unrest following the 2024 election, similar to what occurred after the 2020 election, what specific plans or protocols does your company have in place to manage disinformation, coordinated threats, or other online activities that could fuel violence or instability?
  • How do you plan to mitigate violence, particularly in the post-election period, in coordination with federal officials? What about with state and local officials?
  • How are you prepared to combat and notify users about potential foreign interference efforts in the post-election period, particularly if there are disputes or delays in election results?

Google:

  • In a reversal of a policy put in place after the 2020 election, Google said that it would “stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 election.” What factors or instances did you consider when making this decision? What change in election denial content have you seen on YouTube as a result?
  • YouTube has a policy against content that spreads false claims regarding the eligibility of political candidates. How do you verify content related to these claims, and what systems are in place to ensure timely removal of misleading videos before they can influence voters?
  • In cases where old footage is misrepresented as current, what mechanisms does YouTube use to identify and remove these misleading videos? Has YouTube seen any trends in the use of misattributed content to spread election misinformation?

Meta:

  • In 2020, Meta implemented break-glass measures to address foreign interference in elections. Are these same measures being deployed for the 2024 election across Meta’s platforms? If not, how do current preparations differ?
  • What is Meta’s plan to monitor Facebook Groups for violence and the organization of domestic terrorism? Given the role of Facebook Groups in helping foment some of the violence that emerged on January 6th, how will you ensure protections are in place to mitigate potential threats within these groups during the upcoming election certification period?
  • In the 2020 election, Meta faced criticism for its handling of a high-profile content moderation decision involving the New York Post’s Hunter Biden laptop story, as well as concerns about material potentially related to foreign government hacking attempts. What specific measures has Meta implemented to prevent a recurrence of such issues in future elections? Additionally, how will Meta ensure clear and transparent communication regarding high-profile content moderation decisions moving forward?
  • In a reversal of a policy put in place after the 2020 election, Meta announced that it would allow political ads on its platforms that falsely question the outcome of the 2020 US presidential election. What factors or instances did you consider when making this decision? What change in election denial advertisements have you seen on your platforms as a result?

Investments since 2020

In the wake of the 2016 and 2020 elections, tech platforms launched initiatives of their own to combat foreign influence operations, many of which had played out on their platforms. Since 2022, however, we have seen a rollback of a number of the election-related policies previously put in place, as well as industry-wide layoffs that have reportedly hollowed out many of the integrity and content moderation teams responsible for protecting elections. For example, Meta and Alphabet (the parent company of Google and, therefore, YouTube) initiated a series of layoffs, gutting key teams dedicated to platform integrity and combating the spread of false information. YouTube began allowing election denialism content to appear on the platform again, and Meta stopped enforcing its transparency rules around political advertisements.

Everyone:

  • Compared to 2020, are you allocating more or less of your budget to election-related efforts?
  • How many employees do you have on staff at this time assigned to the US market who are solely dedicated to monitoring and moderating foreign malign influence efforts on your platform?
    • How does this number compare to two years ago?
    • How will that number change in the period after the US presidential election, particularly during the post-election certification period?

Threats to Election Officials

The spread of false election information online has led to increasing threats, harassment, and intimidation of public servants — election officials who come from and serve our communities. As a result, 92% of local election officials surveyed by the Brennan Center say that they have taken action to increase security since 2020, ranging from additional cybersecurity training and protocols to changes in the physical security and layout of their offices. In December 2020, the home addresses and other personal details of Issue One’s Faces of Democracy members Brad Raffensperger, Gabe Sterling, Jordan Fuchs, and Jocelyn Benson, and National Council on Election Integrity member Kim Wyman were published on the Iranian-linked website “Enemies of the People.” Their photos were also posted, marked with superimposed crosshairs. The hit list was shared on social media using the hashtags #remembertheirfaces and #NoQuarterForTraitors. Election offices all over the country have held Narcan training and changed mail-opening procedures after offices in five different states received mail containing powdery substances, including fentanyl. Some election officials now feel it is necessary to wear bulletproof vests to work.

A majority of Americans express concern about social media and the 2024 election in a national poll fielded this month by Issue One (N=1,500; September 3-9, 2024). More than half of Americans (54 percent) are extremely or very concerned about foreign groups from places like Iran, China, and Russia using social media to influence the election. A similar proportion (54 percent) are extremely or very concerned about groups using social media to try to cast doubt on the results of the election, and a slightly larger majority (58 percent) are extremely or very concerned that groups will use social media to incite violence after the election. Finally, a majority of Americans are not confident that social media platforms will prevent the spread of false information.

Everyone:

  • How are your platforms handling threats or harassment targeting election workers and candidates, particularly in more polarized election environments? What improvements have you made to your ability to detect and remove this type of harmful content?

Meta & Google:

  • How do your platforms manage content that encourages violence targeting election workers or voters? Are there specific measures taken during election periods to enhance the platform’s detection of such violent or graphic content?
  • Election officials across both parties (as well as nonpartisan actors) have reported a sharp increase in threats, intimidation, harassment, and violence against them and their families. Multiple election officials, including a Secretary of State in a critical swing state, have reported an inability to communicate with leadership teams at Meta and Google about online threats or false information, even in the event of an emergency. How can election officials engage with [Meta/Google] to address emerging threats and foreign influence efforts that they’re seeing on the ground?
  • Do you have a plan in place for when election officials are targeted and/or harassed on your platforms? Have you established lines of communication with all secretary of state offices and do you have a plan in place to communicate with them when they are under threat?

Researcher Access & Transparency

In advance of the 2022 midterm elections, Meta detailed its plans to help combat election and voter interference. Among its listed measures were improving researcher access and increasing transparency about political advertising. Despite these promises, Meta chose to shut down CrowdTangle, a critical researcher access tool, in the middle of a historic election year and just months before the US election. Its successor, the Meta Content Library, is far from an adequate replacement; it lacks much of the functionality that made CrowdTangle effective and has been closed off to many researchers who previously had access.

Meta:

  • Ahead of the 2020 election, Meta promised to label state-controlled media on Facebook and Instagram, as well as in Meta’s Ad Library. However, research from the Center for Countering Digital Hate in 2022 found that 91% of posts containing content from Russian state media about Ukraine were not covered by this policy and did not display any labels.
    • Why should Americans trust that Meta will effectively label state-controlled media in 2024?
    • What steps is Meta taking to ensure that it is properly labeling most or all content from foreign state-backed accounts?
  • Meta recently ended the use of CrowdTangle, a tool that previously enhanced transparency around content. Why did Meta choose to shut down CrowdTangle now, during this critical pre-election period?
  • How will Meta improve the functionality of the Content Library to ensure that this tool has the same capacity that CrowdTangle did?
  • What percentage of researchers who have been approved for access to Meta’s data have actually been granted access? How will Meta ensure that all approved researchers are provided access swiftly and without delay?
  • TikTok has begun releasing daily updates on content removals and user violations. With TikTok now rivaling Facebook and Instagram in global reach, what limitations prevent Meta from doing the same? Why hasn’t Meta prioritized similar transparency practices?

Protecting Vulnerable and Non-English Speaking Communities

Foreign interference in US elections has increasingly targeted marginalized communities, particularly Black and Hispanic voters, with disinformation campaigns and voter suppression tactics. According to a Senate Intelligence Committee report, Russian operatives specifically focused on "racial issues and polling locations" to discourage minority turnout in 2016. More recently, the Department of Justice indicted several Russian nationals for operating a covert propaganda campaign targeting US audiences, including minority communities.

This year, Free Press conducted a public opinion survey with BSP Research and the African American Research Collaborative, finding that daily Spanish speakers spend more time online and more time using social media, yet nearly half (47 percent) report that they encounter stories they believe are misinformation “very often” or “some of the time.” As we approach the 2024 elections, it is imperative to understand how tech companies are working to protect vulnerable communities from targeted foreign interference.

Everyone:

  • How many employees do you have on staff at this time who are assigned to the US market and solely dedicated to monitoring and moderating foreign malign interference efforts in non-English languages?
  • Can you provide an estimate for the number of full-time moderators you employ in each major non-English language in the US? (Ask specifically about Spanish, Chinese, and Tagalog.)
  • When you rapidly disseminate critical election security information, how will you ensure it reaches non-English speaking and digitally underserved communities?
  • How are you working to prevent the exploitation of your platforms for voter suppression tactics that often target communities of color, non-English speaking communities, and other marginalized groups?

Cybersecurity

From the Russian hack of the Clinton campaign in 2016 to the Iranian targeting of the Trump campaign this year, adversaries have repeatedly demonstrated their willingness to exploit vulnerabilities for political gain during election cycles. These attacks extend beyond individuals and campaigns to state and local election offices, which have been repeatedly targeted by foreign actors seeking to sow distrust of the electoral process.

Everyone:

  • What specific protocols do you have in place to notify campaigns about targeted foreign hacking attempts in real-time, especially smaller or local campaigns with fewer resources?
  • How are you addressing potential disparities in cybersecurity protection for smaller, local campaigns that may lack the resources of larger operations?

Google:

  • Since distributing 100,000 Titan Security Keys, what feedback or results have you received from campaigns and election workers on their effectiveness in preventing cyberattacks? How do you plan to scale this program in the lead-up to the election?
  • Google has expanded its Advanced Protection Program and its partnership with Defending Digital Campaigns. Can you provide an update on how many campaigns have benefited from these security features? What additional steps are being taken to protect high-risk individuals during the 2024 election?
  • How are Google’s Threat Analysis Group (TAG) and Mandiant Intelligence coordinating with campaigns and government officials to address real-time threats such as cyber espionage and coordinated influence operations? Can you share examples of successful interventions from these teams?

Microsoft:

  • A recent report by the Cyber Safety Review Board criticized Microsoft for "a cascade of errors" that allowed state-backed Chinese cyber operators to breach the email accounts of senior US officials, including Commerce Secretary Gina Raimondo. The board highlighted preventable errors and an inadequate security culture within Microsoft. What concrete steps has Microsoft taken since this incident to overhaul its security practices and ensure that such failures do not occur again?
  • The Cyber Safety Review Board also criticized Microsoft for a lack of transparency and a slow response in acknowledging the breach, which affected multiple US agencies. How is Microsoft improving its communication and transparency practices regarding cybersecurity incidents, particularly when they involve high-profile targets or national security concerns? How will Microsoft ensure timely and accurate information is shared with stakeholders in the future?

Artificial Intelligence

Given the recent advancement of accessible artificial intelligence tools that can rapidly generate and manipulate content, 2024 has been labeled by many as the first “AI election.” Generative AI tools can be used to systematically create hyperrealistic content (“deepfakes”), giving foreign adversaries an array of comparatively low-cost tools with which to further sow divisions among the electorate and destabilize domestic politics. Identifying and labeling misleading content produced by generative AI will be a central challenge for platforms in 2024 — many of which will also be deploying their own algorithmic systems to recommend accurate content and to detect and remove deceptive content.

Meta:

  • Meta has historically struggled to identify and remove deepfakes on its platforms. How have your organizational processes, tools, and teams adapted in response to these challenges? What tools are currently being deployed to detect and remove deepfakes?
  • How does Meta currently distinguish between foreign and non-foreign actors on its platform? Is AI being utilized to make these distinctions, and if so, what is the accuracy rate of these AI predictions?

Microsoft:

  • Microsoft has launched the Content Credentials as a Service tool to allow campaigns to authenticate and digitally sign media using watermarking credentials. Can you provide an update on the deployment of this service? How many campaigns have begun using it?
  • What measurable impact has the Content Credentials service had in terms of protecting campaigns against tampered or misleading content? Are there any notable examples where this tool has prevented misinformation or deepfakes from spreading?
  • How is Microsoft planning to expand the availability of Content Credentials beyond political campaigns, and what steps are being taken to ensure its adoption by a broader range of users?
  • What strategies is the Campaign Success Team using to combat AI-driven cyber influence campaigns? Have these strategies been tested in any election contexts, and if so, what were the outcomes?
  • Microsoft has endorsed the bipartisan Protect Elections from Deceptive AI Act. Beyond this endorsement, what specific actions is Microsoft taking to advocate for the bill’s passage, and how is the company supporting its implementation?

Adobe:

  • Adobe emphasizes the importance of building an end-to-end chain of content authenticity to help people distinguish fact from fiction online. How effective have Adobe’s Content Authenticity Initiative and the C2PA standard been in preventing the spread of misinformation and deepfakes? Are there any measurable outcomes or case studies demonstrating the success of these solutions? Is there any market demand for these standards?
  • Adobe has signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections and has committed to implementing technological solutions like provenance, watermarking, and classifiers for AI-generated content. Can you provide an update on how Adobe is progressing with these initiatives? Are these tools currently being implemented in real-world scenarios, and if so, what impact have they had?
  • The Tech Accord also commits to fostering public awareness about the risks of deepfakes through education campaigns. What specific steps has Adobe taken to increase media literacy and ensure the public knows how to identify and protect themselves from manipulated content? Have these campaigns reached their intended audiences effectively?

Google:

  • With the introduction of AI tools like Gemini (formerly Bard) and the generative tools in Search, what testing and safeguards have you implemented to minimize the risks of misinformation and cybersecurity vulnerabilities in election-related queries?
  • Google's SynthID watermarking tool is in beta. How widely has this tool been deployed, and have you seen measurable impacts in limiting the spread of AI-manipulated media through your platforms?
  • YouTube is requiring creators to disclose synthetic content. How do you plan to verify these disclosures, and what enforcement actions are in place for creators who fail to properly label their AI-generated content?
  • YouTube has committed to removing manipulated content that misleads users, such as deepfakes of political candidates. What tools are in place to detect AI-manipulated media, and how effective have these tools been in real-time during election seasons?
