
Imagining Solutions at the Intersection of Elections, Race, and Disinformation

Anuradha Herur, Lilly Min-Chen Lee / Nov 7, 2022

Anuradha (Anu) Herur has a MALD from the Fletcher School at Tufts University in tech policy and human security, and currently holds a dual role at Digital Planet and the Hitachi Center for Technology and International Affairs. Lilly Min-Chen Lee is an M.A. candidate at the Fletcher School at Tufts University.

This essay is part of a series on Race, Ethnicity, Technology and Elections supported by the Jones Family Foundation. Views expressed here are those of the authors.

Brunswick, Georgia, USA - December 31, 2020. Michael Scott Milner/Shutterstock

The Brennan Center, a non-partisan law and policy institute, tracks new voting legislation passed across the country every year. Between January 1 and May 4 of this year, six states (Alabama, Arizona, Florida, Georgia, Kentucky, and Oklahoma) passed nine election interference laws, while two states (Arizona and Mississippi) also enacted laws that restrict access to the vote. In total, 27 states have introduced 148 election interference bills, and almost 400 restrictive voting bills have been introduced in 39 state legislatures just this year. These laws are motivated in part by false voter fraud claims and narratives of Black and Brown criminality. Coupled with the fact that voters of color have been singled out in election disinformation campaigns, these phenomena are bound to have a profound effect not only on democracy, but also on the civil rights of minorities in the United States.

Disinformation is not a new issue, but given the exponential increase in election disinformation in the United States, it is more important now than ever to find ways of combating and curbing it. The speed of online communication (both the ease of making content and the velocity of its viral spread) and the anonymity offered by the internet have provided fertile ground for the proliferation of online disinformation and for increased polarization. A 2019 Pew study found that a growing number of Americans prefer to get their news online, through websites, apps, or social media. The same study found that more Americans now get their news from social media than from print newspapers, with Facebook continuing to dominate as the most common social media platform used for news.

In this environment, many Black and Latino voters have found their social media pages flooded with election disinformation, including memes encouraging them to refrain from voting. The Senate Intelligence Committee report on Russian interference in the 2016 elections found that the Kremlin-backed Internet Research Agency specifically targeted African Americans in its disinformation campaigns, and a Channel 4 investigation found that the Trump campaign used a tactic called “deterrence” to disproportionately discourage Black voters in swing states like Florida from voting. Simply put, this kind of targeted disinformation is not exclusively a “foreign interference” problem. Concentrating on election disinformation from foreign actors is therefore only one part of the solution.

Voter laws in the United States already disproportionately target people of color. Consider voter ID laws: while no two states have the same voter ID laws, those that have them often put voters of color at a disadvantage, since racial and ethnic minorities in the United States (including Native Americans) are less likely than the white population to have valid identification. A 2019 study found that strict voter ID laws, as implemented by states like Alabama and Virginia, significantly affect the minority (non-White) populations in these regions, substantially altering the racial makeup of the voting population. In fact, even after such laws were repealed, as in North Carolina, their deterrent effects persisted: affected voters, convinced that they would be unable to vote, never showed up at all.

In the 2020 presidential election, a greater number of Black voters than white voters voted by mail, yet data from Georgia’s 2020 primaries showed that nonwhite voters’ mail ballots were rejected at much higher rates than white voters’ mail ballots in that state. In light of this, it does not bode well that states like Georgia have begun to introduce and enact laws restricting access to voting by mail. Further, House Bill 531, which has already passed the Georgia House, proposes to end early in-person voting on Sundays leading up to the election. Black voters have historically accounted for a higher proportion of early in-person voting on Sundays, owing to “Souls to Polls” drives organized by Black churches. Using both election administrative records and smartphone data, researchers have found that minority voters, especially Black voters, spent disproportionately longer waiting in line to vote. The added burden of COVID meant that in the 2020 elections especially, polling place closures and consolidations disproportionately affected voters of color, largely because of transportation barriers.

While these laws represent only a fraction of the many laws in the United States that make it difficult for minorities to vote, we believe they paint a clear picture: voting laws in the US disproportionately affect voters of color. Additionally, social media algorithms and microtargeting, which rely on machine learning, are disproportionately discriminatory towards minorities, making them all the more easily targeted by purveyors of online disinformation. This combination of discriminatory voting laws and discriminatory microtargeting, which involves collecting demographic or behavioral data to predict how users will behave, is of great concern. Because microtargeting optimizes for clicks and for the content users are most likely to engage with, these algorithms reinforce voters’ confirmation biases and allow disinformation to spread. Historically, campaigns employing this type of microtargeting have involved a variety of disinformation, ranging from bad actors posting incorrect voting dates and polling locations, to threats of law enforcement and people with guns at polling places, to posts and messages exploiting minority populations’ existing doubts about the efficacy of the voting process.

The good news is that election disinformation is not an unfixable problem. Through a combination of education, advocacy, and policy-based solutions, it is possible to tackle this growing problem. Grassroots advocacy organizations were instrumental in campaigning and encouraging people to vote during the 2020 elections. A prime example comes from Georgia, where voter turnout increased dramatically during the federal runoff elections. State government data shows that a historic 91% of Georgia’s general election voters turned out in 2020, driven in large part by grassroots campaigning encouraging people to show up. In fact, this kind of organizing was so effective that it helped flip a formerly red (Republican) state blue (Democratic). We believe that grassroots advocacy and campaigning can also help educate and inform voters about voting- and election-related disinformation, and about how it can influence policy decisions that affect them in the future.

Countries like Australia have tried to address voter turnout issues by instituting mandatory voting. Doing so might ensure that more people turn up to the polls, and might preempt discriminatory voting laws entirely. However, voting is a right and a privilege, and we believe that making it a compulsory duty is not the solution in the context of the United States. In fact, it could be considered a violation of the freedom of speech and free choice rights afforded by the First Amendment. Such an argument would be in line with the case law laid down in West Virginia State Board of Education v. Barnette, where the Supreme Court recognized an individual’s right not to be compelled to express an idea with which they disagree. While we agree that voting is a civic duty for all citizens, we do not believe the government should pass laws forcing people to choose among candidates when they agree with none of the candidates’ policies. In light of this, solutions to discriminatory election disinformation can only start with grassroots advocacy, where voters are educated about the policy issues and candidates they are voting on.

These efforts may take the form of internet and media literacy classes and fact-checking workshops that help educate the community. Existing civic engagement campaigns already involve informing voters about dates, times, and candidates. As part of these efforts, civil society organizations are already working with smaller groups on the ground, like church groups, that understand the nuances of their particular communities. Partnering with these organizations can thus help combat disinformation targeted at Black Americans and other people of color in these communities.

Further, with the advent of virtual reality technology, we propose that technology companies like Meta collaborate with advocacy organizations to create some form of virtual classroom directed at combating the growing disinformation problem. We recognize that in many rural areas, access to the internet is a problem in itself, let alone access to virtual reality headsets and related technology. Given that disinformation is transmitted on social media sites and search engines owned, operated, and curated by big tech firms such as Meta and Google, it is incumbent upon these companies to be more transparent about their design decisions and to provide resources to help combat the problem. With both print and broadcast media, we have seen regulatory norms governing content, including political advertisements, enforced to ensure that media ownership does not disproportionately influence electoral outcomes.

Education and advocacy solutions to the election disinformation problem can work only if they are backed up by rigorous policies and governance. HR 1 is, for the time being, the most comprehensive proposed legislation advocating for election reform. It is so comprehensive because it incorporates aspects of previously introduced bills that aim to criminalize some of the most egregious forms of election disinformation and other fraudulent practices that are currently not penalized under federal law. HR 1 has already passed the House (along party lines) and now sits in the Senate. The point of criminalizing voter suppression is not limited to holding platforms liable for election misinformation; it would also help deter individuals from participating in election misinformation campaigns that lead to voter suppression. Further, no current law requires platforms to provide data to a federal authority to investigate claims of election misinformation; with legislation requiring it, the federal government could request relevant data from platforms, and platforms would be bound by law to comply. We believe federal legislation would also encourage platforms to improve their community guidelines, making them more stringent when it comes to self-regulation. And because disinformation is not limited to platforms, tech companies like Google, whose primary product is a search engine, would be encouraged to alter their algorithms so that people are not directed to false or misleading information about elections.

Further, since Section 230 does not provide digital platforms a viable defense against federal criminal liability, there is no need to amend or repeal Section 230 if HR 1 is enacted. Rather, we argue that federal law should be expanded to include safeguards against the most egregious forms of election disinformation. Laws already exist in most, if not all, US states that make certain exceptions to free speech by restricting political activities near polling places; HR 1 would introduce similar prohibitions. Simply put, we are not arguing for the introduction of new communications laws. Rather, we argue that election laws in the US should be extended to the communications space. Critics of HR 1 believe it could be considered unconstitutional, with some provisions potentially violating the freedom of speech guaranteed under the First Amendment. These concerns must be addressed.

It is equally important to embed digital literacy and education into policy. It is not enough to enact laws that criminalize the most egregious forms of election disinformation; digital literacy should also be included as part of the legislation, either by amending the provisions of the existing legislation or by introducing a new bill to complement it. We argue that election disinformation is a problem that can only be solved through collaborative effort from all parties: the government, tech companies, and, most importantly, the people who are vulnerable to such disinformation. Including digital literacy as part of the law would help ensure that citizens exposed to disinformation can protect themselves in the instances that slip through the cracks.

Finally, we also believe that a new, reconsidered iteration of the types of activities envisioned for the Department of Homeland Security’s Disinformation Governance Board should be developed. However, it is important to counter disinformation without adding to questions about the legitimacy of public discourse. While the idea behind the Disinformation Governance Board was sound, it failed in part because of communication issues, which led to unfounded accusations that it had partisan goals. Part of the issue was that it was situated within the Department of Homeland Security, which fueled suspicions that it might interfere in domestic affairs. A reimagined version of the Disinformation Governance Board could help set industry standards and oversee tech companies’ self-regulatory guidelines, ensuring both that they comply with existing law and that the law keeps pace with emerging technology. Such an entity would need to be developed carefully, with a governance structure that ensures it is nonpartisan and resilient to changes in leadership and government. It needs to be adaptive and fair, keeping in mind that its purpose is not to be an arbiter of truth, but rather a guardian of the information ecosystem. If carefully designed, the entity could do a great deal to fight disinformation, both foreign and domestic, driving interagency cooperation and designing interventions in line with the First Amendment protections offered by the U.S. Constitution.

We can also learn from the examples of other countries which have already begun efforts to combat disinformation with some success. For instance, Poland, Taiwan, and Ukraine have advanced significant efforts to combat the disinformation problem. There is no single successful template to regulate election disinformation, because the local contexts and general laws surrounding free speech, democracy, elections, and voting vary across nations. Taiwan in particular has managed to leverage its civil society and non-governmental organizations to combat disinformation from China that seeks to disrupt its democracy. Organizations like the Taiwan Fact-Check Center and the Taiwan Media Watch Foundation focus on verification of facts and hold workshops to alert the public to the issue of disinformation, while organizations like Cofacts and MyGoPen have been working with platforms like Facebook and LINE since 2019 to identify, verify, and downrank dubious posts. Organizations like the g0v Movement and the Taiwan Association for Human Rights have also been undertaking a variety of initiatives to bring attention to disinformation in the country.

Similarly, civil society groups in Ukraine and Poland have been working with each other, and with platforms, to debunk disinformation. Organizations like Ukraine-2050 and the Demagog Association have not only implemented cross-platform collaborations and media literacy efforts through fact-checking and debunking disinformation, but have also held workshops to empower and educate the public. Ukraine’s Centre for Strategic Communication, part of its Ministry of Culture and Information Policy, has been at the forefront of combating disinformation in the country, especially in light of the recent Russian invasion. One reason for its success is that, alongside countering disinformation campaigns, the Centre has also run information campaigns to keep citizens informed of the truth, built resilience among the people, used a public platform to discuss problems and develop solutions, and encouraged collaboration between the state and civil society.

While countries like Ukraine and Taiwan benefit from more homogeneous populations than the United States, learning from their example and adapting these solutions to the cultural context of the US can help us combat this growing problem. Voting in the US has always been harder for people of color and minorities, both because of laws that make it difficult for them to vote and serve to reinforce white supremacy, and because of campaigns that actively disseminate disinformation to prevent or discourage them from voting. These problems are not mutually exclusive, and election reform can only happen in its entirety if we address the rising disinformation problem through collaborative solutions of advocacy, backed up by rigorous policy.
