Diasporas, Disinformation, Disenfranchisement: Addressing Non-English Election Disinformation
Samantha Lai, Jenny Tang / Oct 24, 2022

Samantha Lai is a research analyst at the Carnegie Endowment for International Peace’s Partnership for Countering Influence Operations (PCIO) project within the Technology and International Affairs Program. Jenny Tang is a PhD student in Societal Computing at Carnegie Mellon University and affiliated with CyLab, the CMU Security and Privacy Institute.
This essay is part of a series on Race, Ethnicity, Technology and Elections supported by the Jones Family Foundation. Views expressed here are those of the authors.
As the U.S. midterm elections approach and the 2024 presidential campaign cycle looms, the problem of election mis- and disinformation in non-English languages is of growing concern. And yet, there are substantial gaps in efforts to mitigate the election disinformation affecting diaspora communities and historically marginalized groups, which are an important part of the electorate.
We assess that too few resources are dedicated to studying disinformation in non-English languages and on non-Western platforms. We identify three problems with existing disinformation work at the intersection of ethnicity, technology, and elections:
1) Despite increasing efforts, mainstream platforms fail to adequately address the non-English election disinformation that proliferates on their services.
2) Current disinformation studies center on Western social media companies such as Twitter and Facebook. However, there is little study or oversight of disinformation on non-Western platforms, which communities of color and immigrant communities often use to communicate.
3) Existing research fails to account for contextual concerns of communities of color, often legitimate ones shaped by years of oppression and colonization.
In turn, interventions must be improved, with the acknowledgement that there will be no one-size-fits-all solution across diverse communities.
A number of measures might improve matters by bridging gaps between academia, civil society, the private sector, and government to create collaborative solutions that address multilingual electoral disinformation. By exploring the dynamics of non-English disinformation and efforts to mitigate it, we seek to better serve the historically underserved communities most adversely affected by disinformation and voter suppression efforts.
Blindspots Introduce Vulnerabilities
Over the years, political pressure and public attention on tech companies’ content moderation efforts have grown. However, as most domestic political pressure concerns English-language disinformation, non-English content continues to proliferate with few safeguards in place. For instance, last year Facebook whistleblower Frances Haugen revealed that Facebook dedicated 87% of its misinformation-countering spending to English-language content. Twitter whistleblower Peiter “Mudge” Zatko voiced similar concerns, pointing to the platform’s lack of adequate measures to combat disinformation in non-English languages. These problems also extend to non-Western and non-mainstream social media platforms, creating an information ecosystem that leaves communities of color at elevated risk of exposure to election-related disinformation.
Not only do platforms have a bias towards the English language; those charged with investigating the platforms also have a bias towards certain firms. While members of Congress have lately shifted their attention to the role social media algorithms play in spreading disinformation, their hearings have remained focused primarily on larger platforms such as Facebook and Twitter. A similar trend has been observed in academia. A study from the Carnegie Endowment for International Peace found that of twenty-one empirical research pieces on the role of social media in spreading disinformation, fourteen focused only on Facebook and Twitter. YouTube and Instagram receive significantly less attention, while non-Western platforms such as WeChat and Weibo are also understudied.
These gaps create a deficit in understanding how election disinformation affects communities of color and diaspora communities in the U.S. that frequently interact with non-English content and use platforms beyond Facebook and Twitter. U.S. Census data indicates that as of 2016, 15% of the adult population speaks a language other than English at home. Data from 2020 shows that the China-owned platform WeChat has around 19 million daily active users in the U.S. Significant diaspora populations also congregate on messaging apps such as Line, KakaoTalk, and a range of other applications and channels that may have less rigorous moderation and receive less scrutiny. Then there is the landscape of non-English radio and television programs, which have also played a part in the corruption of the information ecosystem over the years. Cumulatively, this creates a much larger and more complex ecosystem than what is usually included in the conversation on election disinformation.
There are also cultural and social factors that shape how communities of color are uniquely affected by disinformation in the existing electoral system. First, long-standing social inequities and a long history of oppression and marginalization have fueled uncertainty and distrust of major institutions among these groups. Such uncertainty undermines communities’ ability to discern disinformation, as they lack a credible authority to rely on for the truth. Second, ethnic groups are not a monolith, and there are often significant variations in political backgrounds and perceptions of American policies. A myriad of nuances determine what makes different pieces of disinformation uniquely compelling to separate groups, many of which go unexamined. Third, many of these groups communicate in languages other than English on social media platforms, and also operate on channels subject to less rigorous moderation and scrutiny. Wherever content moderation and other mitigation efforts fail to account for these factors, electoral disinformation proliferates.
In this essay, we examine how these factors affect Asian American and Latinx communities in the United States, both of which have a significant presence on non-English and non-mainstream social media platforms. In identifying the shortcomings of existing approaches, we explore how these groups have been affected by past disinformation efforts, and what will be needed to rectify these wrongs in the upcoming 2022 midterm elections and beyond.
1. Disinformation and Asian Americans
Historical oppression and disenfranchisement
Asian Americans have faced a long history of disenfranchisement from the ballot box. Legislation such as the Naturalization Act of 1790 and the Chinese Exclusion Act of 1882 prevented immigrants of Asian descent from acquiring citizenship, stripping them of the ability to vote. Even after the passage of the Voting Rights Act of 1965, barriers to voting persist to the present day—from gerrymandering to discriminatory voter ID laws to a lack of language accessibility at polling stations—making it difficult for voters of color to access the ballot box.
Voter turnout rates for Asian Americans often lag behind those of other ethnic groups, with a turnout rate of 47% during the 2016 U.S. presidential election, compared to 66% for Black voters and 64% for non-Hispanic white voters. This can be traced in part to a lack of outreach from politicians. While 53% of Americans reported being contacted by candidates or parties in 2012, the figure was only 31% for Asian Americans. Collectively, these systemic factors have created a sense of disconnect between many Asian American communities and the American electoral system at large. That disconnect has created a communication gap that leaves these communities vulnerable to electoral disinformation, particularly narratives about unreliable mail-in ballots, danger or law enforcement presence at the polls, and whether votes could be changed or compromised after being cast.
Racial groups are not a monolith
The term “Asian American”—defined by the census as “Chinese, Indian, Filipino, Vietnamese, Indonesian, Korean, Japanese, Pakistani, Malaysian, and more”—encompasses a diverse set of demographics, and national identity often supersedes the label “Asian” for the individuals lumped under this broad term. There is great linguistic diversity across these groups, which disinformation specialists and non-profit organizations have flagged as a complicating factor that makes disinformation difficult to track. This is further complicated by the varying political relationships of these countries of origin, which add further layers to how each group identifies with narratives on American foreign policy. Nguyen, Kuo, and colleagues highlight these distinctions in studying the 2020 U.S. presidential elections. In their piece in Harvard’s Misinformation Review, they detail how Vietnamese-American communities were compelled by misinformation depicting President Joe Biden and Vice President Kamala Harris as communists, which appealed to the older generation’s fears of a communist government. For Taiwanese-Americans, meanwhile, conversations on China-Taiwan relations received a great deal of attention. Similarly, Asian American groups whose countries of origin have contentious relationships with China, such as Taiwan, Cambodia, Vietnam, and Laos, were more susceptible to misinformation about Hunter Biden’s relationship with China. These contexts influence the ways people with different backgrounds are compelled or affected by disinformation surrounding the electoral system and various candidates.
Platforms and language matter
Roughly two-thirds of Asian Americans, often first-generation immigrants, have limited English proficiency. As a result, many Asian American internet users primarily consume ethnic media and engage with social media platforms other than Facebook and Twitter. Nearly one in six Asian Americans discusses politics on social media sites such as WeChat, WhatsApp, KakaoTalk and Line. Platforms of choice depend on countries of origin: Chinese Americans from the mainland tend to use WeChat and Weibo; WhatsApp is popular among those from Hong Kong and India; KakaoTalk is popular primarily among South Koreans; and Line is popular among those from Japan and Taiwan.
Information ecosystems and the expression of narratives vary across platforms, as do challenges in content moderation. For example, WeChat’s parent company Tencent is based in China, where norms for content moderation are established by the Cyberspace Administration of China. Line, meanwhile, is a consolidated subsidiary of the South Korean company Naver and the Japanese SoftBank Group. Because these platforms are based overseas, questions arise over what role U.S. or European law or norms should play in moderating their content, what international governance norms should be, and what role foreign electoral interference plays in domestic political discourse.
Most of the aforementioned platforms, such as WhatsApp and Line, are also encrypted messaging apps. Instead of posting on a public platform, users circulate information across end-to-end encrypted chat groups. This means that neither the platform itself nor independent auditors have the capacity to monitor, attribute, take down, or prevent the proliferation of misinformation passed on from person to person. It also makes related tasks, such as collecting datasets and conducting effective analyses of disinformation on these platforms, difficult. As a result, the majority of current academic work is conducted on publicly available data from platforms such as Twitter or Reddit, in part because of the relative ease of data collection and the ethical considerations it raises.
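To make this constraint concrete, here is a minimal sketch of end-to-end encryption, assuming the simple symmetric Fernet scheme from Python’s `cryptography` package as a stand-in for a real messaging protocol (apps like WhatsApp use the far more elaborate Signal protocol), with a hypothetical forwarded rumor as the message. The point it illustrates is that the relaying platform only ever handles opaque ciphertext:

```python
# Minimal sketch: why end-to-end encrypted chats resist outside moderation.
# Fernet (symmetric encryption) stands in for a real messaging protocol;
# the message text is a hypothetical forwarded rumor.
from cryptography.fernet import Fernet

# In an end-to-end encrypted group chat, only the members hold the key.
group_key = Fernet.generate_key()
sender = Fernet(group_key)

message = "Polling place hours have changed, do not go after 5pm!"
ciphertext = sender.encrypt(message.encode("utf-8"))

# The platform relays only ciphertext: it cannot read, classify, or label it,
# and neither can an independent auditor or researcher.
print(ciphertext[:40])

# Only a recipient who is in the group (and holds the key) can read it.
recipient = Fernet(group_key)
print(recipient.decrypt(ciphertext).decode("utf-8"))
```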
Issues with non-English disinformation also extend beyond these social media platforms, encompassing an extensive network of non-English newspapers, television and radio programs, and more. During the 2020 U.S. presidential election cycle, The Epoch Times promoted disinformation about both the election and the COVID pandemic to Chinese-American audiences through Facebook ads, print newspapers, its television network New Tang Dynasty (NTD), and its affiliated radio network Sound of Hope. Across these platforms, the outlet produced documentaries, articles, and interviews on QAnon, the deep state, electoral fraud, and other disinformation narratives. In 2017, the newspaper brought in $8.1 million in revenue, and the television station $18 million. Such profits indicate a wide base of viewership and support, and show that online platforms are not the only channels disseminating disinformation narratives.
2. Disinformation and Latinx Communities
Historical oppression and disenfranchisement
Latinx communities have also struggled with a long history of disenfranchisement and finding a place in American democracy. In 1909, an Arizona law required voters to pass an English literacy test, with the explicitly stated intention of blocking the “ignorant Mexican vote”. In 1923, Texas implemented White-only primaries to bar both Black and Latinx populations from participating in elections.
These injustices extend into the modern age. Notably, while residents of Puerto Rico are U.S. citizens, they remain unable to cast a meaningful vote for key offices such as President under the current system. Supreme Court decisions that weakened voting rights protections, such as Shelby County v. Holder and Brnovich v. DNC, have enabled the creation of discriminatory voting policies that disproportionately hurt Latinx communities. These measures include the shortening of early voting periods, the closing of polling places, aggressive voter roll purges, and more. Such policies are further enabled by Republican narratives of voter fraud by undocumented immigrants, which in turn heighten voters’ fears of harassment or violence at the ballot box.
The alienation of Latinx voters is reflected in low turnout rates. During the 2016 U.S. presidential election, Hispanic voters had the lowest turnout among all racial groups at 47.6%, compared to 65% for white voters. The 2020 presidential election, however, saw a dramatic uptick in Hispanic voter participation, with turnout growing by nearly 30%. Ensuring a healthy, participatory voting environment can sustain these trends, and closer study of how disinformation threatens it can shed light on further steps.
Racial groups are not a monolith
Similar to the term “Asian American,” the label “Latinx” in America covers a diverse set of nationalities, with people from over 15 countries, including Mexico, Colombia, Cuba, Brazil and beyond. These varying nationalities shape the political leanings of different groups and the types of narratives they find compelling. For example, voters from countries such as Cuba and Venezuela—which have complex relationships with communism—have been more susceptible to narratives claiming Biden is partial to Castro-style socialism.
There is a common assumption that, as immigrants or descendants of immigrants, Latinx voters in the U.S. universally favor more lenient immigration policies. A New York Times article on the appeal of right-wing immigration policies to Latinx voters challenges this claim, pointing to the 8-point swing Trump enjoyed among Latinx voters in the 2020 elections and the gains his campaign made in areas such as Texas’s Rio Grande Valley, which has a significant Latinx population. Interviewees in the piece found Trump’s calls for harsher immigration policies compelling, arguing that their parents had come in the “right way” while others had not. In many Latinx communities near the border, some residents are well-acquainted with border patrol agents, or are part of those enforcement agencies themselves. Failure to account for these nuances undercuts efforts to understand why specific disinformation narratives are compelling to certain groups within the community.
Platforms and language matter
The underperformance of Democrats among Latinx voters could be attributed to a lack of understanding and strategy in appealing to this voter base. Yet it is also undeniable that Latinx voters face an unhealthy information environment rife with disinformation. During the 2020 election cycle, those tracking Spanish-language disinformation noted that social media ecosystems were awash in rumors of electoral fraud, QAnon conspiracies, and fearmongering attacks on the LGBTQ+ community, aimed in particular at transgender people and transgender children. Much of the existing landscape promotes right-wing conspiracies, with little coverage or counter-messaging from the left.
Meanwhile, mainstream platforms have fallen behind in moderation, failing to apply the same rigor to Spanish-language disinformation as to English-language disinformation. The widespread use among Latinx Americans of encrypted messaging apps such as WhatsApp and Telegram further complicates efforts to curb the spread of disinformation. Spanish-language political news influencers have also played a key role in spreading political disinformation, for example by claiming that Democrats were going to have Cuban Americans storm the U.S. border to disrupt the elections. Often, these individuals begin by promoting their beliefs on mainstream platforms such as YouTube and Facebook before moving the discussion to private WhatsApp and Telegram groups.
The spread of such content is by no means limited to social media. During the 2020 election cycle, Spanish-language radio stations such as Radio Mambí and Actualidad Radio also promoted false narratives that the election had been stolen, or that Biden was a pedophile. In this fluid information environment, narratives flow across platforms and mediums: content aired over the radio makes its way onto social media, and vice versa.
Proposed Solutions
All of these problems are formidable, but not insurmountable. In the following section, we examine the limitations faced by (1) academia, (2) civil society, (3) technology companies, and (4) governments. In evaluating these actors’ existing efforts to combat non-English misinformation, we propose further actions that could bolster them.
In the process of such evaluation, it is important to recognize that efforts and incentives across these sectors are interlinked. The lack of research resources dedicated to non-English content and non-Western platforms has resulted in a lack of interventions from the platforms themselves. Because academics and civil society lack proprietary datasets and information on the efficacy of particular interventions, platforms remain unwilling to invest in unproven interventions, creating a self-fulfilling prophecy of inaction on diverse disinformation. Any comprehensive solution will therefore require extensive cross-sectoral collaboration; none of this can be done in isolation. Only with collective effort can we begin making headway on the problems we see today.
1. Academic Research
While a plethora of academic research on disinformation and content moderation exists, it has been relatively narrow in scope and overwhelmingly focused on Western platforms and English-language content. This overlooks significant portions of already underrepresented populations, as well as phenomena specific to those contexts. With a tendency to draw generalized conclusions about an “average user,” such work can miss the nuances of various subgroups. Nevertheless, academic researchers have the ability to collect large-scale, high-quality datasets that may not otherwise be available outside the platforms themselves. Academic work has also produced innovations in automated content moderation and other interventions that could greatly improve the disinformation space. For example, researchers have found ways to identify disinformation websites by their structural features, to flag possible disinformation using stance detection, and to automatically detect fake news.
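As a hedged illustration of what such detection systems build on, the sketch below trains a supervised text classifier with scikit-learn; the handful of training examples and labels are invented toys, whereas published systems train far richer models on large labeled corpora:

```python
# Minimal sketch of supervised disinformation detection: TF-IDF features
# plus logistic regression. Training examples and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Officials confirm polls are open from 7am to 8pm on Election Day.",
    "County posts the official list of drop box locations for mail ballots.",
    "BREAKING: mail ballots are being secretly changed after submission!",
    "They will arrest anyone who shows up to vote without papers!",
]
train_labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = disinformation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Score an unseen claim. Real pipelines would add human review before acting.
print(model.predict_proba(["Your mail ballot will be thrown out after you vote"]))
```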
Still, it is necessary for academics to look beyond these traditional populations and domains. One step may be replication studies that test intervention efficacy beyond English-speaking populations, or beyond specific platforms. For example, recent work suggests that certain existing interventions (such as accuracy ‘nudging’ and crowdsourcing) piloted on Western audiences may be effective across cultural contexts and languages. Applying existing research frameworks in more diverse settings could thus yield interventions that work across contexts. Furthermore, researchers could be incentivized to focus on underrepresented communities, languages, and platforms, or to replicate existing results on diverse populations.
Linguistic diversity presents inherent challenges for disinformation and content moderation work, as much of the existing academic literature focuses on English-language content. Little attention has been dedicated to processing non-English text, and languages written in non-Latin scripts present additional barriers to automation. Furthermore, as English is the lingua franca of not only the Internet but also international academic research, the majority of large-scale data collected tends to be in English. Indeed, the lists of keywords or hashtags used to build research datasets are often in English. Even when data is collected in multiple languages, analysis is sometimes conducted only on the English-language portion, whether because of the constraints of researchers’ language familiarity or the expectations of the target academic audience. And even when translation is possible, a direct translation may not capture cultural context and nuance. We call on academics to develop more linguistically diverse models, and to involve and recruit researchers from different linguistic and cultural backgrounds to broaden the scope of high-quality data collection.
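The keyword problem is easy to make concrete. In the sketch below (every keyword and post is invented for illustration), an English-only seed list matches the English post but silently misses the same electoral-fraud narrative phrased in Spanish or Chinese; per-language lists, ideally curated with native-speaker review, close part of that gap:

```python
# Minimal sketch: English-only keyword lists miss non-English disinformation.
# All keywords and posts below are hypothetical illustrations.
english_keywords = {"stolen election", "voter fraud"}

# Per-language seed lists are one low-tech mitigation; direct translation
# alone can miss idiom and context, so native-speaker review matters.
multilingual_keywords = {
    "en": english_keywords,
    "es": {"elección robada", "fraude electoral"},
    "zh": {"窃选", "选举舞弊"},
}

posts = [
    ("en", "They are hiding the voter fraud evidence"),
    ("es", "La elección robada es real, compártelo"),
    ("zh", "大家注意,选举舞弊正在发生"),
]

for lang, text in posts:
    hits_english_only = any(k in text.lower() for k in english_keywords)
    hits_multilingual = any(k in text.lower() for k in multilingual_keywords.get(lang, set()))
    print(f"{lang}: english-only={hits_english_only}, multilingual={hits_multilingual}")
```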
Indeed, diverse viewpoints and cultural expertise would greatly benefit academic work on complex questions such as disinformation. The ivory tower of academic research tends to be relatively homogeneous in linguistic, socio-economic, and ethnic background. Women and people of color are under-represented in STEM fields such as computing, as well as in post-graduate education as a whole. As a result, those with the relevant linguistic or cultural expertise may not be at the table to provide crucial insights. This is not a fatal flaw, however: the diversity of researchers and viewpoints has been increasing in academia in recent years, in part due to increased funding for individuals and projects that promote diversity.
Continued government and industry funding for underrepresented viewpoints and novel contributions will foster diverse and groundbreaking research. Nevertheless, the academic community must work to retain that diversity and listen to the voices of under-represented populations on the topics of their expertise. We further encourage academics to collaborate with civil society and other organizations with relevant expertise to tackle these broad, cross-cutting problems.
2. Civil Society
Civil society actors have consistently been the fastest to sound the alarm and act against disinformation targeting communities of color. A myriad of initiatives have been started by individuals within these communities in hopes of tackling the disinformation proliferating in their social circles. One example is the Xīn Shēng Project (formerly known as the WeChat Project), started by second-generation Chinese-Americans during the 2020 elections to address misinformation they observed on WeChat. There have also been non-profit initiatives calling for further action, such as the Disinfo Defense League, “a distributed network of grassroots, community-based organizations that are building a collective defense against disinformation and surveillance campaigns that deliberately target [...] communities of color.” Research by non-profits has helped quantify existing problems, and advocates have developed action plans calling on the U.S. government and tech companies to improve their processes. Journalists have also played an integral role in documenting the narratives and key players that have promoted disinformation in the past.
At the end of the day, the actors best equipped to combat misinformation within communities of color will be individuals who are part of those communities, and organizations with a history of serving, caring for, and understanding them. These organizations have the standing to reach out and talk to affected individuals, and they approach the relevant problems and questions with contextual knowledge and expertise.
More needs to be done to facilitate the work of civil society actors seeking to combat disinformation in the communities they serve. Creating structures for collaboration between academia and civil society would allow academics to put the interventions and systems they develop into practice in communities that would benefit from them. In turn, civil society and community experts could bring their expertise to the relatively homogeneous work currently conducted in academic settings, while benefiting from the resources and at-scale data collection and analysis of academic research institutions.
Efforts by civil society actors are often limited by a lack of resources and sway. There should be more grants and programs encouraging initiatives that combat disinformation targeting communities of color. Meanwhile, technology companies should collaborate and engage further with civil society actors to better understand the disinformation landscape targeting these communities, and the types of content that should be flagged or taken down to prevent further spread.
In addressing the community spread of disinformation, it is also essential to equip community members themselves with the knowledge and skills to inoculate themselves and those around them against disinformation. Expanding existing internet and media literacy initiatives, particularly those targeting communities of color, could encourage more discourse and critical consumption of social and news media.
3. Industry and Tech Platforms
Platforms are uniquely positioned to intervene effectively against the dissemination of disinformation. Companies hold the most granular and wide-ranging data relevant to analyzing and tackling these problems. Unfortunately, much of this data is currently considered proprietary. In the past, companies have restricted access to and collection of this data, notably with Facebook’s disabling of accounts tied to New York University’s Ad Observatory in 2021, citing concerns over user privacy. Yet such concerns should not negate the benefits of expanded research access, and providing access with the necessary safeguards could deliver the best of both worlds. Recently, Twitter announced the launch of the Twitter Moderation Research Consortium, which will give selected researchers access to data enabling research on attempts to pollute the information environment. We encourage other platforms to share data and collaborate with researchers and civil society, in order to enable cross-cutting and innovative solutions and interventions.
Furthermore, effective interventions require people with knowledge and expertise in the specific area, whether linguistic, cultural, or contextual. In addition to working with civil society groups and academia, we call on platforms to specifically fund teams that include individuals with relevant expertise, particularly around events likely to attract high volumes of disinformation. This requires not only platforms’ executive boards but also their engineers to think beyond basic assumptions about their users: currently, the code and filters for content moderation are often far more developed for English than for other languages (the sketch below illustrates the resulting gap). Companies could also adopt guidelines stipulating that if a significant share of a platform’s users belongs to a given demographic, members of that group must be represented in decision-making that can adversely affect them. Such actions may also yield financial benefits by attracting a broader user base.
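Here is a minimal, hedged sketch of that asymmetry, assuming the open-source `langdetect` package for language identification and an invented, English-only rule set; the point is that posts in unsupported languages fall through unmoderated unless they are routed to reviewers with the relevant expertise:

```python
# Minimal sketch: moderation rules that exist only for English silently pass
# content in other languages. Rules and example posts are hypothetical;
# language identification uses the `langdetect` package (pip install langdetect).
from langdetect import detect

moderation_rules = {
    "en": ["ballots are being changed", "illegal to vote"],
    # No rules for "es", "zh", "vi", ... -- precisely the gap at issue.
}

def moderate(text: str) -> str:
    lang = detect(text)
    rules = moderation_rules.get(lang)
    if rules is None:
        # An English-centric system drops through here; a better one would
        # queue the post for reviewers with the relevant language expertise.
        return f"unmoderated ({lang}: no rules or reviewers available)"
    if any(phrase in text.lower() for phrase in rules):
        return "flagged for review"
    return "passed"

print(moderate("Your ballots are being changed after you vote!"))
print(moderate("¡Están cambiando las boletas después de votar!"))
```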
4. Government
For tech companies, the incentive to moderate content is often offset by the profit motive: disinformation can be lucrative, driving up engagement, and thus clicks and revenue. Government policies requiring increased transparency, or further investigative initiatives into how tech companies handle disinformation internally, could help map out what it will take to strengthen companies’ commitments to fighting disinformation.
For non-Western platforms or companies based elsewhere, there may be additional hurdles to content moderation and combating disinformation. To resolve these issues, governments could work to create an international framework of content standards for different platforms. This already exists in some capacity with the 2018 EU Code of Practice on Disinformation and the 2022 Strengthened Code of Practice on Disinformation, under which companies operating within the EU have agreed on standards and measures to tackle disinformation on their platforms. Such standards would create guidelines for the removal of disinformation that are not rooted in any single language or context, and would give platforms incentives to take greater care in aiding efforts against disinformation. Certainly, such a code could be hampered by limited enforcement power, or by refusals to cooperate from platforms based in countries with more complex relationships to the U.S. While this is no cure-all, initiatives like these have the capacity to improve the information ecosystem at large.
Conclusion
Non-English disinformation is a formidable and under-addressed problem. In this essay, we examined how such disinformation has affected Asian American and Latinx communities in the U.S., and proposed a framework to better understand the scale of the problem.
- First, it is important to recognize the role that the historical oppression of marginalized groups plays in leaving them vulnerable to disinformation.
- Second, one must take into consideration variations in political backgrounds of racial and ethnic groups, to understand why certain disinformation narratives hold the power they do.
- Third, there must be careful examination of how these groups primarily communicate, not just on mainstream media platforms but on other non-English platforms, in addition to radio and television.
Addressing the impacts of disinformation on communities of color requires collaboration across academia, civil society, technology platforms, and government. Academics should expand on existing research by applying mainstream interventions in more diverse contexts, improving the linguistic diversity of language processing models, and elevating diverse voices. Civil society actors have great capacity to combat disinformation on the ground, and can benefit from data-sharing with academia, further engagement with technology companies, and more available resources, such as grants encouraging outreach. Such work should also focus on improving the digital literacy of those on the receiving end of this content, to help them spot disinformation in their communities. Industry and tech platforms should work to expand data-sharing capacities and improve internal diversity. The U.S. government and its partners, meanwhile, can work to improve incentives for content moderation, and to create an international framework that standardizes best practices for content moderation across social media platforms.
All of these are integral steps toward better moderation of non-English disinformation, and toward a healthier information environment for Asian American and Latinx communities, and for other communities of color within the U.S. and around the world.