Anti-Immigrant Lies Online, Stop the Steal 2.0, and How to Fight Back

Koustubh (K.J) Bagchi / Sep 25, 2024

K.J. Bagchi leads The Leadership Conference’s Center for Civil Rights and Technology as vice president.

Springfield, Ohio: Springfield city mural, photographed on September 15, 2024. Shutterstock

We all deserve to feel safe and to be certain about the truthfulness of what we read online about elections. Unfortunately, Springfield, Ohio, has been at the epicenter of online lies turning into very real threats of violence. The community has been on the receiving end of bomb threats against its city hall and schools. Children have been moved to virtual-only classes, and campuses have closed. People are scared.

Springfield isn’t the only community reckoning with what happens when misinformation is allowed to flourish online. This election season, voters across the United States have been hit with a barrage of lies about our immigrant neighbors. The claims are ludicrous and not worth repeating. But they cannot be ignored, because they are leading to real-world harm to people and to democracy.

While today’s online forums may be relatively new, disinformation wielded against immigrant communities to maintain white supremacy has a long history. Since the late 1800s, when more non-European immigrants began settling in the U.S., this rhetoric has reared its ugly head year after year. This year, we are seeing it aimed most acutely at Haitian and Latino immigrants.

The thinly veiled intent behind these lies is to amass power at the expense of others. Disinformation, by definition, is a lie told with the intent to mislead. These lies are designed to create barriers that discourage newly naturalized citizens who are eligible voters, as well as voters of color, from accessing the ballot box for fear of retribution. With armed militias being given the green light by political operatives to “stop the steal” at polling locations, why would these communities feel safe exercising their right to vote?

Not only do these lies discourage political participation, they also sow seeds of doubt about the results of the upcoming election. They set up the same narrative that led to the deadly attack on the US Capitol on January 6, 2021. It’s an outright power grab by people who should know better but don’t. We do know better, and we cannot allow this to happen again.

Big Tech has an obligation to ensure that the information shared on its social media platforms, or by its AI-powered chatbots, about elections is accurate and truthful. Instead, in a charge led by the loud-but-wrong Elon Musk following his takeover of X (formerly known as Twitter), platforms are abdicating their responsibility to their users and to the truth. In many cases, platforms are no longer enforcing key content moderation guidelines they themselves formerly imposed, and they are gutting trust and safety teams while they’re at it.

Earlier this year, civil society groups, including The Leadership Conference’s Center for Civil Rights and Technology, urged the leaders of Big Tech corporations to protect their users against voting disinformation. Their responses (or lack thereof) were disappointing, if unsurprising.

One recent glimmer of hope came when election officials turned their ire on Grok, X’s AI chatbot, for spitting out false and misleading information about the election. In this case, the platform actually took action. Pointing users toward accurate information is the bare minimum, but public pressure made the platform more honest with its users. It’s an example of what advocates for a free and fair democracy can accomplish when faced with a crisis. Without intense public pressure calling out these failings, Big Tech won’t change. It’s a lesson I fear extremists – who would rather not see a fully participatory democracy – have learned, as they convince platforms to back away from responsible content moderation practices.

In Congress right now, there are a few bills that would also help stymie the spread of online disinformation. The Protect Elections from Deceptive AI Act, the AI Transparency in Elections Act of 2024, and the Preparing Election Administrators for AI Act are critical to protect our elections against the threat of AI turbocharging voting disinformation. Unfortunately, there’s a lack of willpower from certain members of Congress to get these bills passed this session. The same playbook of applying intense public pressure on representatives could move the needle. But even if passed tomorrow, these safeguards wouldn’t be implemented in time to impact this year's election.

While the onus should not be on users to decipher digital disinformation, there are steps people can take to protect themselves and our immigrant neighbors from these ugly lies. If platform users encounter something online regarding the election that causes an emotional reaction, that’s a signal to pause. Check the source of the information to see if it’s legitimate. Research the claim to see whether it’s correct and whether other outlets you trust are reporting on it. Log off and get a reality check by talking to people you trust to see what they think about it. It is also important not to reply to, share, retweet, or cross-post disinformation. Even if you’re attempting to debunk a claim, interacting with digital disinformation can unintentionally spread it even further. Readers can visit this resource to learn more.

We must fight back together against anti-immigrant hate being spread online by bad actors. We know their playbook, and we understand their craven motivations. We will not allow hate to win.
