Message to US States: Don’t Forget the Fundamentals of Fighting Online Hate and Antisemitism

Jordan Kraemer / Feb 11, 2025

Jordan Kraemer is Director of Research at ADL's Center for Technology & Society.

With the proliferation of generative AI tools since the launch of OpenAI’s ChatGPT in 2022, the tech industry, researchers, and policymakers have grappled with how to regulate AI and ensure fairness in its implementation. These efforts often reflect the sentiment that regulation came too late for social media companies, which have grown massive (and massively profitable) while enabling a raft of harms, including widespread hate, harassment, and false information. This emphasis stems, too, from the pessimism AI’s own founders have seeded, stoking dystopian fears about out-of-control machine intelligence while betting on their own tools’ profitability.

In their upcoming legislative sessions, state governments must not fall into this all-too-familiar trap of governing by headlines. The fundamentals of tech regulation still matter, as do unresolved underlying issues, such as the incentives that fuel rampant hate and antisemitism. Yes, states should pass regulations that require audits of automated systems, including large language model (LLM)-based AI tools and other uses of algorithms. These models are trained on massive datasets of text that risk reproducing existing forms of injustice, such as racial and gender bias. Many of their applications are already replicating the harms of earlier automated tools, which propagated bias in healthcare risk prediction, criminal justice (predicting crime or recidivism rates), financial risk assessment (such as mortgages and loans), hiring, and even beauty contests. And yes, they should pass laws to protect teens, primarily focusing on data privacy (ideally for everyone, not just youth).

But lawmakers should not forget three fundamental areas for regulation that have yet to be addressed at the federal level: supporting victims of hate and harassment, requiring social media transparency, and stopping misguided legislation that bars platforms from moderating hateful and harassing content. Pursuing these policies is necessary to stem the antisemitic harassment and anti-Israel bias that have surged online (and offline) over the past year.

Support for victims of hate

Hate and harassment remain an entrenched feature of life online, disproportionately affecting Jews, women, people of color, people with disabilities, and LGBTQ+ people (especially transgender people). Members of marginalized groups are far more likely to be harassed than others: ADL’s most recent survey of online hate and harassment found that transgender people experienced some of the highest rates of online harassment of any group (63% in the past year, compared with 37% on average), and 45% of people with disabilities were harassed, compared with 36% of non-disabled people. Members of marginalized groups are also much more likely to be harassed because of their identity: among those harassed, 34% of Jews and 49% of Muslims were targeted for their religion (compared with 18% overall), 29% of women for their gender (compared with 22% overall), and 46% of Black/African Americans for their race or ethnicity (compared with 24% overall).

But support for targets remains feeble to non-existent. Most platforms do not act on user reports of hate or harassment, such as antisemitic conspiracy theories or the term “Zionist” used as a slur, preferring automated systems that cannot detect context or identify ongoing harassment campaigns. Lawmakers could better protect victims of online hate, including targets of online hate crimes or bias incidents, by adding “safe leave” provisions when they update FMLA and other paid-leave laws.

States must also strengthen their anti-doxxing and anti-swatting laws. Doxxing and swatting are the tools harassers rely on to instill the greatest fear in targets. Doxxing (publishing someone’s private identifying information) can fuel massive, lengthy harassment campaigns that play out online, over the phone, and in person. Targets often must make fundamental changes to their lives, giving up jobs and homes, at great disruption to their families.

Doxxing also raises the further specter of swatting (calling in a false police report to send an emergency law enforcement team to someone’s house). At its most dangerous, swatting can get the target, or a family member, killed. In one particularly notable series of incidents, the FBI identified a single individual who called in bomb threats and swatting attempts against Jewish facilities, including at least twenty-five synagogues in thirteen states, between July and August 2023.

Although swatting itself is rare (only three percent of respondents experienced it in the prior year), fear of being swatted (or doxxed) causes many targets of harassment to withdraw from online spaces. Driving marginalized people offline is a primary way that identity-based harassment serves to reinforce the existing social order, chilling the speech (and social participation) of targeted groups.

Social media transparency

Transparency legislation requiring social media companies to publish data on their rules enforcement faced a setback when a Ninth Circuit court ruled California’s social media transparency law largely unconstitutional in September 2024. The court found that the law went beyond permissible regulation of commercial speech by requiring companies to define and publish metrics on certain types of content, and therefore subjected it to the “strict scrutiny” legal standard. But mandated transparency still has a key role to play in protecting consumers from potentially harmful social media, where they may experience hate or abuse.

Other state legislatures, meanwhile, should pursue amended laws that avoid these constitutional concerns, or that at least give another circuit an opportunity to reconsider which types of speech are at issue when regulating platform transparency. Mandating additional disclosures from social media companies should garner broad bipartisan support, since both the left and the right stand to benefit from learning more about what happens behind social media algorithms.

No “content-neutral” requirements

Some states, like Florida and Texas, have taken the opposite tack on online hate and harassment, preventing tech companies from moderating harmful content. These laws are intended to protect conservative speech. The Texas law, for example, would prohibit platforms from moderating content based on the poster’s viewpoint, a provision that could require platforms to carry antisemitic content. While there are enough outstanding questions about the laws that the Supreme Court remanded them to lower courts last year, one point of discussion during oral argument was whether a website that blocked antisemitic content could still allow “pro-semitic” content.

To advance these legislative goals, states must also prioritize research funding. Much of what occurs on social media remains poorly understood, mainly because tech companies provide little data access to researchers. Some, such as X and Reddit, have been moving away from such access altogether. There are also too few opportunities for researchers to conduct the kind of large-scale analysis (quantitative and qualitative) that can drive evidence-based policymaking. States must fund higher education and public institutions to ensure the high-quality research needed to guide policy into a new era.

