The Real-World Impact of Online Incitement on Palestinians and Other Vulnerable Communities

Itxaso Domínguez de Olazábal / Sep 12, 2023

Itxaso Domínguez de Olazábal is the EU Advocacy Officer at 7amleh - The Arab Center for the Advancement of Social Media, a non-profit organization dedicated to defending Palestinian digital rights.

A truck burned during violence in Huwara, February 26, 2023. Wikimedia CC BY-SA 4.0

The power of social media to enable communication and expression cannot be overstated. With this power, however, comes responsibility, especially when it comes to moderating content that promotes violence, hatred, and discrimination in contexts of conflict and crisis, such as the decades-long illegal occupation of Palestinian territory. A recent spate of violence in the northern West Bank village of Huwara offered an opportunity to study the dangerous repercussions of under-moderating hateful social media content, with Twitter (now known as X) as a case study. The research uncovers a disturbing correlation between online rhetoric and real-world harm to Palestinians and other vulnerable communities worldwide, with devastating consequences.

Twitter’s Under-Moderation of Hateful Israeli Hebrew Content

Violence in the occupied Palestinian territory, particularly settler-led attacks against civilians, has reached alarming levels. While the number and density of Israeli settlements, which constitute a violation of international law, continue to grow, settler violence is rising in a climate of impunity. Palestinians, already subject to repression by the Israeli army, now face a marked increase in attacks by Israeli settlers, encompassing violent threats, assaults, and the destruction of property.

In February of this year, violence flared between Israeli settlers and Palestinian villagers in Huwara after a Palestinian gunman shot and killed two settlers. Following the shooting, an Israeli settler mob stormed the village. The situation peaked on the evening of February 26th, with indiscriminate violence resulting in the destruction of homes, crops, and vehicles. Residents faced brutal attacks, including stonings, and one Palestinian man was fatally shot.

The report by 7amleh - The Arab Center for the Advancement of Social Media, “An Analysis of the Israeli Inciteful Speech against the Village of Huwara on Twitter,” focused on the role of that platform, now known as X, in spreading hateful messages that contributed to the violence. We analyzed over 15,000 Hebrew-language tweets carrying the hashtags Huwara (#חווארה) and Wipe out Huwara (#למחוק_את_חווארה), posted from the beginning of the year until the end of March, using a sentiment analysis algorithm. More than 80% of these tweets included harmful content that incited violence, racism, and hatred against the people of Huwara.
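7amleh’s sentiment analysis pipeline itself is not public, so the following minimal Python sketch is only an illustration of the general approach described above: filtering tweets by the tracked hashtags, then flagging harmful content and computing its share. The toy lexicon and all function names are hypothetical stand-ins, not the report’s actual methodology.

```python
# Illustrative sketch only: 7amleh's actual sentiment analysis algorithm is not
# public. The hashtags come from the report; the lexicon and all function names
# are hypothetical stand-ins.

HASHTAGS = {"#חווארה", "#למחוק_את_חווארה"}  # "Huwara", "Wipe out Huwara"

# Toy lexicon of inciting Hebrew terms (invented examples for illustration).
INCITEMENT_LEXICON = {"למחוק", "לשרוף", "נקמה"}  # "wipe out", "burn", "revenge"

def matches_campaign(tweet: str) -> bool:
    """Keep only tweets carrying one of the tracked hashtags."""
    return any(tag in tweet for tag in HASHTAGS)

def is_harmful(tweet: str) -> bool:
    """Crude lexicon match standing in for a real sentiment/incitement classifier."""
    return any(term in tweet for term in INCITEMENT_LEXICON)

def harmful_share(tweets: list[str]) -> float:
    """Fraction of campaign tweets flagged as harmful (the report found over 80%)."""
    campaign = [t for t in tweets if matches_campaign(t)]
    if not campaign:
        return 0.0
    return sum(is_harmful(t) for t in campaign) / len(campaign)
```

A production system would of course rely on a trained Hebrew-language classifier rather than keyword matching; the sketch only conveys the shape of the analysis.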

This horrifying incident underscores the link between online incitement and actual violence on the ground. Unfortunately, the Huwara attack is one among many. In the month following the incident, there was an average of 188 negative tweets per day, aimed at justifying the violent attack and laying the groundwork for repeated violence against Palestinians in the occupied Palestinian territory. This pattern has resurfaced recently, coinciding with new surges of violence in the area.

The results shed light on the dangers of under-moderating hateful Israeli Hebrew content on social media platforms like Twitter, particularly at a time when Israeli incitement and hate speech against the Palestinian community are alarmingly common, both online, as documented by our Index of Racism and Incitement Online, and offline. In both spheres, hateful language is echoed by government-backed individuals and accounts. Many of these messages clearly violate the platform’s policies against hate speech and incitement, but the lack of adequate moderation fosters an environment in which extremist factions freely propagate violent and racist claims.

Twitter’s popularity in Israel and its lax approach to moderation make it a preferred platform for far-right factions to incite violence against Palestinians. Despite repeated efforts by 7amleh through our open-source monitoring platform 7or - The Palestinian Observatory of Digital Rights Violations, Twitter failed to take appropriate measures to curtail the hateful and inciting posts, and consequently, the violence manifested offline. In the case of Huwara, none of the inciting posts we identified were taken down.

Double Standards and Violations of Palestinian Rights

Even more jarringly, Palestinian and Arabic content on the platform is still being over-moderated. Expert analyses have rightly focused on the restrictions on freedom of expression caused by content moderation policies and practices, which lead to the censorship of Palestinian voices online. The report “Human Rights Due Diligence of Meta’s Impacts in Israel and Palestine in May 2021,” commissioned by Meta from the independent network Business for Social Responsibility (BSR), acknowledged that, compared to Israeli Hebrew content, Palestinian Arabic content and accounts are mistakenly deleted and suspended at disproportionately high rates. This pattern has been observed across all platforms.

However, less attention has been paid to the under-moderation of hateful, racist content written in Hebrew. Even though BSR affirmed that measuring under-enforcement is challenging, the reality on the ground provides evidence of the palpable danger; we must not wait until people are endangered to recognize the problem. All of this points to a worrying policy of discriminatory double standards toward Palestinian and Israeli content across social media platforms, one that leads to multiple violations of Palestinians’ fundamental rights and freedoms protected under international law. Over-enforcement and under-enforcement of content moderation rules are equally significant, particularly because they act hand in hand as a consequence of global power asymmetries, and because both have considerable adverse effects on platforms’ non-users.

The Broader Ramifications

The Huwara case study should prompt a wider discussion about the impact of social media platforms’ policies and practices on real-world conflicts. Twitter has become a hotbed for racist, inciting, and violent content in Hebrew directed against Palestinians, but the issue extends to other communities worldwide. The surge in harmful content has prompted advertisers to distance themselves from the platform, and the company has chosen to threaten researchers rather than address the problem.

Looking beyond X, concerns about the link between inflammatory online speech and real-world violence are not new. These problems are exacerbated when platforms’ systems are poorly adapted to local languages and companies invest too little in staff proficient in those languages. Meta’s limited Burmese-speaking staff, for example, failed to address the spread of hate speech, contributing to violence against the Rohingya community in Myanmar. In Ethiopia, Meta is facing a lawsuit filed in December 2022 alleging that the company failed to invest adequately in moderation resources for the country’s languages, despite repeated warnings from trusted partners.

The challenge lies in balancing restrictions on hate speech and incitement with the protection of freedom of expression. Though frequently presented as a contradiction, this ultimately comes down to investment and genuine commitment. It is no secret that social media companies often allocate resources unevenly across markets, producing a significant disparity between countries in the Global North and the Global South. Content moderation practices tend to be proportionate to market size and to the political risk a country poses to the company, rather than to any measure of genuine risk to users’ safety. The consequences are grim in crisis zones such as Palestine and Kashmir.

The immediate outcome of these discrepancies is too few content moderators with the necessary linguistic and local expertise, particularly given the urgent need to compensate for poor algorithmic performance in non-majority languages. Sometimes what is missing, or sorely underfunded, is local policy staff able to guarantee meaningful multi-stakeholder engagement. Calls for violence often come in coded language, yet platforms frequently lack adequate hate speech lexicons and classifiers in non-English languages and maintain only sparse collaboration with language communities and researchers. Indeed, one assessment of Twitter’s readiness to address trust and safety issues outside predominantly English-speaking countries concluded that the company “lacks the organizational capacity in terms of staffing, functions, language, and cultural nuance to be able to operate in a global context,” and that was before Elon Musk gutted the company’s ranks.
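To make the coverage gap concrete (the lexicon and example posts below are entirely invented and do not reflect any platform’s actual systems), consider how an English-only keyword filter passes straight over the same call to violence when it is coded or written in another language:

```python
# Invented example of the enforcement gap: an English-only lexicon misses the
# same incitement when it is coded or written in another language.

ENGLISH_LEXICON = {"wipe out", "burn it down"}

def english_only_filter(post: str) -> bool:
    """Flags a post only if it contains a known English phrase."""
    text = post.lower()
    return any(phrase in text for phrase in ENGLISH_LEXICON)

posts = [
    "Wipe out the village",        # flagged: literal English match
    "למחוק את הכפר",                # missed: the same call in Hebrew
    "time for some 'gardening'",   # missed: coded euphemism
]

for post in posts:
    print(english_only_filter(post), "-", post)
# Prints True for the first post and False for the other two.
```

Closing this gap requires exactly the investments the assessment above found lacking: language-specific lexicons, classifiers trained on local data, and sustained collaboration with the communities who can decode euphemisms as they emerge.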

Moreover, in the wake of layoffs at X and other platforms, these shortages may worsen elsewhere as the EU’s Digital Services Act (DSA) is enforced: companies may be compelled to concentrate resources and effort in the EU, driven by regulatory requirements and by an emphasis on political events in Global North countries in 2024.

Hate crime trends often follow shifts in the political arena, such as contentious elections and violent events. During times of crisis, companies have a responsibility to be more vigilant and to identify trends that signal potential peaks of violence. These duties inherently involve periodic human rights risk assessments, along with subsequent mitigation and reparation measures across their entire value chain, all accompanied by transparency initiatives.

Conclusion

7amleh’s research, together with studies produced by civil society organizations worldwide, underscores the urgent need for social media platforms like X to take responsibility for moderating harmful content. Tech companies are responsible both for abstaining from interference with human rights (including those of non-users) and for actively safeguarding and advancing them. Action is thus required to protect vulnerable communities from the devastating consequences of incitement, particularly in regions where violence is all too common.

This analysis adds to the ongoing debate about content moderation in the digital age, urging social media giants to prioritize the safety and well-being of citizens and address power asymmetries. Tech companies, as well as the regulators overseeing them, are predominantly based in the Global North and have shown themselves incapable of addressing the impact of their products across the Global South. Only through concerted efforts and meaningful cooperation with local civil societies can we hope to foster a safe, free and fair online environment for everyone.
