Platform-Enabled Structural Harms & the Limits of the Current Accountability Regime

Kebene Wodajo, Catherine McDonald / Jun 19, 2024

This essay is part of a symposium on the promise and perils of human rights for governing digital platforms. Read more from the series here; new posts will appear between June 18 and June 30, 2024.

September 7, 2017: Hundreds of Rohingya people cross into Bangladesh as they flee Myanmar over the Naf River, near Teknaf, Bangladesh. Mamanur Rashid/Shutterstock

The advance of digital technologies, especially AI-powered platforms, raises many questions about the ethics of technology and power relations. Examples include the integration of generative AI into platform technologies such as search engines, as well as the use of data-driven technologies such as social media platforms for propaganda, disinformation, and misinformation. Yet these debates focus disproportionately on the implications for freedom of expression, autonomy, self-determination, and privacy, or for business accountability. While these are important questions, we rarely ask what it means when intended and unintended engagements with and via these platforms produce murder, massacres, perpetual marginalization, invisibility, and, at times, hyper-visibility. This blog post addresses these underlying questions by asking: what is producing and sustaining platform-enabled violence, and why is it so difficult to regulate?

In his 1963 eulogy following the bombing of the 16th Street Baptist Church in Birmingham, Alabama, Dr. Martin Luther King Jr. noted:

They [the victims] say to us that we must be concerned not merely about who murdered them, but about the system, the way of life, the philosophy which produced the murderers.

With this speech, as the philosopher Sally Haslanger has highlighted, Dr. King conveyed that some forms of oppression, repression, and violence run deeper than blame causally attributable to an individual wrongdoer. Instead, such injustices emanate from social structures and are embedded within institutional systems that sustain the vulnerability of groups and individuals to various forms of violence. Hence, it is essential to interrogate the structures that hold different social systems together and enable the replication of deep-seated social injustice. Doing so requires taking into account the entire social and institutional system of which the individual victim is a part. We seek to highlight the structural injustices that emanate from platforms as a result of social and institutional structures.

Let's begin with two examples. The widely discussed and condemned Rohingya genocide remains a stark illustration of how violence can stem from poorly regulated, engagement-driven social media platforms. Yet what gave this disinformation on social media the power to translate into real-time violence that claimed hundreds of lives? Consider another, ongoing example: social media platforms have become dangerous spaces for children and teens, with risks including, but not limited to, mental health harms, suicide, and self-harm. As a result, hundreds of lawsuits are currently pending in the US against social media platforms such as TikTok, Facebook, Snapchat, Instagram, and YouTube and their parent companies, including Google, Alphabet, Meta, Snap, and ByteDance. How did seemingly innocuous social media engagement become a lethal force against vulnerable sections of the community, such as children and teens?

In their defense against these lawsuits, social media platforms have argued that they are immune from such claims under Section 230 of the 1996 Communications Decency Act (CDA), which has long protected internet companies that publish third-party content from legal liability in many circumstances. Yet, as Dr. King's words remind us, if we look at the system enabling this violence rather than at individual actors alone, the two examples above point to structural and institutional injustice. The vulnerability of marginalized or socio-economically and politically underprivileged groups often makes them victims of platform-enabled violence. In the case of the Rohingya genocide, Myanmar's Rohingya minority was targeted as a result of pre-existing social structures, such as long-standing religious and ethnic marginalization, amounting to structural injustice beyond mere violations of legally protected rights. Similarly, the vulnerability of teenagers to platform manipulation, to the extent of risking their lives, can be attributed to institutional structures when we look beyond the responsibility of any one company or actor. These include the economic and legal structures that allow human attention and interaction to be converted into profit, sustaining an attention-grabbing, engagement-based business model.

Furthermore, a legal structure that sustains such abuse by shielding powerful actors from liability, such as Section 230 of the CDA, contributes to the problem. This is therefore not merely a violation of legally protected rights but an institutionally enabled injustice against vulnerable sections of the community, such as children and teenagers. From this, we can deduce that platform-enabled violence embodies deeply rooted structural and institutional injustices. Responses to these injustices must therefore go beyond identifying and blaming a single wrongdoer for human rights violations. With this understanding, we turn to key limitations of the current governance regime.

Governing the multifaceted risks that emanate from the use of data-driven technologies, including platforms, is muddied by two factors: first, the multiplicity of actors and competing norms in contemporary Internet governance, and second, the state-centric foundation of international human rights law. While these governance challenges hinder the handling of broader human rights issues, they pose even deeper barriers in the context of structural and institutional injustice, as laid out in the following paragraphs.

Regarding the first governance challenge – the multiplicity of actors – legal scholar Julie Cohen argues that technologies, including artifacts, are contested networked spaces between multiple market and nonmarket actors. The interactions between social, political, and economic actors – i.e., citizens and ordinary technology users, the government, and the private sector – produce a technologically mediated human experience. Moreover, the governance of this networked space is stretched between competing norms: arguably, liberal, individualistic, and universalistic values on the one hand, and sovereigntist, state-centered, and territorial values on the other. Hence, when attributing responsibility for unjust human experience in this networked socio-technical system, the unique nature of the digital space must be taken into account.

The second governance challenge – state-centrism – is rooted in the traditional international law that underpins international human rights law. In contrast to the networked, multi-actor nature of data-driven technologies, accountability regimes for the protection of fundamental rights center on the duty of the state to respect, protect, and fulfill (OHCHR, 2011). This overlooks two contemporary challenges in a technological society: the scenario in which the state lacks the capacity to fulfill its international human rights duty, and the scenario in which the state lacks the political will to live up to that duty. Such governance challenges are particularly prevalent in the age of data-driven technologies, partly because of the aforementioned multiplicity of actors in this space – which defies the regulatory capacity of the state – and partly because geopolitical competition and norm conflict weaken governments' political will to uphold their human rights duties. Business and human rights (BHR) aims to fill this gap by assigning private actors the human rights responsibility to 'do no harm.'

Although the BHR discourse, and particularly the United Nations Guiding Principles on Business and Human Rights (UNGPs), has made a positive attempt to break away from traditional state-centrism, it, too, has its limits. The key limitation is that by recentering the state as the primary duty bearer and limiting corporate responsibility to 'do no harm', it turns the problem into the solution. Such an approach risks accentuating and exacerbating the problem, as many states lack the resources to adequately enforce their roles and duties, particularly within the digital sphere. While the governance gaps discussed above challenge the handling of general human rights risks related to the use of data-driven technologies, the challenge is even more complex in cases of structural and institutional injustice.

In light of this accountability gap, we propose going beyond the traditional accountability regime to account for structural and institutional injustice when considering the ethics of technology and questions of power. Platform-enabled violence is arguably propagated not only by the actors involved but just as much by unjust social and institutional structures. As such, attributing blame to specific actors is usually insufficient to address the root problem. Although the liability model and complementary legal frameworks play an important role in assigning responsibility by establishing guilt or fault for a harm, they fail to recognize the role of participating agents in structural social processes that produce unjust outcomes. At times, moreover, they are themselves part of the institutional structure enabling the injustice, as with Section 230 of the CDA.

Alternative models of responsibility may serve as inspiration for addressing this governance gap. In particular, one could look toward collective and shared political responsibility, through which different actors and stakeholders join forces in challenging the social and institutional structures enabling such injustice. One way to realize collective and shared political responsibility would be to leverage principles such as solidarity to bring different stakeholders – the private sector, the public sector, civil society organizations, impacted communities, and scholars – together to influence legislation and policymaking related to platform and data economy governance. Furthermore, by considering the distinct and dispersed responsibilities of the various actors involved, efforts can be made to close the gap in the traditional accountability regime.

Authors

Kebene Wodajo
Dr. Kebene Wodajo is a lecturer and senior scientific assistant at Ethics, Technology & Society, ETH Zurich, with a research focus on structural & systemic injustices in technological societies.
Catherine McDonald
Catherine McDonald is a PhD candidate at the University of St. Gallen and a research assistant at the Institute for Business Ethics.
