The Power of Spam Policies: Shedding Light on Platforms as Commercial Enterprises in the Generative AI Era

Dominique Carlon / Jun 24, 2024

This essay is part of a symposium on the promise and perils of human rights for governing digital platforms. Read more from the series here; new posts will appear between June 18 and June 30, 2024.

The rise of generative artificial intelligence (AI) tools has transformed how content is created and disseminated on online platforms. As people continue to experiment with new ways of generating content — from repurposing ChatGPT responses to creating AI stickers and bots — platforms are grappling with the task of managing an influx of low-quality and problematic AI content while also navigating their own positions within the AI attention economy.

Many platforms defer to human rights principles to legitimize their content moderation decisions (or lack thereof), a stance premised on the narrative that platforms uphold freedom of expression, intervening only to remove egregious and harmful content in ways that are proportionate and balanced. This narrative, however, masks their far less morally rousing reality as private commercial enterprises driven by the goal of attracting and retaining user attention, profit, and relevance. That reality has manifested clearly in the current race for a prime position in the AI market, where platforms are developing their own AI features or forming lucrative deals with AI companies, all while contributing to the generation and distribution of the very content they claim to be curbing and contradicting the values they pledge to uphold.

In examining the deluge of generative AI content, there is a renewed opportunity to scrutinize the active role of platforms as curators of content and attention, along with the commercial ideologies that underpin content moderation. One way of gaining insight into this reality is by scrutinizing the application of spam policies, which afford platforms an extensive and largely unchecked power to curate and remove content based upon their predominantly commercial priorities.

AI as hot commodities

Platforms have been quick to embrace the commercial opportunities presented by generative AI. One of the first to jump on the AI bandwagon was Snap, the company behind the popular messaging app Snapchat, which in April 2023 launched a chatbot called My AI. The bot allowed Snapchat’s largely youthful user base to exchange messages with an omnipresent, customizable digital companion, with little apparent investigation by Snap into the potential repercussions of its release.

More recently, Meta announced plans to release a sandbox for creating personalized AIs, adding to its collection of AI influencer bots and other AI features in beta. Despite previously acknowledging the risks of synthetic media and potential issues with representation and cultural appropriation, Meta had little hesitation in releasing ‘build your own’ AI bots and other products, such as personalized stickers generated from text prompts using its own large language model.

Predictably, the AI stickers on Facebook and WhatsApp have faced major criticism for perpetuating harmful stereotypes and representations. For example, on WhatsApp it was reported that searches for ‘Palestinian boy’ generated a sticker of a child with a firearm, and groups in Germany drew condemnation for creating Nazi-themed stickers. Facebook’s stickers in beta were also used to generate naked illustrations of Canadian Prime Minister Justin Trudeau.

The decision to rapidly release AI features, functions, and bots onto platforms is fundamentally a commercial one, driven by the pursuit of attention and revenue, and one that shows little consideration for the wellbeing of human users or adherence to human rights principles. In fact, the premature deployment of AI products by these platforms contributes to the proliferation of problematic and harmful content, creating the need for even more moderation effort. When that burden falls largely upon overworked and underpaid human moderators, the validity of platforms grounding their content moderation stance in human rights principles becomes even more contentious.

Curatorial power in spam policies

A focus on spam policies and how they are implemented offers a reality check on what platform companies are and what they do. Platform companies design systems that continuously distribute attention. Through algorithmic affordances, technical infrastructure, and the various ways they reward some content while demoting content they deem ‘deviant,’ their ultimate goal is to keep people coming back to the platform.

Science and Technology Studies scholar Finn Brunton has defined spam as “the use of information technology infrastructure to exploit existing aggregations of human attention.” If that sounds like it could capture virtually everything on the contemporary internet, dominated as it is by platforms, that is because in a sense it could. Deciding what amounts to spam affords platforms the ability to control and curate permissible content based upon their own interests and priorities, which can be far removed from human rights considerations.

The wording of spam policies is frequently vague, open to interpretation, and flexible. Take, for instance, Meta’s rationale for its spam policy: spam “content creates a negative user experience, detracts from people's ability to engage authentically in online communities and can threaten the security, stability and usability of our [Meta’s] services.”

Spam policy offers a lens through which to scrutinize the limitations of platforms' self-proclaimed narratives. For instance, at the end of 2023, Human Rights Watch released a report on platform censorship of Palestine-related content, noting that the policy most frequently invoked by Instagram and Facebook to remove such content was their spam policy. Underscoring the point, Human Rights Watch noted that its own posts seeking examples of online censorship were also flagged by the platforms as spam.

The Human Rights Watch report identified that one likely reason for erroneous applications of platform spam policies lay in the heavy reliance on automated tools to remove content and translate language. However, as with AI-generated stickers, the problematic use and failure of automated technology does not absolve platforms of their active culpability. On the contrary, it highlights their decisions to deploy systems that exhibit technological bias and not to prioritize investment and testing in this area.

For academics and policymakers who focus on the contours of free speech online and allude to weighty juridical principles when describing content moderation processes, to read platform companies’ spam policies is to enter a different world. In ‘spam policy land,’ platform companies are not our “New Governors,” balancing human rights and the public interest. Instead, they are commercial companies trying to ensure their relevance, offering a service and setting terms of engagement.

Commercial platforms in an AI future

When platforms defer to human rights principles to legitimize their content moderation stance, they craft a narrative that obscures their active role as curators of content who make value-based decisions predominantly shaped by commercial motivations. By directing attention to platforms' ever-evolving spam policies, and scrutinizing how they are interpreted and implemented, we can gain insight into how platforms moderate content in ways that go far beyond (and at times conflict with) their proposed values premised upon human rights principles.

In a platform landscape where AI-generated content and features will continue to evolve and transform, it is important to scrutinize the logics that guide platforms, namely the centrality of user attention amidst the AI frenzy. When discussing appropriate moderation of generative AI content, and the implications of AI and automated features within platforms, we should at minimum reject the pretense that the commercial incentives, business structures, politics, and values of platforms somehow come secondary to an overarching human rights framework of content moderation. In accepting this reality, we can turn our attention to the often overlooked but ultimately influential spam policies of platforms, scrutinizing how they are implemented and what values and priorities underpin those decisions.

Authors

Dominique Carlon
Dominique Carlon is a communications scholar at the Digital Media Research Centre, Queensland University of Technology, researching the relationship between humans and bots and the role of automation in digital platforms.