Physical Solutions in Virtual Spaces: Challenges to Content Moderation in XR

Tammana Malik, Noah Usman / May 9, 2024

The question of what content should be moderated online is typically a polarizing one, plagued with privacy and free speech concerns, partisan politics, and differing moral and ethical standards. However, within this divisive space, terrorist content and child sexual abuse material (CSAM) are almost universally acknowledged as dangerous. While the prohibition of these two forms of content in the physical world as well as online receives bipartisan support, the proliferation of extended reality (XR) technologies – including virtual, augmented, and mixed reality – poses new challenges that existing moderation schemes likely could not have envisaged. With XR rapidly becoming an affordable fixture in many households and workplaces, the need to reevaluate current content moderation standards has become more urgent.

Moderating XR technologies poses challenges similar to those of keeping purely physical environments safe, while also reflecting all of the usual complexities of moderating speech on social media. In the physical world, people can choose whom they interact with by moving through space – simply leaving the room or closing the door, for example. On social media, users can likewise curate their interactions by choosing which platforms to use, which groups to join, which accounts to follow or block, and more. But in XR, these safeguards break down.

Content in XR is not static or limited to text and images – the immersive nature of XR allows users to interact with virtual environments and objects, as well as with other users, using both verbal and non-verbal communication. Because communication in XR is meant to mimic real-world interaction, its transitory nature poses a challenge to moderators: much of it occurs in real time and leaves little to no record. And given the ease with which people of different ages, genders, and other demographic characteristics can interact in real time, the potential for illegal content to proliferate and be amplified increases sharply.

Violence on XR platforms faces none of the traditional physical barriers, yet produces almost all of the same effects on its victims. Verbal and physical harassment of virtual avatars is a significant problem in XR spaces and produces equivalent markers of psychological trauma, albeit without physical touch. XR also offers avenues to share CSAM generated on non-XR platforms, as well as opportunities to generate and share new types of CSAM. Children face several risks in XR, including exposure to sexually explicit material, being approached and harassed by adult users, and being encouraged or coerced to engage in sexual acts. The ability to adopt or create child-like avatars presents enhanced opportunities for abusers to groom children or to use virtual depictions of child sexual abuse to desensitize potential victims. Increasingly realistic deepfake images and customized fantasy characters may also be used to create sexualized representations of children – material that currently falls into a legal gray area because it does not depict an “actual” child.

The ability to embody avatars through which users interact with the virtual world means that both positive and negative interactions can have a greater impact on a user’s emotional wellbeing. Beyond the immersive nature of XR technologies, the volume of biometric, or “body-based,” data collected from users makes physical experiences, including illegal ones, feel more realistic. Recent incidents of sexual assault in XR spaces have shown scholars and the public that, although these acts do not involve physical touch, they result in nearly identical manifestations of psychological trauma. These dangers are especially relevant to the proliferation of terrorist content and incitement to violence in XR spaces. The ease of avoiding automated detection on these platforms allows such content to spread more readily, given that human moderators are simply unable to process such vast amounts of data.

While most XR applications and platforms contain age restrictions, underage users can easily bypass them. Moderation within these spaces is generally community- and user-driven, with platform-wide standards and guidelines establishing acceptable content and conduct. These rules are enforced through user reporting or by external moderators. However, community guidelines generally apply only to the “public spaces” on a platform. With many XR applications allowing the creation of private, invite-only spaces, the need for moderation is further complicated by privacy concerns and by questions about whether community rules apply to interactions within these spaces.

In light of the efficiency and scalability challenges posed by external moderation, applications have turned to personal moderation tools that enable users to shield themselves from unwanted behavior and disturbing content. These include options to mute, block, or remove other users from a shared space, as well as to activate a “space bubble,” a virtual barrier that prevents other avatars from coming too close. While these are effective strategies for helping users control their own experiences in XR, they do not account for manipulative grooming techniques or for the ease of access to CSAM for users who actively seek out such content. In the context of child sexual abuse, these strategies also diverge significantly from principles employed in “real-world” regulation by shifting the burden onto vulnerable users to protect themselves.
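
To make that mechanic concrete, the sketch below shows one way a client-side “space bubble” might work: a per-frame proximity check that keeps other avatars outside a fixed personal radius. The class, function, and radius here are illustrative assumptions for this article, not any platform’s actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    user_id: str
    x: float
    y: float
    z: float

def enforce_space_bubble(me: Avatar, other: Avatar, radius: float = 1.2) -> Avatar:
    """Hypothetical client-side check: if `other` has entered my personal boundary,
    reposition it at the boundary's edge in my local rendering of the scene."""
    dx, dy, dz = other.x - me.x, other.y - me.y, other.z - me.z
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist >= radius or dist == 0.0:
        return other  # outside the bubble (or exactly overlapping): leave unchanged
    scale = radius / dist
    return Avatar(other.user_id, me.x + dx * scale, me.y + dy * scale, me.z + dz * scale)
```

Because a check like this runs on the user’s own device, it changes only what that user sees; it does nothing to deter the offending behavior itself, which is part of why such tools shift the protective burden onto the target.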

Existing laws must be updated to ensure that they explicitly prohibit both virtual and simulated instances of CSAM. Existing tort liability frameworks must also be reconfigured to ensure that perpetrators of virtual assault and harassment are held to a standard that reflects the impact such acts have on their survivors in the XR space. In the United States, Section 230, enacted as part of the Communications Decency Act, prevents providers and users of interactive computer services from being treated as the publisher of information provided by others, thus protecting online communications platforms (including XR platforms) from liability for the content that users share on their platforms.

In the US, concerns over CSAM have prompted bipartisan bills aimed at curtailing the protections afforded by Section 230. The Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act) proposes removing blanket immunity from liability under civil and criminal CSAM laws, encouraging tech companies to take proactive steps toward combating online sexual exploitation of children. In 2023, the US Congress introduced the STOP CSAM Act, which would increase reporting requirements for digital platforms in areas relevant to child safety and would give CSAM victims a federal civil cause of action directly against such platforms. While ostensibly protecting against the proliferation of CSAM on digital platforms, the STOP CSAM and EARN IT Acts have also heightened concerns about online surveillance and potential infringement of First Amendment rights to discuss topics related to sexual and reproductive health, as well as topics pertaining to the LGBTQ+ community.

Outside the US, many democratic countries have implemented intermediary liability laws that balance the protection of free speech with safety considerations. However, not all countries in which platforms operate have similar commitments to free expression. In many cases, platforms themselves will have to take responsibility for decisions about these trade-offs, including compliance with local laws. Regulations that curtail free expression could prove a serious hurdle to the growth and adoption of a technology still in its nascent stage. At the same time, current and potential users may be driven away by a hostile virtual environment and the proliferation of hateful and violent speech.

External moderation approaches, besides having a potential chilling effect on speech, also pose challenges with respect to scalability. Because XR technologies create or replicate perceived physical environments, the volume of data generated in real time is far too great for content to be assessed and moderated externally on a case-by-case basis. To mitigate these risks, deploying internal avatar security agents may provide an opportunity to counter the proliferation of illegal content in real time in certain contexts.

Perhaps the most effective strategy to counter the proliferation of CSAM is to build “safety by design” models and to use increasingly accurate automated detection software, which can also provide real-time feedback and deterrence to perpetrators. Governmental bodies have begun developing such programs for the conventional social media space, but their application to XR remains to be determined. While automated detection may be aided by recent advances in artificial intelligence, the ability of users to create new digital personas and to encode information in visual and spatial signals may allow them to covertly convey hateful or explicit messages, much as users on text-based platforms can add diacritical marks to or change the spellings of banned terms to evade moderators. The proliferation of generative AI presents a further issue: illicit material can potentially be created faster than it can be removed from digital platforms, including XR spaces. To remove the onus on users to ensure their own safety, a reexamination of how liability is assigned for such applications may prove useful.
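
As an illustration of the text-based evasion mentioned above, the sketch below (a simplified assumption for this article, not any platform’s actual filter) shows why exact matching fails against diacritic tricks and how Unicode normalization can partially recover it; altered spellings, and the visual or spatial encodings possible in XR, have no comparably simple fix.

```python
import unicodedata

def normalize(text: str) -> str:
    """Fold text into a canonical form before matching against a banned-term list:
    decompose characters (NFKD), drop combining marks (diacritics), and lowercase."""
    decomposed = unicodedata.normalize("NFKD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return stripped.lower()

def contains_banned_term(message: str, banned_terms: set[str]) -> bool:
    # Substring match against the normalized message; real filters are far more involved.
    folded = normalize(message)
    return any(term in folded for term in banned_terms)

# A term dressed up with diacritics slips past exact matching but not the normalized check.
print("bädwörd" in {"badword"})                      # False
print(contains_banned_term("bädwörd", {"badword"}))  # True
```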

Despite the increasing proliferation of XR technologies, current legislation intended to regulate illegal content can barely be applied beyond the traditional realm of social media. A radical reimagining of content moderation mechanisms and regulations is urgently needed if XR platforms are to be fully trusted as a safe space for all.

Authors

Tammana Malik
Tammana Malik is an LL.M. candidate at UCLA School of Law, specializing in Media, Entertainment, and Technology Law. She is interested in the regulation of disruptive technologies, and engages with issues at the intersection of technology and intellectual property law. Tammana holds a B.A.LL.B. (Hon...
Noah Usman
Noah Usman is a second-year JD candidate at UCLA School of Law, expecting to graduate in 2025. Specializing in International and Comparative Law, he is also passionate about privacy-related issues and analyzing trends in technology legislation. Based in Orange County, California, Noah holds a B.A. i...
