
The Sunday Show: Trust and Safety in Virtual Worlds

Subscribe to the Tech Policy Press podcast via your favorite podcast service.

This week’s podcast has two segments– and a warning: both feature discussion of sexual assault and rape. We’re going to hear from Vice News reporter Carter Sherman about her story from last week, titled Woman Says She Was ‘Virtually Gang-Raped’ in Facebook’s Metaverse, detailing an incident that represents an early test of Facebook’s ability to create a safe environment in its ‘metaverse.’

Then, I speak with Dr. Carly Kocurek, a cultural historian specializing in the study of new media technologies and video gaming, about a story from 1993– A Rape in Cyberspace, written by Julian Dibbell– that presaged some of the questions we’re faced with today when thinking about safety and abuse in virtual environments. 

In August 2020, Facebook released a video featuring an avatar named Morgan explaining safety features in Horizons, its 3D virtual environment that users can explore in the Oculus headset. 

The company, which has hitched its future to virtual reality, is clearly focused on how to make these environments safe. 

Last November, Financial Times reporter Hannah Murphy reported on an internal Facebook memo from March of that year outlining the company’s plans to address safety in its virtual reality environment, which it calls the ‘metaverse.’ 

The memo was written by Andrew Bosworth, then the executive in charge of Facebook’s push into virtual reality and now the chief technology officer of the entire company, which has been renamed Meta to emphasize the importance it believes virtual reality will play in its future.

In the memo, Bosworth said he wants virtual worlds to have “almost Disney levels of safety,” but he also acknowledged that moderation “at any meaningful scale is practically impossible.” Murphy reported that while Facebook was exploring how best to use artificial intelligence in its social VR environment, Horizon Worlds, that capability was not built yet. Bosworth suggested that in VR the company should pursue “a stronger bias towards enforcement along some sort of spectrum of warning, successively longer suspensions, and ultimately expulsion from multi-user spaces.”

In September 2021, Bosworth joined Nick Clegg, Facebook Vice President of Global Affairs, to release a memo on Building the Metaverse Responsibly, announcing a $50 million fund for research into how to develop virtual reality products responsibly. The company said that through the program it will be partnering with organizations like Sesame Workshop and Women in Immersive Tech.

These are all laudable efforts. But Bosworth, who is a 15-year veteran of Facebook and is known to be one of Zuckerberg’s closest confidants, is also known for taking a harder line on the bounds of the company’s responsibilities when it comes to content moderation. 

Perhaps Bosworth’s best-known written artifact is the 2016 internal memo known as “The Ugly,” which, taken as text alone, remains a totemic example of Silicon Valley callousness. Bosworth claimed the memo– revealed by BuzzFeed in 2018– was meant to push the internal debate about the company’s role in the world.

Providing a rationale for Facebook’s rapacious growth in service to its mission to connect people, Bosworth wrote in his provocation that “maybe it costs a life by exposing someone to bullies.” Now, the company is understood to have played a role in multiple atrocities, such as the genocide of the Rohingya in Myanmar. 

In December 2021, Axios Chief Technology Correspondent Ina Fried interviewed Bosworth and asked how he plans to make the metaverse any safer than Facebook. When Fried asked how the company will avoid “terrorist planning going on or just misinformation from being spread” in the ‘metaverse,’ Bosworth reframed the question:

Andrew Bosworth: I think it’s not even really a metaverse issue. It’s an issue that we face today with the tools that we have, like WhatsApp and Messenger. How do we want to balance our ability to communicate privately, private from governments, private from corporations, versus, ‘I want to make sure that nobody’s having a conversation that I don’t like, and therefore we should sacrifice some of that privacy.’

Ina Fried: If you look at January 6th, a lot of the conversations leading up to January 6th happened online. A good number of them happened on your platform. Do you guys feel you did everything you could to stop it? Or is it more, ‘this is an inevitable trade off of bringing the world together?’

Andrew Bosworth: When you bring the internet together, you bring together people who otherwise wouldn’t find themselves, including people who are in marginalized or at risk communities. How do you do that without also bringing together communities that you’d rather not bring together? People who have violent ideologies. And I don’t think it’s a solvable problem. Those things come hand in hand.

I can’t help but wonder: when push comes to shove and growth is on the line, which version of Facebook will reveal itself to be in control of the metaverse? The one portrayed in the neatly edited animations promising Disney-like safety and the reassuring prose of recent memos from Bosworth and Clegg, or the one espoused by Bosworth in The Ugly? Where will the company draw the line on how much harm is acceptable in the environments it builds?

An early version of this podcast included a mistaken reference to “A Rape In Cyberspace” as a work of fiction; it is in fact a nonfiction account.