
Extended Reality and the Law

Justin Hendrix, Rebecca Rand / Jul 9, 2023

Audio of this conversation is available via your favorite podcast service.

Tomorrow's virtual worlds will be governed, at least at first, by today's legal and regulatory regimes. How will privacy law, torts, IP, or even criminal law apply in 'extended reality' (XR)?

Drawing from the discussion at a conference hosted earlier this year at Stanford University called "Existing Law and Extended Reality," this episode asks what challenges will emerge from human behavior and interaction, with one another and with technology, inside XR experiences, and what choices governments and tech companies will face in addressing those challenges.

This episode of The Sunday Show was produced by Tech Policy Press audio and reporting intern Rebecca Rand, and features the voices of experts such as Brittan Heller (the organizer of the Stanford conference), Mary Anne Franks, Kent Bye, Jameson Spivack, Joseph Palmer, Eugene Volokh, Amie Stepanovich, Susan Aaronson, Florence G'sell, and Avi Bar-Zeev.

What follows is a lightly edited transcript of the episode.

Justin Hendrix:

Good morning. I'm Justin Hendrix, editor of Tech Policy Press, a nonprofit media venture at the intersection of technology and democracy. Welcome to The Sunday Show. We have a bit of a new format today, and before we start, I want to introduce you to a new voice, Rebecca.

Rebecca Rand:

Hi, I'm Rebecca Rand. I'm a science journalist and Tech Policy Press's audio intern this summer.

Justin Hendrix:

I am so pleased that Rebecca is joining us this summer. She is a master's candidate at the CUNY Newmark Graduate School of Journalism.

Rebecca Rand:

Where we are now recording.

Justin Hendrix:

This is a tremendous upgrade from my basement where I normally do this podcast.

Rebecca Rand:

Right? All those delicious basement smells.

Justin Hendrix:

100%. Cat hair and fumes from the boiler.

Rebecca Rand:

Cat hair is actually really good for dampening echo.

Justin Hendrix:

Okay, kick it.

Rebecca Rand:

All right. Well, this week I have some questions for you, Justin, about virtual reality, VR.

Justin Hendrix:

XR.

Rebecca Rand:

What's that?

Justin Hendrix:

XR, it's extended reality. It's an umbrella term that includes VR, of course, and also AR, augmented reality. Mixed reality we could throw in there as well, and other stuff on that continuum from real to virtual.

Rebecca Rand:

Extended reality. All right, got it. So anyway, Apple's just released this new headset.

Justin Hendrix:

The Vision Pro.

Rebecca Rand:

Right? Well, the last time I heard about VR, excuse me, XR, people were basically making fun of Mark Zuckerberg and the failure of his Metaverse. So now Apple's taking the same gamble and it's like, why? Is this augmented reality stuff ever really going to take over society the way Zuck hopes it will?

Justin Hendrix:

Look, I get the skepticism. We've been through multiple rounds of VR hype and bust over the last three decades, starting in the '90s, certainly in 2015, 2016, and years after, more recently with Mark Zuckerberg and his legless avatars in the Metaverse.

Rebecca Rand:

Where are my legs?

Justin Hendrix:

The reality is that the devices themselves continue to advance. There's nothing in the physics that means we can't get to a point where perhaps they are far more advanced than they are at the moment. And the cost of the computer, the cost of the sensors, the cost of the graphics processing unit, all of it continues to go down. So even when you look at a device like what Apple has produced, which they're selling for $3,500, too rich for my blood, eventually that will be an affordable headset for many people.

Rebecca Rand:

I see.

Justin Hendrix:

That doesn't necessarily mean people are going to use it. And so we'll see whether people are willing to put these things on their head and enter into a variety of different applications that folks imagine. I still suspect that the place where a lot of folks are going to start out with XR is in games and in the office.

Rebecca Rand:

Gotcha.

Justin Hendrix:

But one of the people who has been writing about exactly this question lately is another thinker in this space, Brittan Heller.

Brittan Heller:

Hi there. My name is Brittan Heller. I am a visiting scholar at Yale ISP and an affiliate at the Stanford Cyber Policy Center.

Justin Hendrix:

So earlier this year, Brittan hosted this conference at Stanford about extended reality, and she made a good point. It's artificial intelligence, generative AI like ChatGPT, that could actually be the thing that kicks the so-called metaverse into gear.

Rebecca Rand:

Say more about that.

Justin Hendrix:

Well, besides the cost and form factor of headsets, one of the big barriers according to Brittan is that it's just not easy for regular people or even programmers to make content for the Metaverse. So right now, that world is still a bit sparse.

Rebecca Rand:

That really embarrassing review of the metaverse in the New York Times was called "My Sad, Lonely, Expensive Adventures in Zuckerberg's VR."

Justin Hendrix:

With no legs. And that kind of 3D interactive environment requires really specialized knowledge to create. Imagine if someone had invented YouTube before phones had cameras, or before point-and-shoots could take video, or before Apple created iMovie. Only very particular people with the right gear and programs, and probably some years of training, could make content, and there wouldn't be much of it. But generative AI, the same kind of tech that allows you to ask it to make a picture of a dog wearing a hat, riding a bicycle, that same kind of program could be used to render 3D objects in spaces for the metaverse. And Brittan Heller says this is going to be huge.

Brittan Heller:

Generative content is going to change virtual worlds. I give it three to six months when we see generative content really merging with virtual worlds and creating an easy way for a no-code or low-code solution to create these environments, to create objects in the environments, to create the different ways that we can dress ourselves and interact with objects.

Justin Hendrix:

So she thinks it's time that policy people start taking XR and the metaverse seriously, because even if it's a lonely, expensive world now in some ways, it won't be for long.

Rebecca Rand:

I have to say I'm scared for what that world looks like because AI image generators have spat out some really just cursed looking stuff, and I'm not sure if I'm ready to see that in three dimensions.

Justin Hendrix:

A lot of folks, of course, are freaked out about AI. Perhaps some of them want us to freak out. They're essentially sowing fear in order to make their products appear more powerful than they really are. But either way, I think at this point it's 2023 and some skepticism about tech is appropriate. So Brittan is perhaps ahead of some folks. She's a legal scholar who works in new technology. One of the reasons she told me she hosted this conference earlier this year is because the metaverse is also this new venue that our current laws just aren't ready for. So she felt a bit at sea as a legal scholar.

Brittan Heller:

There were so many questions about how do we govern virtual worlds. This conference was about sharing some of what we know about augmented reality, virtual reality, mixed reality, and figuring out what we know about how they impact our bodies and minds and how the law deals with these new mediums.

Rebecca Rand:

Got it. So I guess I have to ask an obvious question here, which is, why regulate the metaverse at all? I mean, there are laws about what kinds of things you can and can't share online, child abuse imagery being an obvious one. Is Brittan Heller saying that those laws aren't enough to keep the metaverse safe?

Justin Hendrix:

There are some regulations about the digital world, like what you mentioned, and of course there are laws that apply in any world, the virtual or the physical. But the metaverse is different from the internet today, and it could throw a few new wrenches in our legal system. And I wanted to know from Brittan what she thought those wrenches were. From your perspective, what's new here?

Brittan Heller:

For me, the biggest novel challenge that comes from XR is the way that this technology interacts with us physically and mentally.

Justin Hendrix:

She told me this is pretty distinct from what we do on our phones or computers.

Brittan Heller:

It's actually quite different from the way that your brain on the internet functions, because extended reality is actually a spatial medium. And we move it into a forum where your brain interprets it as real, your body feels things as real. And it's the way that cognition interacts with computing that makes this a little bit different.

Rebecca Rand:

So she's saying it's about how our brains take in this virtual reality that makes it a whole new ballgame for the law.

Justin Hendrix:

And the thing she says is different is that once you put your embodied avatar in the metaverse, it starts becoming less of a question of what content you share online, like pictures or comments, and more about what you do or what's done to you.

Brittan Heller:

I place more emphasis on conduct or behavior because that is really the distinguishing factor.

Rebecca Rand:

So this is becoming way less of a question about speech and images. And it's more treating the metaverse as an actual place where people's actions could have consequences.

Justin Hendrix:

Exactly. And Brittan pointed out that the technology is advancing to make the metaverse feel more and more real to us, especially when we interact with other people.

Brittan Heller:

When you're looking at a virtual world and potentially wearing haptics, you may actually be able to feel what it feels like when somebody virtually touches you.

Rebecca Rand:

Got it. And because it's the internet, I can imagine it won't take long for people to start behaving badly towards one another.

Justin Hendrix:

It's already happening.

Rebecca Rand:

But is bad behavior on an XR platform really the same as bad behavior in real life? It's never going to cause actual bodily harm, right?

Justin Hendrix:

Well, one of Brittan's big points is that in many cases our brains can't really tell the difference.

Brittan Heller:

Your mind processes what happens in XR as being a real experience, and your mind and your limbic system, and your heart rate, and your pupil dilation, and your skin moisture, everything in your body is reacting like this is happening to you.

Justin Hendrix:

So crimes that might happen to us in the metaverse will feel real too.

Brittan Heller:

If the person sitting next to you is either screaming at you, or trying to touch you inappropriately, or committing acts of violence, why do we treat that differently than if it's actually happening in your physical environment?

Justin Hendrix:

So at this conference about XR at Stanford, there was a panel about how harms in the metaverse could actually be real harms. One of the panelists was Dr. Mary Anne Franks, a law professor from the George Washington University (and President and Legislative & Tech Policy Director of the Cyber Civil Rights Initiative). And her biggest concern was about how violence against women would play out in virtual worlds.

Rebecca Rand:

Is that a thing that's already happening?

Justin Hendrix:

It definitely is, and it definitely has been happening for some time. Here's Dr. Franks.

Dr. Mary Anne Franks:

One of the first things we heard about as the metaverse began encroaching upon everybody's reality was allegations of sexual harassment. So one of the first things that apparently some users have done in virtual reality is try to figure out ways to make women feel as uncomfortable as possible. But that is a tendency that we see with every technology when it emerges.

Rebecca Rand:

Yeah, I can't say I'm surprised. I've heard how women are treated in online gaming, for instance.

Justin Hendrix:

Absolutely. And one big problem is that law enforcement already doesn't really know what to do about online sexual harassment, even on regular old social media or in games, when, say, doxing or rape threats happen. And another thing Dr. Franks talks about is that tech companies often don't design these virtual spaces with women or sexual minorities in mind.

Dr. Mary Anne Franks:

Those are failing in some ways to recognize that this was going to be a problem. That is, when Meta was told about this problem, this is a big deal, this person was virtually groped, Meta's response was, "Oh, this person should have activated this shield that is possible." As opposed to making it a default, as opposed to making it something that was programmed into the design of the space and the experience. And there's really at this point no excuse for it. There was a case back in 2016, a virtual reality game where someone who presented as a woman explained that she had felt not only all the wonderful experiences of this kind of virtual reality in the game, but that as soon as she was sexually assaulted in this game, of course, that also felt very real.

Justin Hendrix:

For lots of scholars who focus on XR, there's something about this realness. Researchers call it 'verisimilitude,' which basically means closeness to reality. That's really a double-edged sword in the metaverse. And when that assault in 2016 happened, Dr. Franks said people were minimizing it.

Dr. Mary Anne Franks:

And there was this very strange response of, we care very much about the verisimilitude when it's positive things, right? Oh, it really feels like you're falling off a cliff, or you're really putting on armor, or whatever the case may be. But when it's, oh, you were sexually assaulted in virtual reality, well, that's not really real.

Rebecca Rand:

Again, I'm not surprised.

Justin Hendrix:

So a bunch of researchers at this conference kept coming back to the idea that sexual assault in the metaverse is real assault. And in our conversation later on, Brittan Heller agreed.

Brittan Heller:

I'm a former prosecutor, and some of the people who've said they've been sexually assaulted in the metaverse, when I talked to them afterwards, they actually showed the same type of behavioral patterns, speech patterns, and basically everything that I would look for in a physical sexual assault victim is present when people have claimed to have been sexually assaulted in a metaverse-related property. Yeah, it freaks me out a little bit.

Rebecca Rand:

That's really interesting and also pretty scary. I'm not sure that people who enter the metaverse really understand the ways that they're potentially going to be exposed to violence or threats by bad actors. That's something that really isn't addressed by tech companies who are operating these platforms.

Justin Hendrix:

Not nearly enough. Dr. Franks, for her part, didn't have a ton of faith that the tech industry would be proactive about preventing violence in the metaverse as they build these worlds.

Dr. Mary Anne Franks:

That tendency to design structures for people who have always had power, who have always been able to navigate these spaces with their interests already in mind, we're not taking the time to think about all the ways, historically, that we have seen new spaces, new products, new ideas always used against those who are most vulnerable. And in particular the kinds of harms that this will mean for women and people at the intersection of multiple forms of discrimination. Recklessness as a general kind of design concept, that would be the really broad-based concern I have.

Rebecca Rand:

'Recklessness as a design concept.' Wow. What was it Zuckerberg used to say? Move fast and break things.

Justin Hendrix:

That's sort of the ethos that a lot of Silicon Valley still operates under, even as we've seen the devastating impacts that social media can have on our society or democracy.

Rebecca Rand:

I think I'm finally starting to understand why Brittan Heller wanted to get a bunch of smart policy people in the same room talking about how to regulate extended reality. Do they have any ideas about how to hold these platforms accountable for the bad stuff that might take place in their virtual world?

Justin Hendrix:

That's a good question, and this is what Brittan said.

Brittan Heller:

Before I look for questions of accountability, I look for questions relating to effective rules and enforcement schemes.

Rebecca Rand:

So where would the law even start trying to govern what people do in virtual worlds?

Justin Hendrix:

Well, there's no simple answer to that. If there were, I don't think we would have had to have this conference. But people had lots of different ideas about which laws could govern what crimes.

Rebecca Rand:

I'm actually glad we're talking about sexualized violence because I feel like that represents the quagmire we're in. Even in the physical world, it's an assault both on someone's physical being and their mind, so that's already kind of straddling a physical and a nonphysical world.

Justin Hendrix:

That's an interesting point, and it also came up during the conference. One of the panelists, Jameson Spivack from the Future of Privacy Forum, didn't think society would see virtual assaults as real assaults.

Jameson Spivack:

A lot of the research that does exist shows that when someone is assaulted in VR, the effects are similar physiologically and mentally to when someone is assaulted in the physical world. But I don't think that the courts and the general public are likely to see assault in VR as analogous to physical world assault. They might see it as intentional infliction of emotional distress, but probably not assault.

Justin Hendrix:

So another panelist was Joseph Palmer from the Department of Justice. He thought one way around this was to categorize virtual crimes as computer crimes.

Joseph Palmer:

There's all these puzzles about, okay, you do something malicious to somebody, but it's in virtual reality. In the prosecution business, we just have to use the tools that are available to us, especially at the federal level. The Computer Fraud and Abuse Act is potentially a pretty powerful tool that prohibits basically doing something in the computer that's against the rules of the computer.

Justin Hendrix:

Still, other panelists, like Eugene Volokh, a law professor at UCLA, thought it was unlikely the Feds would want to get involved in people's online conduct in this way.

Eugene Volokh:

Let's say that I show up in some virtual environment with a naked avatar. They could argue it's a violation of the Computer Fraud and Abuse Act. My question is how many federal prosecutors are going to actually prosecute that? My understanding is it's not easy to get the federal government to prosecute unless there are missing millions, dead bodies, or kilos of cocaine.

Rebecca Rand:

He was hilarious. He was so funny. His examples.

Justin Hendrix:

That guy's very smart.

Rebecca Rand:

He's very smart, but his examples were just off the wall.

Justin Hendrix:

Yeah. And another panelist, Amie Stepanovich, who's also from the Future of Privacy Forum, thought that categorizing virtual misconduct as just computer crimes didn't live up to the seriousness of sexual violence.

Amie Stepanovich:

That is extending computer crimes well beyond where computer crimes should be extended in many circumstances, and I'd love to see us rely on the actual underlying criminal law wherever we can.

Justin Hendrix:

She also pointed out that many victims would rather see people who violate them held accountable under actual sex crimes rather than computer crimes.

Amie Stepanovich:

I actually think there are important reasons when you're violated for a criminal to be pursued under certain crimes as opposed to being pursued under computer crimes. There are ways to recover from those violations that I think are bolstered by seeing the person responsible for those harms brought under certain types of prosecutions.

Rebecca Rand:

Like actual sexual assault charges.

Justin Hendrix:

Absolutely.

Rebecca Rand:

I totally see your point, but I feel a need to point out, we're already just terrible at addressing sex crimes in the real world. Oftentimes police don't investigate, prosecutors don't prosecute, and juries don't convict.

Justin Hendrix:

Yeah. Professor Volokh also made that point.

Eugene Volokh:

I'm all in favor of throwing the book at people. At the same time, we have to understand realistically the limits of enforcement in this kind of situation. It's not like we're great at enforcing sex crimes law as it is. Now if we want to punish the harm, I'm all for that as a matter of moral principle. I just think as a practical matter it will often be so difficult that this is an area where we should be investing a tremendous amount in prevention rather than relying on after-the-fact punishment and whatever deterrence it might bring. The really important question is how can we figure out really good security systems?

Justin Hendrix:

Then someone in the audience suggested programming a swift kick to the groin into the haptics of the XR setup. But even that has its limitations.

Eugene Volokh:

It will be a very quick response. I love the idea of a swift kick in the groin of the groper, or the masturbator, or whatever else, but they're not idiots, right? If they have haptics on their VR setup, they will presumably have the haptics set up to disable the swift kicks in the groin.

Rebecca Rand:

As funny as that may be as an idea, I also want to point out that it puts the onus on the person being violated to police the groper's behavior. What's to keep the platform from collecting data and saying, "Oh, we noticed that avatars that look like women get groped. So guess what? No more female avatars allowed. Look, we're preventing crime." We've talked about how tech companies don't exactly have the greatest track record on making online spaces safe for women and sexual minorities.

Justin Hendrix:

Yeah. Dr. Franks talked about that earlier with how Meta said the person who got groped on their platform should have activated the shield rather than putting in the work to make that a default.

Rebecca Rand:

I also feel like there's this argument that gets made to women who get abused online like, "Just stay off the internet." As though the online harms don't bleed into the real world, and just not looking at it will protect you. Like, "Just take the headset off. It's over."

Justin Hendrix:

Absolutely. One of the things that I teach my students about is Gamergate, several years ago. One of the women being targeted in that harassment campaign, Zoë Quinn, was getting rape threats, death threats, getting doxxed. She went to court trying to get it to stop, and a judge told her just to stay offline. I've heard the same thing from other women that I've talked to. When you say that, you're basically telling people who are targeted to remove themselves from a forum for political expression and participation. That has huge implications for their ability to participate in democracy.

Rebecca Rand:

Definitely, and I feel like there's a bunch of laws that enshrine people's right to exist in certain spaces safely, free of harassment and targeted hostility, like the Civil Rights Act protecting people in the workplace and at the polling place, or Title IX at school, or the Americans with Disabilities Act. There's this idea that harassment can limit people's ability to participate in society, in democracy, and more and more the virtual space is that kind of venue.

Justin Hendrix:

You're getting to one of the core things we work on in the tech policy world, which is how do we make spaces safe and preserve expression for as many people as possible? There's a real tension there.

Rebecca Rand:

Are there any governments or agencies taking steps to start regulating the metaverse?

Justin Hendrix:

So there was this back and forth between two panelists that sums it up nicely. Susan Aaronson, who's a professor of international affairs at the George Washington University, was talking about regulating technology.

Susan Aaronson:

I was listening. I don't really want to use the word regulate, I want to use the word govern. When we talk about regulation, I don't think anybody knows how to regulate, whether it's data, platforms, companies, et cetera, and that is a problem when we talk about governance. Governance tries to be technologically neutral. And I think in some respect, that makes perfect sense, because you don't want to squash innovation.

Justin Hendrix:

She's talking about a kind of American perspective that's very pro-innovation. This other panelist, Florence G'sell, who's a French law professor from the University of Lorraine, showed us how it's different in Europe.

Florence G'sell:

As I am the French scholar in the room and the European scholar in the room, I have to say that I will speak about regulation because regulating is something that we are very good at. Over the past few months, I have heard that we should probably adopt a metaverse act. Since we have so many acts already, it's just one more act. We need to look at the user experience and we need to see how fundamental rights are affected. This is what we'll be doing in Europe regarding extended reality. We will look at privacy issues. We will look at what we call human dignity, family life, the rights of the child, and we will try to regulate on that basis since we definitely need to regulate.

Justin Hendrix:

Back to Brittan Heller, the woman who organized this conference. She also highlighted that the US is not going to be the first place to look for regulation to happen.

Brittan Heller:

I think intergovernmental organizations are actually the place that I look to for the start of this. Interpol announced for its 100th anniversary that it was going to create a metaverse-like presence for international law enforcement.

Rebecca Rand:

Wait, is this like there's going to be virtual cops in the metaverse?

Justin Hendrix:

This stuff is totally in its infancy and it's mostly about getting international law enforcement familiar with the metaverse, even just as a training platform. But they're working with the World Economic Forum, Meta and Microsoft on an initiative to define and govern the metaverse.

Rebecca Rand:

Define and govern. I guess I have a question here about involving tech companies in developing policy.

Justin Hendrix:

Okay.

Rebecca Rand:

Well, I guess as a journalist, it just makes my skepticism alarms go off. Organizations like this are partnering with industry to decide how to regulate them. I recognize that many tech companies have smart people who are doing important work on how not to destroy the world. But these are for-profit companies whose bottom line is to grow and to please their shareholders. And some of them have, well, not the best track record of acting in the public interest.

We've seen how industry has captured so many regulatory agencies in the US, the FDA, the EPA, and some would argue that these public-private partnerships have weakened regulation, not strengthened it. Even at this conference at Stanford, it was a mix of academics and public sector people, but there were also folks from OpenAI and Meta speaking. And no shade on them, they seem like really smart people who want to do the right thing.

But can you just explain to me why these companies get to have a seat at the table when discussing policy? What's their role?

Justin Hendrix:

This is a can of worms, and it would probably take us another podcast to solve these questions. But I'll just say that you're right that there are many folks working in policy roles and in trust and safety organizations inside these companies who absolutely want to do the right thing. And they are committed to advancing the thinking and the practice of trying to make people safe and trying to increase participation in digital environments. Oftentimes, as we know, as you've pointed out in your question, the reality is that their interests run up against the corporate interest or, in some cases, the interest of their billionaire owners, and sometimes they don't get what they want.

But on the other hand, I think it is worthwhile to have them at the table in the sense that they bring a kind of specificity and often nuance to some of the technical limitations and organizational limitations that they face when they're trying to solve these problems.

One of the people in the room you're talking about is this guy called Joe Jerome. So Joe, I know very well. He came out of civil society, out of an activist orientation, took a job at Meta, surprised a lot of people. But he's a privacy fanatic lawyer and he wanted to go inside there. He just got laid off like two weeks ago. So the good people who come in from the outside, and no doubt they get paid a lot of money, they're often the first to go as well. And actually, Professor G'sell pointed out how the EU recognizes the problem you're bringing up and is doing things differently than the US.

Florence G'sell:

From a European perspective, we are very much concerned about the overwhelming power of big tech. It is now very common in Europe to highlight the fact that those big, often American companies are super powerful and as powerful as states. Who do we want to regulate? From a European perspective, it would probably be those big actors that we don't want to be too powerful. In those extended reality environments, we don't want to see those big centralized platforms decide how our data can be collected.

Rebecca Rand:

Yeah. Let's talk a bit about big data and privacy. Are there specific data privacy issues that come up with extended reality?

Justin Hendrix:

There are many, many, many, many, many privacy concerns that come up with XR. You can think of it like this. When you're on your phone or computer, you're already generating lots of data about yourself that companies can keep track of and use for their own benefit. Like what you look up, who your friends are, what things you click on. When it's XR, there's even more data to collect.

Remember Jameson from the Future of Privacy Forum? He was talking earlier about how judges or the public would view sexual assault, but his big thing is actually big data. Here's what he said.

Jameson Spivack:

XR relies on large volumes and varieties of data, sensor data about our bodies, about our environments, data from our devices about what we're doing on these devices, our precise and approximate geolocation data. And altogether this data provides a really intimate view of our interests, our behaviors, our medical, physical or mental health conditions. And in the wrong hands, this could be used to make discriminatory decisions or other harmful decisions about people.

Justin Hendrix:

And I should say he's written about these ideas in Tech Policy Press as well. What he pointed out here is that there isn't really any comprehensive federal privacy law. There are laws that protect specific types of information, like HIPAA for healthcare or FERPA for education. But right now, once we agree to the terms of service when we use an XR application, we generally give the company a free pass to collect data on us and use it as they wish. And actually, eye tracking data is one of the big ones that could be really sensitive.

Rebecca Rand:

Eye tracking data. How is that?

Justin Hendrix:

So a lot of headsets now include eye tracking technology, using a camera to track people's eye movements, ostensibly to improve the experience for the user. It helps with rendering and also makes it clear on their avatar what they're looking at. The new Vision Pro from Apple and the Meta Quest headsets both use eye tracking. But this one guy at the conference, Avi Bar-Zeev, he's an XR designer and entrepreneur. He gave a presentation about the numerous intimate details tech companies could extract just from eye tracking data.

Avi Bar-Zeev:

Some of the types of things that we can get from eye tracking data: What are you thinking about? What are you looking at? Your emotional responses, your pupil dilation.

Justin Hendrix:

I'll just jump in here to elaborate. He had this slide listing some of the things that could be inferred from eye tracking data. You can gauge someone's interest in a thing by tracking what they're looking at, like the direction their eyes are pointed and how long they're looking at it. The thing he was saying about pupil dilation, that can reveal things about how excited or motivated people are. And there's a lot of really private things that eye data can reveal about a person.

Avi Bar-Zeev:

The fact that eye tracking data has already been used to detect things like autism or MS, Alzheimer's, ALS, schizophrenia, and used effectively.

Rebecca Rand:

That's pretty wild, though I assume that's not what the VR platforms are really interested in.

Justin Hendrix:

Mostly it comes down to ads and money. Here's what Avi Bar-Zeev said.

Avi Bar-Zeev:

The advertisers are salivating over this, right? It used to be we'd collect the data on people, we'd improve the models, we'd use the data, and that happened over years. But now we're looking at a world in which it can be done in real time, per individual. And that's where it gets scary. Because when it's used per individual, we can figure out what your triggers are, how to manipulate you to get emotional, to get defensive, to get you into whatever state we want you to be in. And that's kind of dangerous.

Justin Hendrix:

And another thing he brought up is that raw eye tracking data is not anonymous because you can easily identify people by matching unique features on their irises or retinas.

Rebecca Rand:

Yikes. This eye tracking data thing sounds like a huge vulnerability.

Justin Hendrix:

I will just point out, it's interesting that Apple's approach to eye tracking data actually does appear to be built on some privacy-by-design principles. So there are limitations they're putting on what access third-party developers would have to that information.

Rebecca Rand:

That tracks with how Apple generally operates with their user technology.

Justin Hendrix:

Absolutely. There's this other guy who spoke, Kent Bye. He has a podcast by the way, which all you listeners should definitely check out. It's called Voices of VR, and it's a great look into the future of extended reality. Anyway, Kent gave us a look into the future of what kinds of sensor data these extended reality setups might be using.

Kent Bye:

Moving on to the biometric data from XR and neuro-rights.

Rebecca Rand:

Wait, what did he say? Neuro-rights.

Justin Hendrix:

Kent's a really smart guy and he's a bit of a fast talker. He was given the difficult task of smushing that entire presentation into just a few minutes. He said neuro-rights, like neuro, the brain.

Rebecca Rand:

Got it. Okay. Back to you, Kent.

Kent Bye:

And neuro-rights. This is where I think the rubber hits the road in terms of what keeps me up at night in terms of the implications of XR. So I went to the Future of Neuroscience and VR conference by the Canadian Institute for Advanced Research in 2019, and they had a neuroscientist there who was showing this latest research that was implanting these different nodes into the brain. And so you have people who were just thinking a thought, and from that thought, you're able to use AI to do the speech synthesis. So it's able to basically translate your thoughts into words. It's essentially mind reading technology. Meta was showing the early prototypes of, what if your brain could type, you'd think and it could type. So this idea of this brain computer interface, they eventually stopped that in 2021. But they were working on these invasive neurotechnologies.

Rebecca Rand:

Invasive neurotechnologies. Oh my God. You see, if I go around and start telling people about this, like, "Facebook wants to implant nodes in your brain and read your thoughts," people are going to think I'm insane, like tinfoil hat type stuff.

Justin Hendrix:

Well, they did purchase a company that had the ominous name CTRL-Labs, which does neural sensors.

Rebecca Rand:

Goodness gracious.

Justin Hendrix:

Welcome to the world of tech policy. Anyway, turns out you don't even have to implant anything. Kent talked about a very futuristic headset that could take your temperature and measure your brain waves and your nerve impulses, and now, with powerful AI trained on massive data sets, could infer what people are thinking or thinking of doing with much less invasive sensors. Here's Kent.

Kent Bye:

Well, back in 2021 there was a paper and a poster from Meta that was showing how you could just take hand tracking data and combine it with head pose data and be able to extrapolate, through AI, the eye gaze data. So if you think about all those biometric inferences from the eye gaze data, now if you just have hand poses and head pose, you have eye gaze data, which is fusing all this together. So the idea is that you have all these different biometric inferences, and you may think that it's pretty innocuous to have just body movements. But those body movements could be revealing all these other aspects of mental privacy.

Rebecca Rand:

Wow. So you don't even need to track people's eyes anymore to tell where they're looking.

Justin Hendrix:

That's right, and there are other sensors that can even predict our behavior, not just track it.

Kent Bye:

So Meta has an EMG sensor that you put on your wrist and it's able to isolate down to an individual motor neuron. What that means is that you can just think about moving and that thought about moving, the intention to move can actually trigger a movement within a virtual context.

Justin Hendrix:

So I will say, as someone who had the opportunity to visit CTRL-Labs' offices before it was sold to Meta, the extent to which it was able to read a signal from your neural system through your wrist and allow you to control an interface on a screen, or a robot, or an augmented reality experience, it was one of the most impressive and perhaps terrifying technologies I ever had the chance to see demonstrated.

Kent Bye:

So what does it mean for a company to be able to detect intentions to move? What are the human rights frameworks to be able to have an idea for how to put some guardrails into some of this neuro turf?

Rebecca Rand:

Yeah, let's talk about guardrails. Because again, companies like Meta do not have the best track record when it comes to protecting their users' privacy.

Justin Hendrix:

That's right. And there were representatives from Meta on some of these panels who seemed very open to some of these guardrails like Joe Jerome.

Rebecca Rand:

That's great, but part of me wonders how much influence people like Joe Jerome have on how this tech gets developed.

Justin Hendrix:

That's something that Dr. Franks, who we heard at the beginning talking about sexual assault in the metaverse, brought up. She brought it back to your earlier point about how these tech companies are motivated chiefly by making money, and your data is a valuable asset they can use or sell to others. She took issue with Meta's Ego4D project, which basically takes data from first-person videos, like what might come from a pair of augmented reality glasses, and records and interprets what a person is perceiving. Dr. Franks pointed out that when it comes to privacy, technology companies generally ask for forgiveness rather than permission.

Dr. Mary Anne Franks:

What Meta is doing with Ego4D, which is this constant kind of recording of the people around you, as soon as they launch the project, the reporter asks, "What are the privacy controls that you're going to have?" And Meta says, "Ah, that's going to happen down the line." So this is exactly not the right way to do it. And it does, I think, really underscore how you can't let the people who are profiting from this be the ones in charge. That's the worst possible approach we could have.

It'd be wonderful if there were some guidelines that responsible companies wanted to take up, but it shouldn't be optional. So I think again, the entire approach of just saying, "Oh, well, we already have this thing. Now what do we do to maybe mitigate some of the harms that have become public?" That's completely backwards. These products, these services should be subjected to extensive testing to ensure that they're not being rolled out to the public in ways that are just fundamentally unsafe. There needs to be some kind of preemptive way to get at some of these problems and not leave it up to either the corporations that really want to make money from these products, or from individual users who have either opted in or not opted in, that can't possibly be the solution. There has to be something stronger, more robust, and more meaningful than that.

Rebecca Rand:

Man, I'm with her. It seems naive to try to trust these companies to regulate themselves.

Justin Hendrix:

I hear you, and I guess that's one of the things I'm trying to do with this podcast and with Tech Policy Press: to get folks concerned about policy interested in this stuff, and to see that we really do need policy to step in here. Because as human behavior gets sucked more and more into the machine, like it or not, what protections will we have from each other, from the companies that exploit us in these environments, or even from governments who can seize this data and surveil us?

Rebecca Rand:

Well, I guess I got to go out and buy some tinfoil, protect these brainwaves.

Justin Hendrix:

I suspect that is always a good idea.

Rebecca Rand:

Well, this has been the Sunday Show from Tech Policy Press, not sponsored by aluminum foil companies. Thank you so much, Justin, for guiding us through all that.

Justin Hendrix:

Thank you and you're very welcome, Rebecca.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
Rebecca Rand
Rebecca Rand is a journalist and audio producer. She's pursuing her Master's degree at CUNY's Craig Newmark Graduate School of Journalism. In summer 2023, she is an audio and reporting intern at Tech Policy Press.
