
A Design Code for Big Tech

Justin Hendrix / Oct 29, 2023

Audio of this conversation is available via your favorite podcast service.

Today’s guest is Ravi Iyer, a data scientist and moral psychologist at the Psychology of Technology Institute, which is a project of the University of Southern California Marshall School’s Neely Center for Ethical Leadership and Decision Making and the University of California-Berkeley’s Haas School of Business. He is also a former Facebook executive; at the company he worked on a variety of civic integrity issues.

The Neely Center has developed a design code that seeks to address a number of concerns about the harms of social media, including issues related to child online safety. It is endorsed by individuals and organizations ranging from academics at NYU and USC to the Tech Justice Law Project and New Public, as well as technologists who have worked at platforms such as Twitter, Facebook, and Google.

I spoke to Iyer about the details of the proposed code, and in particular how they relate to the debate over child online safety.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Ravi, what does the Center for Ethical Leadership and Decision Making get up to?

Ravi Iyer:

We focus on ethics and technology, in part because we think technology is an important force in the world: it has powerful shaping effects on our behavior and on both our individual and our societal well-being. It's not the only thing we could have focused on, but we think it's the best lever to focus on at this moment.

And I was lucky: a professor I used to work with back in academia, when I was a psychologist by trade, happened to become the director of this center just as I was leaving Meta. He brought me on board to help move the work forward.

Justin Hendrix:

Some of my listeners may be familiar with you and the work you've published in Tech Policy Press, which I'm grateful for. For those who might not know you, say a word or two about your career at Meta and what you got up to there?

Ravi Iyer:

Yeah. I started my time at Meta as a data science manager in News Feed Integrity. And I'd done a lot of work on polarization. I have an academic degree in social psychology, and I've done some work with Jonathan Haidt on polarization.

I also helped a friend of mine start a company called Ranker, so I had this dual career in tech and in studying polarization. And somebody said, well, if you go work at Facebook, you can work on polarization and you can impact a lot of people. So I did that. About six months into my time there, there was a fairly famous Wall Street Journal article about the company deprecating its efforts on polarization.

So I had moved my family up there, and I was wondering, what am I going to do? I came here to work on polarization. But I figured out ways to work on polarization without necessarily calling it that. There were still avenues, places where the company was willing to lean in a bit more: things like elections, things like at-risk countries.

I actually moved from being a data science manager to being a research manager, where I could ask users if their definitions of hate speech matched our definitions of hate speech. Often they didn't. And then I eventually realized that the only way to attack these problems more broadly was to move from the integrity efforts to more of the core product team. So I ended my time there working on the core News Feed team, trying to figure out how to attack these problems upstream, from a design angle, as opposed to downstream, cleaning up after whatever the other teams were doing.

So that informs a lot of my time since leaving Facebook, where I try to bring that experience and message to the wider world because I think a lot of things we did at Facebook could be done more aggressively. They could apply to TikTok, they could apply to YouTube. So I'm hopeful that by applying those lessons I can have even more impact.

Justin Hendrix:

So one of the ways that I think of you in my mind is through a phrase you used in a post you published not terribly long, I think, after you got into this new line of work you're in at the moment, which is that "content moderation is a dead end."

For a lot of folks listening to this who do trust and safety type work, or who have perhaps done a lot of work on disinformation or what have you, that type of phrase might be fighting words. Why do you say content moderation is a dead end? And how is that the jumping-off point for your work?

Ravi Iyer:

So I don't mean to say that content moderation is valueless. I chose the phrase 'dead end' intentionally. Honestly, I worked alongside many people on trust and safety and integrity teams, and a lot of those people agree with me. You work on these problems, and you wake up the next day working on the same problem.

You're wondering, can I fix this problem at a more root level as opposed to fixing it with a moderation-based approach? So I chose 'dead end' for a reason. I chose it to indicate that moderation is not going to get us to the place we want to get to, not that content moderation and the people who do it don't do important and good work.

But I often refer to Maria Ressa's analogy of a dirty stream: you scoop a glass of water out of that stream, you clean that glass of water, and you dump it back in the stream. And I'm not saying it's not important to clean a glass of water, because maybe you were going to drink that glass of water.

So it's important to improve people's experiences, certainly, but you need to address the upstream design of those systems, whatever is polluting that stream in the first place, or else you're going to wake up every day trying to clean more and more glasses of water. And in a world of generative AI, that's going to get even worse.

So I chose 'content moderation is a dead end' intentionally, to help move people from, in my opinion, the place where they were to the place they needed to get to. But it was not meant to say that content moderation doesn't do some good, important work.

Justin Hendrix:

So we are going to talk about this topic specifically around child online safety. In the last couple of months, you've put some thinking out into the world, particularly around responses to the Kids Online Safety Act. Of course, there's other legislation across the country that has, in some cases, been put into law around child online safety, including the California Age Appropriate Design Code and some of the perhaps more restrictive and different laws that have been passed in places like Arkansas, Utah, et cetera.

Talk to me about this work, this idea of improving the duty of care. KOSA has this duty of care. That's one of the main sticking points amongst advocates who look at these things. What are you working on here?

Ravi Iyer:

So, I think that the people behind these efforts are absolutely in the right place with their intentions. I think everyone would agree that platforms should act in the best interest of minors. But I do think that when you are not more specific about exactly what you mean, it leaves room for people to argue that you're actually trying to legislate what people can and can't say, which is unconstitutional, or at least it leaves room for people to abuse those efforts, right?

So some people may say a duty of care means that you can't see content from members of the LGBTQ+ community because that is harmful to youth. People can twist the duty of care in different ways. People can think you mean things that you don't. And so I think there's value in being specific.

We recently came out with our design code effort. We intentionally try not to prescribe exactly how it gets implemented. Platforms could do some of these things voluntarily, and there are ways that app stores could play a role in these efforts. But I think the important thing is to be specific about what we want.

So it could be implemented as part of a duty of care, which is something I support. I just think a duty of care can be improved upon by being more specific, whereas a vaguer duty of care is prone to abuse and prone to legal challenge.

Justin Hendrix:

Let's talk about the duty of care that's presently in KOSA for just a minute.

So you mentioned this idea of requiring a platform to act in the best interests of the user. That's literally point (a) under section 3 of KOSA's duty of care: "A covered platform shall act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate the following..." And then, of course, it goes through lots of bad things that might happen to you, whether it's addiction-like behaviors, physical violence, online bullying, deceptive marketing practices, financial harm, et cetera.

But it's interesting, I guess. It's one of the places where I think maybe critics get hung up: there's some language, certainly in the limitation, that begins to suggest that the intent here is about limiting content, about content moderation. There is an exception, of course, that says that nothing in the duty of care shall be construed to require a platform to prevent or preclude any minor from deliberately and independently searching for, or specifically requesting, content.

It's almost as if baked into that is the presumption that, of course, as a result of complying with KOSA, the platforms are going to remove content from people's feeds. That's part of the goal here.

Ravi Iyer:

Yeah, I think it's somewhat unclear when you read it, right? And I think part of it is a goal to have your cake and eat it too, where you want to address the design of the platforms, but you also want to make a nod to the content that you're attempting to address.

And so I do think that there is room for improvement. Again, I applaud the efforts. I think all bills move the conversation forward. I'm not here to criticize; no bill is perfect. But I would say that, in the interest of moving things forward, yes, I do think a more specific focus on design would make for a better bill. And as the bill evolves, I'm hopeful that it evolves in that direction.

Justin Hendrix:

So let's talk about your prescriptions. What is in the design code that you've put out?

Ravi Iyer:

So the design code reflects a lot of the best practices that I experienced in my time at Meta, the things that were most effective in crisis settings and in elections. People would often say things like, why don't we have these things all the time?

People on the outside would look at these break-the-glass measures that platforms would take and say, I don't understand why you would ever turn them off. And when I left Meta, I would talk to people at other companies, and many other companies did similar things.

So the things in the design code are meant to be things that a broad set of people in society would agree with, things that hopefully transcend political divisions. There are things like: don't optimize for engagement, instead optimize for quality, especially for important conversations. There are things like: allow users to tell you what they don't want to see, or do want to see, even when that's contradicted by their engagement signals, because there are lots of things that I will engage with that I actually don't want. There are things like rate limiting accounts, so that small groups of people can't dominate an information space.

There are things like privacy defaults. The code talks a little bit about how some of these things are especially important for minors. And then at the end it talks about things that I've learned, or people have learned, through product experimentation, comparing two different versions of a product to understand what actually leads to a better experience, both for outcomes of interest to democracy and for outcomes of interest to children's wellbeing.

And it talks about how we've learned those lessons through product experimentation, and how we need product experimentation results going forward if we're going to think about the next set of design codes as well.
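
To make those principles concrete, here is a minimal, hypothetical Python sketch of a feed ranker that weights a survey-based quality signal over raw engagement, lets an explicit "show me less of this" signal override engagement, and rate-limits how many posts any one account can place in a feed. The field names, weights, and limits are invented for illustration; they are not drawn from the Neely Center design code or from any platform's actual ranking system.

```python
from collections import defaultdict

MAX_POSTS_PER_AUTHOR = 2  # simple per-author rate limit within a single feed

def score(post):
    # Raw engagement signal: the thing the design code argues should not dominate.
    engagement = post["clicks"] + post["comments"] + post["reshares"]
    # Survey-based quality signal, e.g. responses to "was this worth your time?"
    quality = post["survey_quality"]
    # An explicit "show me less of this" signal overrides engagement entirely.
    if post["user_said_show_less"]:
        return -1.0
    # Weight stated quality above engagement; the 0.2/0.8 split is invented.
    return 0.2 * engagement + 0.8 * quality

def rank_feed(posts):
    per_author = defaultdict(int)
    ranked = []
    for post in sorted(posts, key=score, reverse=True):
        # Rate limit: no single account can dominate the information space.
        if per_author[post["author"]] >= MAX_POSTS_PER_AUTHOR:
            continue
        per_author[post["author"]] += 1
        ranked.append(post)
    return ranked
```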

Justin Hendrix:

So there are nine basic principles in this design code, the details of which, of course, you can find in the show notes to this podcast.

Let's talk a little bit about the process. You've mentioned various experts were involved in putting this together. How did some of the existing legislation that's out there, maybe the children's code in the UK or the age appropriate design code in California, stack up against the ideas you have here? How is it the same? How is it different?

Ravi Iyer:

So, I think it was informed by many of the successes of that legislation. In the UK, as a result of the age appropriate design code, things like infinite scroll and autoplay are less prominent in many products that children interact with.

Those are successes that we wanted to enshrine, and they also reflect things that were done at platforms to make things safer around elections. A lot of the things you do to help kids are also things you would do to prevent harm in an election setting. The things you would do to protect journalists are oftentimes the same things you do to protect kids from being harassed by large groups of people.

So yeah, it definitely built upon some of those successes. It also tried to address some of the criticisms that you've seen online, where people have talked about the ways that age estimation can be a privacy concern, right? Nobody, I think, wants people to be checking IDs to access content online.

And so, through many conversations, we tried to come to things that many people could agree about. Nobody really cares if just Ravi says these are things we should do. I think we all agree that there's a problem in the world and we'd all like to fix it. So what is a solution that we can all get behind?

We talked to many people, and the lowest common denominator for protecting kids was not age estimation but device-based control. So effectively, if I buy my kid a child's phone, the apps they're interacting with on that phone don't need to know anything about the kid other than that they're using a kid's phone, right? There's no change in the user experience for adults. There is no additional data being collected about kids. I think there are very well-meaning people who might want to go further than that, and that's a conversation people can have, but this is at least a step that many people could get behind, and one that doesn't have some of the same issues that more aggressive steps have had.
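
As an illustration of the device-based approach, here is a hypothetical sketch of an app consuming a single device-level "child device" signal and applying safer defaults. The `device_is_child()` function is a stand-in for whatever flag an operating system or app store might expose; it is not a real API, and the specific defaults are examples rather than requirements of the design code.

```python
def device_is_child() -> bool:
    """Stand-in for a device-level flag set when a parent provisions a child's phone.

    In practice this would query the operating system or app store; no real
    API is implied, and nothing else about the user is revealed to the app.
    """
    return True  # placeholder value for the sketch

def default_settings():
    # The app sees only one bit of information and adjusts its defaults accordingly.
    if device_is_child():
        return {
            "autoplay": False,
            "infinite_scroll": False,
            "optimize_for": "quality",   # not time spent
            "private_by_default": True,
        }
    return {
        "autoplay": True,
        "infinite_scroll": True,
        "optimize_for": "engagement",
        "private_by_default": False,
    }
```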

Justin Hendrix:

Yeah. I guess there is some legislation out there that seems more aimed at maximizing parental control than at maximizing child safety, or at least that's the sense you get when you look at some of the bills in places like Arkansas or Utah.

Ravi Iyer:

Yeah, I think there are well-meaning people debating these issues. I think there's still lower-hanging fruit. We haven't had a lot of people from free speech groups sign the code yet, but we've had input from a lot of them, and they were much more okay with the idea that a parent buys a child a phone and it has this functionality than with anything that would ban or restrict content globally for anyone under a certain age.

Justin Hendrix:

So that's core to it. It's actually almost a device level intervention here, you're saying?

Ravi Iyer:

Yeah. And it doesn't say, for example, at what age someone should have a phone; that's up to the parent. A parent buys a kid a phone, and the phone indicates that it is a child's phone. When the parent decides the kid is ready for an adult phone, they can buy the kid an adult phone. And it gets away from age estimation. I've taken hard questions from legislators about how you estimate age, and there are no good answers there, right?

If I'm being honest as a technologist, I have to admit that we're going to make mistakes. I'm going to treat you like a child sometimes, even though you're an adult. I'm going to treat your child like an adult sometimes, because it's an estimate. It's going to be imperfect.

I think just having that certainty gives people a lot more comfort and removes some of the creepiness factor. And I think it's something that many people can get behind. I'm not saying there aren't better, more maximal solutions out there. It's just low-hanging fruit that a large number of people would get behind.

Everyone who signed the design code wants more; they have more things that they want. So it's not meant to be the thing that everyone wants, the silver bullet that will fix the ecosystem. It's a set of common sense things that would fix a lot of the problems of the ecosystem, that a broad number of people could get behind, and that are based on best practices from a large amount of product work across companies.

Justin Hendrix:

I want to ask you about Google's new proposal. Google has also, more recently, put out what it calls a legislative framework to protect children and teens online. This is a relatively basic document: about five pages of goals that Google-- the YouTube logo is here as well-- put forward. Do you see any similarity between what you've done and what Google has put out? Do you think the company is learning from some of the different types of proposals that civil society and groups like yours have put forward?

Ravi Iyer:

Google, or YouTube, has done work very similar to the things that I did at Meta. There are published best practices from Google where they've realized that optimizing for engagement is not the best thing to do in all cases, and they do lots of user surveys to try to optimize for quality.

Now, as with Facebook, I doubt that they did it as aggressively as is possible. I'm sure there's more to be done. I don't have the benefit of having that document in front of me, but a lot of proposals by companies tend to be more vague, and I think a lot of times the devil's in the details, right?

So it's like, okay, we shouldn't optimize for engagement, we should optimize for quality. But what exactly do you mean by engagement? What exactly do you mean by quality? We try in our document to spell out what we mean by all of these terms. And some of these things are, honestly, active areas of research in the world, right?

There is no established best practice for how to measure user perceptions of quality, for example. But there are a lot of people at YouTube, at Meta, at various companies trying to figure out how we get at what is quality content, not just content that will keep me engaged, and people who realize that would lead to better outcomes, both for our democracy and for our children. I've never seen one of those efforts not produce a positive outcome. Now, there are different ways to do it that can produce more or less positive outcomes, and we need to iterate on that. But all of those efforts will be an improvement.

So as far as proposals from companies go, I think the devil's in the details, and it's important for us as a society to have specifics so that we can hold them to account. Otherwise, if it's too vague, it leaves too many degrees of freedom for companies to somewhat conform but not necessarily do it in a way that really materially improves the user experience.

So I'm hopeful that by making something that comes from outside the company, we can put a little bit more teeth into it, and hopefully make something that can hold them to account.

Justin Hendrix:

One of the things you call for, which you've mentioned, is the idea that when product developers make changes to a product and are making design decisions, particularly when it comes to children, they should publicly provide experimentation results, and perhaps provide access for researchers to scrutinize whether those were good decisions or not. I, of course, have spent a lot of time on this podcast talking about researcher access and the need for transparency, that sort of thing.

Clearly, we need that. What do you say to folks, though, who look at the available evidence and disregard the harm? They don't necessarily think the case has been made that there are strong connections between social media use and mental health problems, particularly for children.

Ravi Iyer:

Yeah, so there's a way that people take a maximal view of social media. I'm not sure; it's a complex system, and I'm not necessarily willing to be as strong about that as some of my colleagues. But I actually don't think it matters. I think it's very clear that social media, that technology, has negative effects for a large number of youth. There were some things in the Facebook papers where one stat was that something like 30 percent of teens with body image issues are harmed by Instagram.

And then other people would say 35 percent were helped, right? So on average, maybe there's a null effect, maybe there's a slight positive effect. But the 30 percent who were harmed aren't really affected by the 35 percent who were helped. It doesn't really matter to the 30 percent who were harmed that there's this other, larger group that was helped.

The point is that there's a large group of people who are harmed by some of these products, and those effects could be made better. I think that's undeniable. And it's as simple as things like: there are children who are using these platforms, and the platforms are optimized for time spent, so the children sleep less, and sleep is uncontroversially related to mental health outcomes.

So there are a lot of these uncontroversial things built into these products. No one would say that bullying and harassment are not related to mental health, right? No one will make that argument. So if there are design decisions that platforms are making that lead to more experiences of bullying and harassment amongst youth, it doesn't really matter whether they're causing the entire mental health crisis.

If they're making design decisions that lead to more bullying and harassment, those decisions should be unmade.

Justin Hendrix:

It sort of seems to me that part of what we're saying with these design codes for children is that we want to strip out the hyper-capitalist motivations that platforms generally have and protect children from that, protect them from the profit maximization, the attention maximization. It's almost like you would rather see, if children are going to engage with these things, that it's done perhaps not at a loss, but certainly not with the goal of growing the business.

Ravi Iyer:

I think I actually might be slightly more conservative than that. The analogy I sometimes use is to building codes. We don't hold builders responsible for every bad thing that happens in a building, and I don't think we should hold social media companies responsible for every bad thing that happens online.

But if a builder designs a building with flammable materials, we hold them responsible. And the analogy does not disallow builders from turning a profit, from building great buildings, from competing with other builders to be the best builder possible. It just sets a set of minimum standards, so there isn't a race to the bottom, so builders are not out-competing other builders by building things with cheaper materials. And things like engagement optimization for kids, in my opinion, are the equivalent of cheaper materials. They're cheap ways to out-compete competitors.

I think many executives, if you asked them and they were truly honest, would say, no, they don't want to be doing that. If they could get everyone to agree that this is a bad idea, they would all agree it's a bad idea and they'd all rather compete in some other way. So it's just trying to create a set of minimum standards.

It's not saying that platforms can't try to be profitable or out compete other platforms. It's just trying to say, let's not have a race to the bottom on the backs of some of our most vulnerable users.

Justin Hendrix:

Have you had any discussions with lawmakers about this code?

Ravi Iyer:

We just released it recently, so we haven't had time to socialize it as much as we could. We've certainly talked to people who are influential in the space. Some of the people who've signed the code are Frances Haugen, Jon Haidt, the Center for Humane Technology, New Public, many academics, and Search for Common Ground: people who talk to government at various levels and talk to platforms at various levels.

So they are certainly involved in policy conversations, if not necessarily policymaking. Part of the goal of the code was also not just to talk about policy solutions, but to talk about best practices, right? And so we've also talked with people building smaller platforms, startups, things that haven't even launched yet.

We're saying, here are some things you can do to differentiate yourself from other platforms, so you might want to do these things voluntarily. And maybe they'll create better user experiences that will lead people to want to use your platform more than some of the incumbent players.

So there are a lot of different ways we envision it being used, not just by policymakers. But certainly, yes, policymakers are one big constituency. We're hoping that if we put these ideas out there, it can give them some very specific ideas: not just a broad principle that they want platforms to achieve, but also a more specific product choice that they want them to make.

Justin Hendrix:

You mentioned that you haven't had a lot of input yet from the free speech crowd, as it were: I suppose legal experts who study the First Amendment, or who think about free expression and the various tests for the degree to which free expression is being upheld or otherwise impeded.

Is that part of your roadmap here? Do you intend to battle test this against the types of First Amendment complaints that seem to have bogged down prior legislative efforts, certainly around design codes for kids?

Ravi Iyer:

Yeah, 100 percent. We haven't had a lot of people from those places sign the code, but we have taken input from them. It's just that when you ask an organization, or a person at such an organization, to sign something like this, it's a higher bar.

So it may just take a while. We definitely hope to get people who are more free speech oriented to sign some of these things. That's one reason we didn't frame these purely as policy proposals; these are things we think are best practices. There are ways that governments could support this in the same way that governments put out nutrition standards and say it's good if we all eat more vegetables.

And it's good if school lunches serve certain things, and that has an effect on local schools, for example; they try to follow those guidelines. So I think government can support some of these things in ways that free speech advocates could actually stomach, just by saying, these are good practices.

These are things we think people should do. These are things we recommend. These are things that private companies, schools, places that want to follow best practice can enact. These are things that app stores could decide to do in the service of their users. These are things that new players in the field ought to do.

So I don't think it has to be something that free speech advocates necessarily have a big problem with. That was a goal of this effort: to create something that a broad group of people could take to companies and say, look, these are best practices.

Justin Hendrix:

I'm speaking to you just a couple of days after three dozen attorneys general from across the United States sued Meta, your former employer, over what they allege: that it lured kids onto Facebook and Instagram, and that it instituted various product changes that have put kids at risk. Did you read that lawsuit? Did you consider whether your design code might have protected the company from the type of claims being made against it today?

Ravi Iyer:

I haven't read the lawsuit directly, but I'm familiar with many of the arguments in it, and many of them are not necessarily new. They focus on, yes, many of the things that are in the design code: things like not optimizing for engagement, or asking users what they want instead of assuming that what users engage with is what they want.

Not having features like infinite scroll or autoplay. Those are things that recur; people all observe the same problems. So I'm not surprised that the lawsuit nods to many of those same concerns, and that our design code happens to address some of those same concerns as well.

It's like how building codes around the world all rhyme, and it's not because anyone is coordinating. It's just because physics is the same all around the world. What creates a fire, what is a flammable material, is the same around the world. So we're all observing that engagement-based optimization has harmful effects, and that it's certainly not appropriate for our children. And so I think the lawsuit and our design code are both just a function of that observation.

Justin Hendrix:

You mentioned the United Kingdom earlier; there's the age appropriate design code in the UK. A lot of critics of age appropriate design codes in the US paint a very dire picture of what will happen to the internet if ideas like this are put into law. Have you looked closely at the UK? Have there been positive effects? Of course, you mentioned already that certain changes have been made. It seems to me I'm not hearing a furor from the UK that the internet has been ruined over there.

Ravi Iyer:

Yeah. The effects I've seen from the age appropriate design code, things like some platforms dropping those engagement-maximization features, have been positive. And I think part of the problem we have in the United States is that we're so polarized that we can see people using a thing to achieve ends we wouldn't support, in a way that you won't see in other places.

I don't know enough about UK politics to know for sure, but it seems unlikely that other places around the world are as polarized as we are, where you have attorneys general with completely opposite views about the kinds of things children should and should not see, who are therefore going to use a law for almost nakedly political ends to pursue that agenda.

It's almost a symptom, in my opinion. When I was at Meta, I remember reading Ben Sasse's book, and it echoed a lot of things I had been hearing. He was saying how politicians sometimes try to escape these cycles, but at the end of the day they often end up leaning into whatever performs well on social media, which is the most outrageous thing.

So in a world where that's true, I don't think Americans trust their attorneys general to necessarily do the right thing. Of course there are many great attorneys general as well, but across the country, could one attorney general take a vague law and abuse it in a way that wouldn't happen in the UK?

Yeah, I think that's something a lot of people are rightly concerned about, and so I think that's a reason why the age appropriate design code might work well in the UK but may not work as well in the United States.

Justin Hendrix:

What are you working on next over there?

Ravi Iyer:

Well, we're continuing to push the design codes. We've gotten a lot of support already, but it's not a simple 20-word statement; it is a complex document. So there's going to be a lot of hand-to-hand combat. We're presenting it to groups of people who are influential and interested in these codes, explaining why the codes are beneficial, and taking feedback on things we can adjust in our framing. So we're just continuing to push on that.

Probably the other interesting thing we're doing is a measurement effort. Like the other best practices we had at the company, a lot of the design code's provisions focus on the user experience.

They're not framed in terms of policy definitions of what is or is not harmful; they're framed in terms of user definitions of what is a good or a bad experience. A lot of the most important progress we made at Meta came when we framed things in terms of the user experience. So we also have an effort to measure user experiences across platforms, where we're trying to see: is Elon Musk making Twitter better or worse?

Is TikTok actually becoming more polarizing as U.S.-China tensions rise? How would we know, right? We can rely on anecdotes, where people say, yeah, my experience was good, my experience was bad, or we can measure these things more systematically. So we see that as a complement to the design code.

We're hoping that we can get people to adopt these principles, that we can give them credit when they do good things, and that we can actually see in our measurement efforts that the user experience is getting better.
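
For readers curious what measuring "more systematically" might look like, here is a small sketch that aggregates hypothetical user-survey responses by platform and month so that trends in self-reported experience can be compared over time. The survey item, ratings, and data are invented for the example; they do not come from the Neely Center's actual measurement effort.

```python
from collections import defaultdict
from statistics import mean

# Each record is (platform, month, 1-5 rating of "my experience was good").
# The data below is invented purely for illustration.
responses = [
    ("twitter", "2023-09", 3), ("twitter", "2023-10", 2),
    ("tiktok",  "2023-09", 4), ("tiktok",  "2023-10", 4),
]

def experience_trend(records):
    """Average self-reported experience per (platform, month)."""
    buckets = defaultdict(list)
    for platform, month, rating in records:
        buckets[(platform, month)].append(rating)
    return {key: round(mean(values), 2) for key, values in sorted(buckets.items())}

print(experience_trend(responses))
# -> {('tiktok', '2023-09'): 4.0, ('tiktok', '2023-10'): 4.0,
#     ('twitter', '2023-09'): 3.0, ('twitter', '2023-10'): 2.0}
```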

Justin Hendrix:

So if listeners are looking for this on the internet, of course, you can find it in the show notes. Thank you so much for telling us more about it.

Ravi Iyer:

My pleasure. Glad to be here.
