
The FTC and Social Media Regulation

Justin Hendrix / Mar 24, 2022
The Federal Trade Commission, Washington DC.

The United States Congress has before it dozens of bills intended to rein in social media platforms such as Facebook, Instagram, YouTube, and Twitter. This raft of proposed legislation is in response to various harms that have come to light over the past few years, including dangers to democracy, harassment and hate speech, concerns over safety (especially for children), and various ways the platforms reinforce inequities and permit discrimination.

One agency in the federal government arguably has the power to take action on these issues with its current authority: the Federal Trade Commission, or FTC. But there are a variety of legislative proposals that would clarify the FTC's role with regard to social media, and even provide it with substantial new resources to police the harms these massive companies produce.

The NYU Center for Business and Human Rights, with which I have collaborated in the past, last month produced a substantial report detailing principles and policy goals intended to clarify the debate in Congress and shape an agenda for the FTC. It recommends that Congress direct the FTC to oversee the social media industry under the consumer protection authority that the agency already exercises in its regulation of other industries.

To learn more about the report and its recommendations, I spoke to Paul Barrett, the Center’s Deputy Director. Paul joined the Center in September 2017 after working for more than three decades as a journalist and author focusing on the intersection of business, law, and society.

What follows is a lightly edited transcript of our discussion. Audio of this conversation is available via your favorite podcast service.

Justin Hendrix:

Paul, you've just published, with Lily Warnke, Enhancing the FTC's Consumer Protection Authority to Regulate Social Media Companies, a white paper on what more the FTC can do with regard to social media. Why did you publish this now?

Paul Barrett:

In connection with a series of papers that I've done over the last several years, we've made recommendations as to how the problems that we at the Center for Business and Human Rights have identified could be addressed and what the remedies might be. We started out talking almost exclusively about how the companies needed to regulate themselves more vigilantly. But over time, we came to realize that this just wasn't happening with sufficient vigor, that we as a country would need to have the federal government step up and provide oversight for the social media industry, for the first time.

It is anomalous that the social media industry does not have any kind of sustained regulation or oversight from the federal government. So, we began to introduce the idea that perhaps the Federal Trade Commission could do something, but we really hadn't fleshed that out.

We decided that it was really incumbent upon us to be a little bit more specific about the principles that lie behind this notion that the Federal Trade Commission, through its consumer protection authority, could provide useful oversight. So, we decided to do this paper and to release it at a time when many different proposals are being considered by Congress.

We hope that this paper provides a kind of guide to how to think about these proposals, and perhaps helps shape, from the many divergent bills that already exist, a vehicle that could make some real progress in this area.

Justin Hendrix:

You set out about a half dozen general principles that are, I guess, constraints for the policy proposals. Can you hit a couple of the highlights of those?

Paul Barrett:

Sure. The central idea here is that the FTC already has, and has had for generations, the authority to protect consumers from unfair and deceptive acts and practices in commerce. The notion here is that that authority can appropriately be enhanced to hold social media companies accountable: so that they are not deceptive, so that they reveal how they do their business and how their practices and procedures affect users. And so that they're not unfair, in the sense that they follow through on the promises they're already making to their users, in the same way that a car company promises that there's an engine in the automobile. If you buy the automobile and you find that there's no engine in it, well, that's a little disappointing, to say the least. If you sign up for a social media platform and it describes, at great length, how it protects you from exposure to unwelcome forms of content, whether it's hate speech or misinformation about vaccines or incitement to political violence or porn, and then you find that you and others are actually exposed to that type of content, well, that's a problem.

It's not clear what the individual person can do in that situation, other than stop using the social media service altogether. So, consumer protection seems to be a vehicle for getting at some of the widely felt problems connected to social media. However, as I try to be very blunt about in this paper, this approach requires one to think through very carefully how federal involvement with social media will avoid First Amendment violations.

The First Amendment, as we all know, protects against government interference with free speech. Facebook, if it chooses to curate its platform and chooses to host certain content but not other content, is perfectly free to do that. The First Amendment has nothing to say about that. However, if the government were to intervene and tell Facebook what it should or shouldn't have on its platform, that would be a big problem. We can't have that. That's a core sin within our democratic system, combining the power of government with the power to censor. Our response to that is to try to craft our proposal as one that is entirely procedural and not substantive.

So, we suggest that the FTC can require that, if social media platforms offer content moderation systems and describe their policies and enforcement practices, they be held to the promises they make. But the federal government is not going to, under our approach, prescribe policies; in other words, say what types of content should be on a given platform. It certainly isn't going to get anywhere close to making individual content decisions, deciding that this tweet or that post or the video over there needs to come down or be kept up. That's how we attempt to wrestle with it. Those are the big principles behind the policy recommendations we're making.

Justin Hendrix:

Just quickly on the FTC itself, I mean, we do have, of course, a new FTC chairwoman in Lina Khan, and presumably a stance at the FTC that may be a little more activist than we might have seen in the past. Are there any stirrings that you see towards any of these particular policy prescriptions, or any activity that you're already aware of, in the direction of your recommendations?

Paul Barrett:

Stirrings, I think, is the way to put it, but dramatic actions, no. As you say, we now have a Democratic-controlled FTC. It's an independent agency, but obviously Lina Khan, the chair, was appointed by President Biden.

She is well known for being a proponent of strong government regulation, but almost exclusively in the antitrust area, the competition area. That's where her expertise lies. She is pursuing, presumably very vigorously, antitrust actions concerning social media, chiefly the monopolization lawsuit pending against Facebook. What is going on on the consumer protection side is a little less clear, at least to me, in terms of whether they are setting out in any new directions. In theory, some if perhaps not all of what we are proposing could be undertaken now. The consumer protection laws exist. The FTC has the prescribed powers to investigate and ultimately enforce against violations in this area.

But I think it's really necessary, just given political realities and the history of the FTC... which is to say, an agency that has really been beaten down, both in terms of resources and in terms of court decisions in recent decades, that it gets basically a shot in the arm from Congress in the form of fresh legislation that explicitly says to the FTC: your consumer protection authority includes the ability to provide oversight of this relatively new industry, where sustained oversight is lacking. I think with that kind of wind in its sails, the FTC could conceivably move forward. I think it's very unlikely that they're going to take any type of broad, meaningful action absent that kind of legislative authorization.

Justin Hendrix:

The paper really does synthesize, in some cases, legislation that's already been put forward. It also makes recommendations for some new ideas that you think should find their way into legislation. Can you just summarize some of the key proposals, as you see them?

Paul Barrett:

Sure. The way you describe it is entirely accurate. The paper really divides into two broad areas. The first has to do with the FTC presiding over a much broader effort than any that exists now to require the social media industry to disclose how its content moderation systems work and what kinds of data go into the algorithms that rank and recommend and remove content, and to make that data available both to the public and to qualified researchers. You would not necessarily make all of the data entirely public. There may be information that can't be released that way because of user privacy concerns. There may be aspects of how certain software is designed that are legitimately proprietary intellectual property. But a lot of this information, I think, could be released publicly. That would give users, I think for the first time, an understanding of why they see what they see, why and how certain content spreads very widely and goes viral, and what mechanisms the platforms are using to decide what kinds of content get elevated, promoted, amplified.

Once that's made public, I think there would be much more pressure on the platforms to actually justify what they're doing, and if they can't justify it, to consider changing what they're doing. A good example might be the prioritization of user engagement, which is kind of the foundation of how social media platforms operate and how they were designed initially: to put in front of people content that will keep them on the site and keep them interacting, make them like and share and comment on things. But unfortunately, as we've seen, the kind of content that has that sort of engagement pop to it is often very sensational content that provokes emotions, fear, indignation and so forth. Out of that, you end up with the kind of harm that we have observed, whether it's hate speech or the promotion of political violence. So, a big dose of transparency. And in that regard, as you said, I'm really just digesting and synthesizing ideas that have been proposed in various pieces of legislation, trying to pull those together and describe how they could be made as ambitious as possible.

The second big area, which I think is a little bit beyond what you see in the pending bills, has to do with the FTC requiring that platforms that have content moderation systems deliver on what they're promising. In other words, if they're telling users that they approach content moderation with a certain degree of vigor, accomplishing certain thresholds of removing content that they've defined as being inappropriate for their sites, that they actually do that.

That they have clearly articulated the policies they're instituting, made clear how they're enforcing those policies, and justified the enforcement decisions they're making. And making sure also, as a matter of due process, that there are clear, open, and effective mechanisms for users who object to particular content decisions to appeal them.

I would not have them appeal those decisions to the government. The government is not sitting as an appellate court, second-guessing particular decisions. It's making sure that the system exists as the company has promised it exists. That's not entirely different from, say, a business that says to you, "We have thorough cybersecurity here. If you give us your credit card and possibly other personal data, we will protect that and no one else will see it." If it turns out that the company has made those promises, and presumably consumers have purchased its services under that assumption, but the protection is not there and in fact the credit card information is vulnerable and has actually gotten into the hands of identity thieves or Russian cyber punks or whoever, that's a problem. That's a situation that the FTC can take action on.

In fact, the FTC has taken action on just such cases on many occasions over the years. By analogy, I'm reasoning that it's not an intolerable jump to move from that type of enforcement case to making sure that Facebook or YouTube is accountable for the promises they've made to their users.

Justin Hendrix:

You also make some recommendations about how to resource the FTC differently to deal with these platforms, including one particular recommendation that, I understand, has already made its way into proposed legislation released right around the same time as the paper, I believe.

Paul Barrett:

Yes, that's right. Representatives Lori Trahan and Adam Schiff have just in recent days introduced a bill called the Digital Services Oversight and Safety Act, which I think is a very promising piece of legislation that captures a number of the aspects of what I'm advocating and goes into a great deal more detail.

One of the central features of that bill, and one that I am enthusiastic about, is the creation of a dedicated office or bureau within the FTC that would be specifically tasked with overseeing social media companies and possibly certain other digital companies, and that would have a substantial staff.

The Trahan-Schiff bill refers to a staff of 500, and even breaks it down into certain categories of expertise, along with a substantial budget of some $500 million annually. All of which looks, again, very promising; I think they've done a very serious-minded job looking into that.

I think a move like that is going to be necessary. Going back to the question we were talking about earlier, I don't think the FTC, as currently structured, is likely to be able to mount a sustained effort to oversee this very complicated industry, unless it has those kinds of resources and has the mandate from Capitol Hill to set out in this direction.

Yeah. I'm very glad that the legislation seems to move in parallel with the proposals that I've made.

I want to make it clear that I did not inspire them in any regard. It's more of a coincidence, a happy coincidence, than anything I deserve credit for.

Justin Hendrix:

The Trahan-Schiff bill goes into a great deal of detail, in fact, about what this oversight entity, this new bureau, should look like, even going down to naming the types of expertise and the numbers of experts it would require to do its job.

Paul Barrett:

I think it's ambitious in that regard, and I think that's laudable. Of course, one has to acknowledge that there will likely be some resistance to adding muscle to the FTC, from the Republican side of the aisle, but that's going to be true with almost any regulatory proposal, pertaining to almost any industry. So, you just have to try to wrestle your way forward.

I think, particularly on the topic of overseeing a really systematic program of disclosures and research, done by certified academic researchers, the Trahan-Schiff bill has a lot of really good ideas.

So do several other pieces of legislation, frankly, that aim in very much the same direction, even if they have slightly different methods or procedures in mind.

Justin Hendrix:

You do make some comment as well on the proposals to remove liability shields: the Section 230 carve-outs, essentially, around particularly harmful content. Which of those are you enthusiastic about? Which ones are you less so? Those are, of course, the more controversial proposals that have been put forward in Congress, and usually the ones that draw the most critique.

Paul Barrett:

Right. That's exactly right. The debate over how to rein in social media has, over the last four or five years now, revolved, in an almost eccentric way, around the question of the future of Section 230, which is, of course, a provision that protects social media platforms from being sued in connection with content that their users put on the platform, and for decisions that the platforms make in terms of taking material down or not taking material down.

The argument has been over to what degree Section 230 provides too much of a wall behind which the social media companies can hide from accountability.

But it's accountability in the form of lawsuits. So in fact, while I'm not dismissing it as an irrelevant or unimportant question, reforming Section 230 does not lead one to any kind of systematic government oversight or for that matter, systematic oversight by anybody.

It potentially exposes companies to liability for certain categories of lawsuits that, right now, they are for the most part shielded from.

So, I see reforming Section 230 as an important issue, but as an adjunct to the approaches we've been talking about so far, an adjunct to enhancing consumer protection authority.

I have two basic thoughts on it. One is that Congress should make clear that, if the FTC is going to get a little bit more energetic in this area, its enforcement actions will not be blocked by Section 230. So there will be a clear path for the FTC to act, if it feels it's justified in acting, under the consumer protection law.

Second, as you said, there are a number of proposals for carving out exceptions to Section 230 that would, stated in an affirmative sense, allow lawsuits in connection with certain categories of harmful content to move forward against the platforms.

So, you could have a carve-out for lawsuits that say, the content on platform X contributed to material support for terrorism. But since we can't get our hands on the terrorist bad guys, we're going to sue the platform instead.

Now we have an exception to Section 230 that allows that lawsuit to go forward. Not that the lawsuit would necessarily prevail, but it could get on track. My view of that is that, if Congress can come to some type of consensus on two or three categories of harmful content where lawmakers believe they need to create an extra incentive for the companies themselves to be vigilant about that kind of problematic material, then they should do that.

I mean, I don't think that solves the overarching problem, but if there's material related to terrorism on platforms, that could be one example. If hate speech is what you're concerned about, that could be another legitimate example. Exactly which carve-outs are the best ones is, in some ways, a somewhat arbitrary choice.

I think that finally, on this score, there is legislation, several bills actually, that talk about reforming Section 230 but train their attention on content that the platform has algorithmically amplified. In other words, the platform could be held liable for a certain category of harmful content, but only if the platform had recommended or otherwise amplified the content, as opposed to the content merely residing on the platform.

I think that's a very reasonable limitation on the adjustment of Section 230, because what you really want the platform to do is be sort of self-conscious and aware of what it is doing to amplify harmful material. The fact that some no-goodnik may have posted something that the platform happens not to have caught initially, and that it then does some harm, seems like a lesser problem than the platform actually intensifying the problem by taking a piece of harmful content and blasting it out into everybody's feed.

Justin Hendrix:

Of course, some of the bills that do propose carve-outs for Section 230 also come with other problematic bits and pieces, which make them maybe harder to consider.

Paul Barrett:

That's very true. That's at least in part a reflection of the fact that there are very different motivations behind the 230 bills, to categorize them very crudely.

You have Democratic-sponsored bills that are motivated by a concern that the platforms are not taking down enough dangerous material.

Then you have Republican-sponsored bills that are motivated by the perception, anyway, that the platforms are biased against conservatives and are taking down too much stuff, so more content should be left up.

In large part because of the clash of those motivations, you have the kind of demolition derby of Section 230 bills that we've witnessed over recent years. I'm not sure whether Congress is going to be able to sort through all of that and come up with something that makes sense. Although I think it is possible that we could see a very narrow bipartisan bill that creates an exception to the Section 230 liability shield in connection with something like child pornography.

There's a bill in the Senate that has actually made it out of the Senate Judiciary Committee that focuses on that. And there, I think you can find bipartisan agreement.

Justin Hendrix:

Paul, is there anything that didn't make it into the paper that you find yourself thinking about when you think about the Federal Trade Commission? I mean, you have some focus on some of the civil rights concerns about the platforms. There's some very obvious discrimination that goes on on these platforms, with regard to their advertising in particular, and other things that have resulted in very straightforward discrimination against consumers.

Paul Barrett:

Yep.

Justin Hendrix:

If you had another couple of months to dig in, what would you focus on?

Paul Barrett:

Well, yeah, I'm not sure I would do that much better a job with another couple of months. I may have shown you the best I could do here, but on the subject of discrimination in particular, of course, you're absolutely right.

For years, there have been problems on the very popular sites, Facebook and others, in areas like housing, where people posting information about housing opportunities do so in a discriminatory way or filter applications in a discriminatory way.

Some of that's been well documented. And Section 230 has, in fact, been a hindrance to ridding platforms of those problems, because fair housing advocates have had to go after the individuals who post the allegedly discriminatory material, as opposed to going after Facebook and saying, "You, Facebook, have to be held accountable for all of this on your platform. Figure out a way to get it done."

It is conceivable that there would be a very good basis for exposing Facebook to that type of liability and breaking through Section 230, for the purposes of discrimination-related civil claims. That would be, it seems to me, a reasonable step forward.

The one other thought I have, which I mentioned briefly at the beginning of the paper but which I think is really important to remember, particularly once you've gotten into all the details, where things can get a little thick and thorny, is that none of this will ultimately work unless the government activity in fact incentivizes greater industry vigilance.

It's just really important to remember that we cannot propose that the government will police social media platforms, whether it's today's social media platforms in two dimensions, or tomorrow's, with the metaverse and all of that.

In our country, the government does not go after people who say nasty things on the street corner. Unless you're threatening somebody with violence, you're free to say damaging things. You're free to say that vaccines don't work and you shouldn't get them, even in the middle of a lethal pandemic and so forth.

The type of oversight that I'm proposing, and the type of oversight that the various bills pending in Congress are proposing, all of those, in the end, are at most creating incentives for the companies themselves ultimately to behave better: to be more vigilant, to take more responsibility, and possibly to reconsider and rebalance the relationship between their profit motive, what they need to do to get the advertising dollars that they want, and the public interest in the safety of their users and the potential harms to democracy from the widespread use of their services.

We have seen those bad side effects. That's why we're talking about this. Many people have called on the companies to reform themselves. The companies have frequently done a kind of soft-shoe dance: a step forward, then "I'm not going to go there," and all kinds of excuses and temporizing.

That's why I think government involvement is important. But in the end, only these companies have either the technological capacity or, in a constitutional sense, the authority to say what goes on on their platforms. I think that's a really important point.

Justin Hendrix:

I don't know if we'll ever quite get there, with these current platforms, but-

Paul Barrett:

May not. I think these problems are only going to get more complicated with the new generation of platforms.

I wrote an op-ed piece about this, to go along with this paper, in which I pointed out that the strongest argument for the government building capacity now to oversee social media is the next form of social media, the metaverse, which is going to be even more complicated to supervise. I mean, you're going to have avatars talking to each other. Can artificial intelligence listen to them? Will it be able to detect problematic conversation of that sort?

Now, having automated systems keep track of what's going on in text exchanges, okay, they've figured that out, and to a lesser degree with images. But what about the communication in this new lifelike metaverse, where these expressions are going to be fleeting?

It seems to me, that's even more of a challenge. If you're worried about people planning a January 6th-style insurrection on social media today, it seems to me, the danger's even greater tomorrow. Who will be looking over their shoulder, and how?

Justin Hendrix:

Paul, thank you very much.

Paul Barrett:

My pleasure. Thank you.
