
The Sunday Show: A Conversation with Mary Anne Franks

Justin Hendrix / Mar 13, 2022

Audio of this conversation is available via your favorite podcast service.

During a Senate Committee on Homeland Security and Governmental Affairs hearing on Social Media Platforms and the Amplification of Domestic Extremism & Other Harmful Content on Thursday, October 28, 2021, Senator Ron Johnson (R-WI) aired grievances about social media appearing to him to be biased towards liberal interests. When he finally formulated a question, Dr. Mary Anne Franks, a Professor of Law at the University of Miami, answered him. I tweeted a clip of that moment, which went viral in part because Senator Johnson appeared to sheepishly leave the room following that response:

In a hearing in the Senate Committee on Homeland Security & Gov Affairs today, @ma_franks had a sharp response to Sen. Ron Johnson, who pushed the idea that social media moderation favors the left. She noted the asymmetry of the problem on the right as it relates to extremism. pic.twitter.com/nsPrxMplMV

— Justin Hendrix (@justinhendrix) October 28, 2021

Franks is Professor of Law and Michael R. Klein Distinguished Scholar Chair at the University of Miami School of Law, and is an expert on the intersection of civil rights and technology. She is an Affiliated Faculty member of the University of Miami Department of Philosophy and an Affiliate Fellow of the Yale Law School Information Society Project, and the author of an award-winning book, The Cult of the Constitution: Our Deadly Devotion to Guns and Free Speech, published by Stanford University Press in 2019. In addition to her academic responsibilities, she is President and Legislative & Tech Policy Director of the Cyber Civil Rights Initiative, a nonprofit organization that combats online abuse and discrimination. In 2013, she drafted a model criminal statute on nonconsensual pornography, or "revenge porn," which has served as the template for multiple state laws and for proposed federal legislation to tackle the issue.

This month, I had a chance to catch up with her about her ideas, her work, and her critics.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

So Mary Anne, I just had the opportunity to listen to a speech you gave at UNC, where you took folks through your thinking. And I was really compelled by the diagnosis you gave of how free speech has come to be separated as a concept from its original intent in the United States, and how tech firms have played a role in that. Can you lay that out for the listener, how you see where we've got to in this moment, 20 years into the social media revolution?

Dr. Mary Anne Franks:

Sure. So the gist of that talk was focusing on what I sometimes call the commodification of free speech. And there's a couple of different aspects of it that I think are important. One is, as you say, there's a kind of underlying foundational sense of why we have the First Amendment, at least in the modern sense. And it's usually tied to qualities and values that are important to a democratic society, usually cited as truth, autonomy, and democracy, or some mixture of those. And what I find troubling about the way that free speech practice and theory has developed is that it's really become unmoored from those foundational principles, and becomes more about speech for its own sake and a version of speech that is commodified in the sense that it seems like the more you produce of it, no matter what the quality is, the better things are.

And then you have the intersection of that view with an internet dominated by a handful of really powerful corporations who clearly have their own views as to the profitability of free speech, and are more than happy, or have been more than happy up until maybe very recently, to promote their product as being something like free speech. And how many of us have really bought into that? And so what I was focusing on in that talk is that aspect of it, the idea that we have allowed major corporations that we should normally be suspicious of, because they are after all for-profit entities, to colonize our imagination when it comes to free speech. That free speech doesn't feel free unless it happens on a social media platform. And if anything obstructs our path towards expressing ourselves on a social media platform, we think that is censorship. And I find the place that we have arrived at really very troubling.

Justin Hendrix:

So in your book, I think you refer to this as the "cult of the internet."

Dr. Mary Anne Franks:

Yes. I sometimes refer to it as the cult of the internet because it does seem to have those qualities. When I say that, I mean the kind of dependency that all of us now have on social media in particular, but also, I think importantly, on Google Search and other kinds of products related to the internet. They develop or encourage a kind of dependence that I think is very emotional, very much about people's passionate attachments to certain ideas and about their own self-identity, and that has cult-like properties. And it's the same way that people have a cult-like veneration of the First Amendment.

Justin Hendrix:

So the book, I should mention, is called The Cult of the Constitution, and in the book you're taking on the First Amendment and the Second Amendment in particular, born out of an idea around the dangers of constitutional fundamentalism. How do you think of that in relation to these concerns about speech?

Dr. Mary Anne Franks:

Well, the basic idea behind constitutional fundamentalism is that it's a lot like religious fundamentalism. That is, you have a document that is written by human beings, but is treated as though it were some kind of semi-divine document, as though it's divinely inspired. And because people have that kind of veneration for it, criticism of it becomes kind of heretical. And people's very self-interested interpretations of that document become a kind of dogma. So if you are powerful and you have a certain interpretation of the document that works for you, then you have a tendency to characterize anyone who disagrees with you as not just being someone who disagrees with you, but someone who hates the constitution or doesn't love free speech, or doesn't love this country.

And so constitutional fundamentalism is that quasi-religious attachment to the constitution, in particular the First and Second Amendments, which then merges, I think, with that kind of veneration we have for social media and the internet, partly because so much of the promise of the early days of the internet was that it was going to be the pathway to genuine freedom of expression. It was going to unlock all the potential of communication and truly democratic discourse. So I think all of those things got sort of merged together so that you have a kind of veneration for the First Amendment, you have veneration for the internet, and combined, you have this very fetishistic attachment to what many people think of as their identity as free speakers or as free people.

Justin Hendrix:

So in the US, we obviously have a lot of regard for our own constitution. There are a lot of folks out there who would look at even the title of a book called The Cult of the Constitution and find it abrasive, or feel that it grates against their sense of themselves as freedom-loving Americans. You've got a lot of critics. Folks like Jonathan Turley tweeted recently, "It is now an article of faith to demand censorship or speech regulation in the name of social justice. University of Miami's Mary Anne Franks has a simple solution and the Boston Globe"-- where you had published an op-ed-- "wants people to consider it. Just gut the First and Second amendments." Is that what you want?

Dr. Mary Anne Franks:

No. And one of the great ironies of how much blowback, I suppose, that particular op-ed received is how much it exposed that once people are really worked up about something in that kind of passionate sense, they don't read very well. And so what I think many people completely missed about the entire project in the Boston Globe was that this was a thought experiment. This was not an actual proposal to go and amend the First or Second Amendments; it was a thought exercise. And it's a way of trying to think around the problem, to come at it with fresh eyes. If we could think about what would really get us the most free speech for the most people, what would our First Amendment look like? And so I get that some people find even the idea of imagining something other than the First Amendment deeply painful and upsetting, but in many ways that sort of proved my point, that this kind of highly emotional, irrational reaction is the kind of thing that happens when people are that sensitive and that fundamentalist about a document.

Justin Hendrix:

So you have a similarly revisionist perspective on Section 230, and have drawn similar criticisms. Julian Sanchez tweeted, "Mary Anne Franks' dumping on 230 as a special protection for an 'industry', which is importantly misleading. It protects a category of conduct for businesses and users, not just social media companies." A lot of folks think you're wrong on Section 230.

Dr. Mary Anne Franks:

They do. And I do think that, again, speaks to the fact that there are disagreements people might have, and there are plenty of very smart, very well-informed people who disagree about how we should deal with Section 230 and what kinds of reform, if any, we should actually have. Disagreement is good. I mean, this is part of what will inevitably come out of a debate over a law that's had so much impact on all of our lives. But I do think, again, it's notable that there are certain criticisms of my position or the proposals I've made that seem to go beyond saying, "I don't agree with this," and go much more towards, "This person is terrible. This person is stupid. This person hates freedom."

And again, I think every time I see that reaction, I can't take that person seriously as a thinker because there are ways to disagree that are based on better arguments, that are based on different perspectives. But this kind of sweeping sense of, "Well, this person understands nothing. And there's no way any of this could possibly have any kind of plausibility." I just don't take that seriously as a kind of critique.

Justin Hendrix:

Some of your critics are quite harsh. At a recent congressional hearing, you testified alongside a number of individuals who have appeared on this podcast, including Ambassador Karen Kornbluh. And I was struck by some of the criticism that was levied against you and against her in real time on Twitter, by some folks who I thought should have known better. Is that just sort of, I don't know, standard at this point for you?

Dr. Mary Anne Franks:

I think this has always been true. I think what has changed, maybe let's say in the last few years, is that we've just seen so much confusion between being insulting and being sharp-- let's say that we're mistaking being insulting for being sharp. And there's so much validation for that on social media particularly. And women are always the favorite targets of this kind of critique, if we want to call it that. So I'm not surprised that there's much more of this kind of performative insulting, this performative, you know, "These people are the worst." As if we didn't have credentials, as if we didn't have perspectives. We might think that we have advanced beyond the place in society where we sort of dismiss people because of their gender, but I don't think we have. And I think the internet just reminds us of that every day: when it comes to women having opinions, it's never just the substance of their ideas that's being debated, it's always something else, and something quite frankly irrelevant to the matter at hand.

Justin Hendrix:

You also do have some critics-- who I think are maybe slightly better behaved-- from the left who worry about whether your critique takes into account concerns of groups such as those supporting sex workers that were harmed by SESTA-FOSTA, and activists concerned about the impact of Section 230 changes on organizing. How do you address those concerns?

Dr. Mary Anne Franks:

Well, I certainly think those are legitimate concerns. I'm always a bit confused when the conversation about SESTA-FOSTA, for instance, keeps coming back, because I was never a supporter of that legislation. And it's interesting that there seems to be this kind of tarring with a certain kind of brush. When you say you think Section 230 could be improved, there are some people whose go-to point is always, "Oh, look at SESTA and see what a disaster that is." But I don't think that proves very much, because SESTA is a very peculiar way of trying to reform Section 230. And so I would much rather have a discussion on the merits of what I'm actually proposing or what I think would be a good idea. And very much my viewpoint is informed by the work that I do with vulnerable communities, in particular women and people of color.

The work that we do at the Cyber Civil Rights Initiative is expressly about ensuring that technology does not erode the civil liberties and civil rights of marginalized groups. So I know that there are different views on how best to do that, but I certainly think that anyone who is aware of the work that my organization does and the work that I do knows that that is always my focus-- how is this going to impact the most vulnerable groups, how to be aware of things like unintended consequences of what seem like well-intentioned reforms. So I would say, yes, those are of course legitimate concerns, but they're the kinds of guide rails that I have been dealing with probably since I started in this work, and so I'm quite aware of them, but may just have different feelings, or different perspectives I should say, on how best to protect those rights.

Justin Hendrix:

So let's talk a little bit about some of the Section 230 proposals that are out there now that you do think of as potentially positive. Are there particular pieces of legislation that are attempting to kind of introduce or carve out certain liabilities that you think will move the ball forward in a positive way?

Dr. Mary Anne Franks:

I guess the caveat here would be that I don't think I've seen a Section 230 bill that I have been really enthusiastic about across the board. And I could have missed a few, because there have been a lot. And I do think it's important always to separate, for myself, the categories: is this a bill that's getting at the actual problem of Section 230, which in my view lies entirely in (c)(1), the provision that has to do with leaving things up, versus the much more conservative, Republican-led fight against (c)(2), which is the provision that protects companies for taking things down? Within that realm of legitimate attempts to get at (c)(1) and the immunity there, I think that the SAFE TECH Act has some aspects that really are interesting and could change some of the conversations and some of the deadlock that we've seen before now.

And in particular, and this will sound, I guess, in some ways not surprising, the major thing that I like about the SAFE TECH Act is that it adopts a recommendation that I have made, which is that Section 230 be clarified so that its protections only apply to speech, as opposed to information, which is currently the word that is used in Section 230's operative provisions. This is one that I just think should be a kind of easy fix, because the very strongest arguments in favor of a broad Section 230 immunity only make sense if what we're talking about is really speech. We're trying to protect some underlying First Amendment value. But the way that Section 230 operates right now, with that use of the word information, a lot just gets assumed to be speech when we should really be having an open conversation about whether it's speech or whether it's conduct, or, even if it is speech, whether it's protected speech.

So a really simple and useful edit would be to take that word 'information' and replace it with 'speech,' which the SAFE TECH Act does. The other thing I think is really good about the SAFE TECH Act is that it's trying to get at some of the worst abuses of Section 230 through a series of exceptions. So it's got civil rights issues, it's got wrongful death claims, and it's doing a lot of carve-outs. Now, I am in sympathy with that intention, because I think that you can point to several categories of conduct and say, "That's really bad." And if you're going to make exceptions for sex trafficking, right, to say nothing of the original exceptions in Section 230, which are violations of federal criminal law and intellectual property law, why not have all these other exceptions too?

So as far as being principled goes, I think there's no reason to think that the list the SAFE TECH Act gives you is any less justifiable than the existing carve-outs for Section 230. My concern is that the carve-out approach is unwieldy. It's really difficult for anyone to understand on a first read, and difficult then for entities to conform their behavior to the dictates of that revision. And also I think that the number of exceptions really points to just how fundamentally flawed Section 230 is. And I'd rather get at the fundamental flaw than try to chase after categories that we think are particularly egregious.

Justin Hendrix:

Is there a kind of intermediary liability regime in the world that you admire, that you think is better than what we have here? Are you at all aware of or interested in what the Europeans are trying to do with the Digital Services Act? Is there a better path forward than what we've got?

Dr. Mary Anne Franks:

I can answer the last part first, which is to say, I think there must be better ways than what we've got. But as to which particular model is the best, I want to be a lot more up to speed on what it is that certain countries are doing, or Europe is doing, before I would give commentary about whether I think they are necessarily the best way forward. But I am fairly well convinced that Section 230 is one of the worst ways that we could be approaching this problem. I think what really gets lost in the debate sometimes is that the revisions are portrayed as, "Well, look at all the things that are going to happen that will be negative if you have this amendment." And that's true so far as it goes. There's no revision to Section 230 that isn't going to result in the loss of some speech, or at least a reduction in volume.

And the problem is when people think that ends the conversation, instead of saying, "Compared to what? What is happening right now?" There's this tendency to rely on slippery slope arguments in the Section 230 context that almost exactly mirror the slippery slope arguments in the First Amendment context generally. And what that ignores is two things. One is that slippery slope arguments are logical fallacies; we shouldn't rely on them. The other is, if you're worried about overcorrection of speech, you have to care about the balance on the other side, which is what happens when you undercorrect. And if you are simply going to ignore the cost that it exacts on the most vulnerable people-- organized campaigns against women, deepfakes, doxing, revenge porn, insurrections, terrorism.

If you're going to ignore all of that, what you're really saying is that we can't do any better than what we've got right now, that all of that has to be the way that it is, and getting worse with every day and with every new conflict. So are there better models? When you ask Europeans if they have freedom of speech, most of them think that they do, and yet they are much more restrictive when it comes to liability. The version of freedom of expression that many European countries have is much more sophisticated than the United States' version, because it recognizes that it's not free speech versus privacy or dignity or autonomy; rather, free speech relies on those concepts.

And if we look at objective markers of where those countries stand in relation to the United States in terms of the kind of indications of how democratic and how free they are, look at the press freedom reports. The United States is nowhere near the top 10, whereas Scandinavia is, and they have a much different approach and a nuanced and more sophisticated approach, which means also a willingness to be a little bit more restrictive about certain kinds of speech and a little bit more generous when it comes to assigning liability to intermediaries. So I think looking at those countries and trying to figure out how are they striking that balance between freedom of expression and privacy and dignity and all those other things, especially if they are retaining their sense of democracy better than we are, then we should be taking some notes.

Justin Hendrix:

So this is partly bound up in ideas around collective responsibility, or even how we connect law in the real world to law in cyberspace?

Dr. Mary Anne Franks:

Exactly that. Before one gets into the details about whether Section 230 does this or that, I think we should back up for a moment and think to ourselves-- we as average citizens-- about how things work in the physical world. That is, if we're talking about a murder, if we're talking about a robbery, if we're talking about a drug operation, the idea that the only people who are responsible are the people who pull the trigger, or the person who is at the very end of the transaction-- that they're the only ones who have any responsibility. Or even for someone who falls in a grocery store, or someone who commits sexual harassment against a fellow employee. The idea that the only person responsible is the person standing right there doing the last act goes against everything we know about law.

We know that in the law, we can assign responsibility according to people's states of mind and their contributions to a certain type of bad act. And that doesn't even have to be intentional. It's not just the person who wants to cause harm whom we hold accountable. We hold people accountable when they are just not paying attention. We hold them accountable when they kind of pay attention but disregard risks of harm because they're so focused on the benefit to themselves. So we punish people for reckless driving. We punish people for negligence in their stores if they don't clean up a spill that somebody could fall in. We would hold someone criminally responsible if we found out that they were aware that criminal activity was going on on their premises and they did nothing to stop it and even took a little kickback.

So we know this is true everywhere in the physical space. And once we remember that that's how it is everywhere else, that's when I think it should become clear to people how dysfunctional and odd Section 230 is for the internet, because it basically erases all of that and says, "Nobody is responsible for anything except for the last person to do the last bad thing completely intentionally." So I do think that part of that hypnotism, that religious effect that the internet has had on us has made us not realize just how bizarre that is. That is not the way the world generally operates, and it's not the way that the law generally operates.

Justin Hendrix:

One of the things that I am trying to follow and understand is the work of folks out there who are attempting to do empirical science around harms online-- network harms. We've seen some evidence of Facebook advancing this kind of thinking internally in leaked documents, for instance, following January 6th and its assessment of its responsibility for what happened that day at the US Capitol. There are other computational social scientists, people in computational linguistics and other fields, who are thinking about hate speech and incitement to violence. Do you think that empirical evidence will emerge that may eventually be used in court and that could change the picture here?

Dr. Mary Anne Franks:

I think it's certainly possible, because as you say, there's some really interesting and exciting research being done on the effects of what I think many people are invested in believing is speech, or protected speech. And the more you can show a close connection between the speech and the harm that is caused, the more likely it is that it's not going to get full protection. And that's been true from some of the classic categories that we've got in First Amendment law, all the way to new-- or new-ish, I should say-- tort claims that people can make about the intentional infliction of emotional distress or invasions of privacy. The law can adapt and draw some kind of accountability from this, because there's some kind of connection you can show.

Because one of the biggest questions in legal liability generally is the causation question. How can you actually show that something as abstract as speech contributed to a physical or other kind of harm? So I think the more research we have on that, the better. The concern I have is twofold. One is that we have a lot of existing research already on the harms that speech causes, and most of it has been ignored. We know that sexual harassment, including street harassment, causes certain types of harm, and we know the kinds of changes it causes women in particular to have to make in terms of how they dress, which route they take to work, whether or not they feel comfortable speaking back to the person who's harassing them. We know all this. This is clearly an impact on their speech, and it's not a good one.

It's an impact on their freedom of expression and mobility. We know that there's research showing that racial invective is correlated in minority communities with bad health outcomes, including high blood pressure and other forms of stress. So we haven't been... We, the powers that be, those who are making up their minds about doctrine, don't seem to care as much about those things as they care about the abstract focus on, "Oh, if we restrict speech, there's going to be all these chilling effects." Which, incidentally, is not an effect that has really got a lot of empirical basis, as Professor Leslie Kendrick and others have shown. So that's one concern: will people care?

The second concern is what kind of emotional or other types of harm are going to be used to show what kind of point? And I want to situate this in the context of the current battle that we're seeing in schools or the attack on schools over so-called critical race theory, over certain types of books, where you've got a very concerted agenda on the part of the right wing to say anything that makes a white child feel uncomfortable is something that should be stripped out of the schools entirely. And that's a really dangerous idea to say that we're going to weaponize emotions in this way. So we have to be very much on our guard that whatever evidence we look at for showing this type of harm, that we nonetheless keep in mind some kind of objective evaluation of that harm.

That sometimes bad emotions or feeling guilty is good, right? Because there's a larger question here about whether that's truly harmful or whether the more harmful thing is to pretend like it didn't happen. So we can't just let emotions themselves, or the feelings agenda kind of run away with the game here. So we need to make sure that our concept of harm is more sophisticated than that. It's not just about emotional impacts, but also situated in the larger context of some kind of harm to a value that a democratic society should hold dear.

Justin Hendrix:

So possibly pushing even further out into the future, a lot of folks are excited about the possibility that we'll move from the two-dimensional internet that we're currently all having such a great time in, to a three-dimensional environment-- dare I say, the metaverse. Are there going to be, in your view, differences in the way we handle speech and conduct in these virtual environments? And is this a well-theorized space? Are we ready to legislate, or are we ready to apply Section 230 in such a context?

Dr. Mary Anne Franks:

I think it's very telling that the very companies that are under fire right now for their practices in the non-virtual world are so eager to move on to the metaverse, so eager to move on to the next big thing. That is to say, part of the eagerness to say, "Look over there," is also to say, "Don't look over here," and to distract the general public with shiny objects so that you don't think about how badly things are going outside the metaverse. Because this is the way the tech industry has tended to innovate. It's tended to move on from bad things by offering something else that seems really flashy and seductive. And in many cases, we're talking about the major companies currently holding dominion offering it for free, so that you can get people hooked on it early and addicted to it, so that everybody suffers from a collective endowment effect when you start to try to take it away, right? That's very clever.

But yes, I think the first question needs to be, if we haven't addressed any of the really drastic, irreparable harms that have come about from just the regular internet, what do we think is going to happen when we start talking about the metaverse? I say start talking-- people have been using virtual reality and augmented reality for some time, but it's obviously become more of the game here in the last few years. But yes, I think we can expect, perfectly predictably, that things are going to go even worse there, because exactly as you were saying before about the research into harms, we've already seen that one of the major problems with the metaverse, or with virtual reality, is that the harms that are experienced there, like all the other things that one experiences in virtual reality, seem more real than reality in some ways.

The very thing that is promoted to the public as being wonderful about the technology-- that you'll feel like you're actually there-- sounds wonderful if you're talking about a hike in the mountains or visiting a major art museum. But the same thing is true when people experience violence. The same thing is true when people experience being harassed or spat on or sexually assaulted, and we're not contending with any of those things. It's the same tech industry dogma that says just focus on the positive, and when the negative comes out, then apologize for how you didn't think about that in the first go-round and say you will do something to address it. And we've already seen it happen with the metaverse, right? One of the first women to experience it says that she was sexually assaulted.

And then Meta says, "Oh, we hadn't thought about that. So now we're going to have a safety zone that's going to be the default." But this is just telling you everything you need to know, which is that either they haven't thought about it, or they thought about it and they thought, "It's better for us to pretend like this doesn't exist, and then we will catch up to it when someone points it out." And then we're stuck in the same loop and it's going to be worse because as a matter of cognitive impact, research has already shown that people don't respond to what they see in virtual reality as things they've seen, they experience them as things they've actually experienced. And that's going to have really, really alarming consequences.

And I don't know if the law is going to know what to do with that, because as speech and conduct become increasingly intertwined, that is an opportunity for us to rethink certain aspects of the First Amendment or to apply them in new ways. But it's also an opportunity for people to just go along with the expansion that we've seen in the last 20 years of what speech is, use the term speech for everything, let it kind of hide an abundance of sins in terms of what we would normally have considered to be conduct, and just allow greater and greater harms to happen without any real restraint.

Justin Hendrix:

I can imagine a court deciding that movements or motion or activity inside a virtual platform is somehow software or the effect of software, and not, as you say, real conduct. I can certainly imagine that.

Dr. Mary Anne Franks:

Yeah, and that's what I worry about, that we're going to have this very narrow, very naive version of the speech-conduct distinction, that anything that isn't physical is going to be perceived as speech. And we're already stuck in that to some extent, and there hasn't seemed to be much appetite for reining that in. And so we're really going to have to rethink a lot of things about how much we have associated real harm, that is to say, proscribable harm, with physical impact. And I think there's a way to do it, but I do think it's a matter of orienting courts and legislators and the general public towards what we know to be true. For instance, the threats doctrine: the reason why we have laws against threats, even though it's a very narrow doctrine, and laws against assault as well, is not because of the physical impact, right?

Because the very definition of a threat is that the threatened physical harm, or whatever it is, has not happened yet. When it comes to assault, it's not the battery, it's not the impact of what someone has done, it's the putting someone in apprehension that they're going to be hurt. We know, based on our intuitions and on our experience, that those things are real harms. If someone threatens to kill you, if someone is making a move as if to strike you, if someone is waving a gun in your face, nothing physical has happened yet, you could say, but those things are harms. And those are the kinds of things that we should be, and have always been, able to limit. And so we could take this as an opportunity to be more sophisticated in our assessment of that distinction between speech and conduct. I don't know if we're going to be sophisticated about it.

Justin Hendrix:

So looking out a decade from now, 20 years from now, are you optimistic that we'll sort through some of these things? That the United States can do some of the hard work that you suggest in your book on the Constitution? Is there a route towards a better future, or are we likely to continue on the trajectory we're on in this sort of hyper-capitalist tech environment?

Dr. Mary Anne Franks:

I don't know if optimism is the right word for what I think. That is, I always want to believe it's possible that things can get better. What I wonder about is how bad things have to get before people finally turn to doing the right thing, because there's nothing left. And my fear very much is that whatever it's going to take to trigger that kind of actual critical, sophisticated reevaluation of certain dearly held beliefs, something really terrible is going to have to happen. And I say really terrible in the wake of, you know, we've had the Trump presidency, we've had the insurrection, we see what's going on in Ukraine. It's not as though we're not seeing the consequences of this every day, and that hasn't been enough.

And that I think is what causes me some alarm, that none of these things have seemed like proof. If anything, people take the example of the Trump administration and say, "Ah, you see, that's why we have to be resolute in our views of the First Amendment and the internet, because of Trump." And that makes no sense. Or I should say, it completely avoids the question of: don't you think that the Trump presidency was in part a creation of social media? And we know that it was to some degree, and we have to take a hard look at that. We can't constantly sort of live through those things, barely survive them, and say, "Oh yeah, it was the First Amendment that saved us." Because it wasn't. And a really elastic, ever-expanding view of the freedom that we think we're getting from a certain kind of First Amendment view and a certain type of commodified, capitalistic free speech online helped contribute to the destruction of democracy in our own country.

So are we going to take that seriously or not? I think it much depends on how much we take seriously the experience of those people who have been saying for 20 years, "You have to address this issue." It has become very fashionable now to talk about Section 230 and its problems, and to say maybe the First Amendment needs some updating. And I say that because everyone seems to want their own kind of reworking of this. That is, you now have Republicans who are thinking the Fairness Doctrine was a great idea or that the State Action Doctrine needs to go by the wayside. All very interesting.

But we should be listening to those people who have been saying from the beginning, "There are fundamental issues here about inequality and about the violation of civil rights." And I emphasize civil rights over civil liberties here because those who I think have been wisest in this field care about the civil rights implications for groups, for people who have been historically marginalized or exploited, and don't just focus on individuals and whether or not the individual feels free to say whatever the individual wants. Paying attention to their experiences and their knowledge about this, I think, is going to be crucial.

Justin Hendrix:

I guess a final question for you. On some level, is there a world where all of this focus on free speech and the social media context really does distract us from actual suppression of speech and really create an opening for authoritarians all around the world?

Dr. Mary Anne Franks:

Yes, I am very concerned about how much the debate over this is happening, just to focus on the US, at a moment where we're seeing actual censorship. You've quoted a few people who accuse me of being a censor, which is a fascinating thing-- that we've gotten to the point where you label a member of the public or a scholar as a censor, as opposed to looking at and criticizing the actual censorship by government officials that's going on every single day. We are in the middle of a concerted attack on schools, on academic freedom, by the government-- by the government saying, "You can't teach certain ideas. You can't teach accurate history. You can't have these books in your library." We have states that are full-on trying to force social media to become state propaganda outlets, to actually say Twitter can't ban people the way they want to.

You've got Florida, you've got Texas, you've got all these places basically trying to allow the government to take control over the means of communication. So you have all of this happening at a moment when other people are suggesting that the harms caused by online abuse, for instance, could be better addressed. And yet it's the latter who are actually being attacked as censors. So it is this kind of "don't look over here" game, and we're forgetting what actual censorship looks like. And I get the fact that people want to say, "Look, powerful social media companies as private entities are doing a lot to affect our discourse, maybe more powerful in some ways than the government." Sure. But the fact remains the government has power that no private entity has. It is not the same thing when the government is trying to commandeer social media as when social media is trying to enforce its own terms and conditions.

Those are not equivalent acts. And I worry about the dilution of the concept of censorship, applying it to anyone who says something we don't like or is suggesting something that we're not on board with, and not reserving it in some ways for the thing that the First Amendment was supposed to be concerned about, which is mostly government control of speech and government punishment of people for speech. So, yes, I worry about the fact that it seems like a look-over-there kind of problem, and I worry, just to bring it back to the question about the commodification of free speech, about all the things that the online version of free speech is depriving us of in terms of our actual moments of communication outside of social media: the conversations we have in classrooms, the kinds of things that we could say to each other in our homes, the kinds of things that happen in libraries, the kinds of things that happen in actual public squares, not social media squares.

The more we let the internet dominate the concept of free speech, the more it becomes a vampire on everything else-- journalism, mainstream outlets, education. All of those things are getting leached out while we're focusing on what you can say online. So I think, on both ends, we are ignoring real censorship at our peril, and also contributing to this really impoverished notion of free speech that is purely performative and purely limited to social media spaces.

Justin Hendrix:

Dr. Mary Anne Franks, thank you for speaking to me about all this.

Dr. Mary Anne Franks:

Thank you for having me.
