Evaluating Cries of Censorship on Capitol Hill

Justin Hendrix / Feb 12, 2023

Audio of this conversation is available via your favorite podcast service.

Elon Musk, the platform’s new owner, says that Twitter is "both a social media company and a crime scene." The crime he appears most concerned about is purported censorship by the tech firms, which he says occurs at the U.S. government’s direction. Musk, who claims he is leading a “revolution” against such practices, has given a small number of people access to internal Twitter documents, the so-called Twitter Files, including emails and internal message board communications that, in their selective release, show executives at the firm engaging with politicians and federal agencies on a range of issues, from COVID-19 to election disinformation.

This week, there were two hearings in the House of Representatives on this subject, including a Committee on Oversight and Accountability hearing titled “Protecting Speech from Government Interference and Social Media Bias, Part 1: Twitter’s Role in Suppressing the Biden Laptop Story,” and a hearing of the new Select Subcommittee on the Weaponization of the Federal Government that was intended to “discuss the politicization of the FBI and DOJ and attacks on American civil liberties.”

If we look past the conspiracy theories and legal gibberish, is there any 'there' there? Should we pursue reforms and require greater accountability and transparency around the interaction between platforms and government?

In this episode, we hear from three experts: Mike Masnick, Shoshana Weissmann, and Darren Linvill.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

So I am so pleased to have the three of you here today. We're going to talk a little bit about government relationships with social media platforms, and the extent to which we should be concerned about that. Of course, an ongoing topic of discussion this week on Capitol Hill and very much in the news media. I was struck by the words of Jonathan Turley in his opening to the hearing yesterday: "The Twitter files raise serious questions of whether the United States government is now a partner in what may be the largest censorship system in our history." So I think I'll start there. Largest censorship system in our history: what do we think? Maybe Mike Masnick, I'll throw it to you first.

Mike Masnick:

I think that's not just laughable but backwards. I think that Twitter and the rest of the internet and social media and all of these tools have allowed for more speech than ever before in our history, and more people to speak out, and more people to hear the speech of others than ever before in history. And the Twitter files showed absolutely nothing of the sort. I'm not sure which Twitter files Turley is reading, but they're not the actual ones that have been presented. What it showed was a very, very standard, actually incredibly competent and thoughtful situation in which a company that hosts user speech has to decide what it's going to allow and how to enforce its own rules, which are clearly First Amendment protected editorial functions. And there was nothing in the Twitter files to date. There could be more at some other point, but there's been nothing in the Twitter files to date that I think shows anything that goes anywhere near the level of censorship, with the government requiring a site or compelling a private entity to take down speech.

There's been no indication of that. The things that have been shown in terms of the FBI involvement tended to be what I would think of as mostly reasonable: sending over accounts and saying, "Do these violate your policies?" and being very explicit, in fact, in the emails that you can do with this what you want. We have spotted these things, we think they may violate your policies, but it is up to you. And Twitter then reviewing, and sometimes listening to them and sometimes not, and not facing any consequences for either side of that. That is not censorship, that is just general notification, and Twitter evaluating it under its own policies, which again, is perfectly First Amendment protected speech. There's no evidence in the Twitter files of anything coming even close to what I think most people would define as actual censorship, and therefore I have no idea where Turley's coming from, other than that he's just very, very confused about what the Twitter files actually showed.

Justin Hendrix:

So I want to come back to perhaps that argument and other people who have made that argument, but Shoshana maybe come to you. You just hosted a great event at R Street Institute that I thought was filled with very reasonable perspectives on this question. But you said in your opening to that, "I think most people on both the left and the right of the spectrum, even if it's on different issues, can agree that government at different times has abused its power towards disfavored groups." So when you look at the Twitter files, do you see no cause for concern, cause for concern, cause for moderate concern?

Shoshana Weissmann:

For sure. No, I really appreciate it, and that event was a long, long Shoshana process of nerding around this area. And I tend to agree with Mike as always, except ... and it's also frustrating to see some of the ways Turley's gone with a lot of this stuff, because I used to love, love his work before he found out social media was a thing that could get him attention. And I'm like, "Oh, this is not the Turley I used to know and love."

But I think that the concern is good, and I think it's a bright spot in the Twitter files that people are refocusing: wait a second, it's government coercion, that could be a problem. And even if people see it as more of a problem in the Twitter files than I might, I think it's still good that they're focusing on, wait, is there coercion? What level? And starting to think through this stuff. Because as always ... someone of my view, I just think the focus needs to be on government, especially when you have congressmen during, I think, that hearing and others saying, "Did you fire these employees that criticized me? Why did you allow this speech?" That's the problem throughout these social media hearings. You've seen this. You see in certain cases one agency of government tell a platform that they should keep something up because they're monitoring it, and another agency say, "Take that down because it's dangerous."

And I'm not saying these platforms are like, oh, poor little things, but it is an unreasonable spot from a governance perspective to put them in. You have to have a coherent theory of enforcement there, so that agencies are able to figure out what the other ones are doing.

So with the Twitter files, I think a lot of it was overblown. I get the general concern, the vibes, and I'm glad that people are refocusing there, because I think that's really productive for policy in this space. But it's also just been an opportunity for clickbait, as a lot of things are. But, give and take, I still am kind of glad people are coming to the conclusions they are, just because I want the focus to be on what's the appropriate role for government in content moderation, and people realizing maybe they need to step away from that.

Justin Hendrix:

And I want to come back to some of the ideas that were discussed at your event, especially from Senator Lummis and the bill that she put forward. Mike, you wrote approvingly of, I think, a similar bill put forward in the House that would essentially do its best to draw some line, or add some transparency, around what government is able to direct social media companies to do. But Darren, I want to come to you because you've got a sort of different perspective on this. You found yourself a subject of the Twitter files, and I guess that gave you the opportunity to look at them closely, and you essentially come to this idea that perhaps they give us a way of thinking about the role of external parties, including government agencies, in helping the platforms to do what they seem never to be able to do, which is to actually enforce their own policies and procedures.

Darren Linvill:

Yeah, absolutely. I was certainly concerned about a number of things I read in the Twitter files, but not at all the same things that Turley is concerned about. What concerned me was the complete lack of structure that the platforms have for working with outsiders, especially qualified outsiders that might come with a particular expertise or perspective or information that could be valuable to them. I think that these relationships are ... they're not ... I can see why the platforms might not want to seek them out, but it's definitely in society's best interest that the platforms work with outside perspectives, because the government and the platforms can work to keep each other in check, just like other perspectives.

The reason I was pulled into the Twitter files, and my lab was pulled into the Twitter files, was because we had been giving Twitter information about accounts that we suspected were being run by the Russian Internet Research Agency. All of the accounts we gave them over the years were suspended. It was hundreds of accounts, but we had genuine disagreements about whether or not to attribute those accounts to the Internet Research Agency.

And I think that those disagreements can be healthy. I mean, you can see why the platforms wouldn't want to attribute something to the Internet Research Agency, to Russia. It's just going to mean another story in the newspapers that will make them look bad and maybe drive more of their user base away. But having somebody there to call them out and engage with them on these issues is important. There need to be ways that outsiders can communicate with the platforms; we need to expect it and even desire it.

Justin Hendrix:

Mike, I want to come to you. Just when you play devil's advocate on the Twitter files in your mind, looking at maybe some of the things that were discussed in the hearing this week: the extent to which employees at these social media platforms might communicate with the FBI or other government agencies in ways that are difficult to track. There was some discussion of the use of disappearing or encrypted messaging apps for some of this communication, and some of the communication does look extremely casual. Does any of that give you cause for concern?

Mike Masnick:

There are some elements that could give cause for concern. And to be honest, I expected to find a lot worse and to see a lot worse in the Twitter files than actually came out. I think that when you look at the details of each of those concerns, though, it shows pretty clearly that Twitter was pretty careful. I do recognize that there was some sort of casual communication, but that's kind of natural when you're dealing with someone on a regular basis; that kind of thing is going to happen.

The use of encrypted communications, as far as I understand it ... and again I didn't work at Twitter, but from what I've seen, everything that has been presented about encrypted communications was for things that actually mostly required that sort of thing around information, intelligence information that the FBI was sharing regarding foreign operatives. And so they had a special system set up in which the FBI could send that information in a secure manner to Twitter.

And so to some extent, I think that goes back to the point that Darren was just making. I actually think that one of the things that the Twitter files did show was that Twitter actually did have a bunch of these relationships set up and Twitter had their ... I forget exactly what it was called, but they had their sort of trust and safety council in which they were working with a number of outside groups and they had these other connections and regular contact with government agencies, but they were very careful about it and very careful that it didn't reach that level that got to the Turley level of censorship or government intrusion into private activities.

And so the thing that's somewhat incredible to me is actually how much better Twitter came out of this looking ... if you actually read this stuff and understand it, it felt like Twitter came out of it much better than I would've expected, in that they really were balancing all of these very tricky issues. They were willing to talk to different organizations, different civil society, different academics, government agencies, but they were always very careful to make sure that in the end it was their own decision making and their own process for determining these things. And I think that's actually commendable, and something that I thought they came out of looking really, really good for, at least for those who don't come in with a sort of preset determination of what the Twitter files must have said.

And so I do think that there are areas ... as Shoshana brought up, there are areas where there could be concern. And I actually, again, expected the FBI to be a little bit more engaged in trying to pressure the companies, and was actually somewhat pleasantly surprised that they weren't. I do think that there is a point that goes over the line, where the government is trying to coerce the companies into making decisions or really trying to pressure them. There are a few indications that maybe others in the government may have tried to put more pressure on the companies around certain disinformation topics. I think that is certainly toeing the line. There have been a couple examples of members of Congress sort of demanding certain things, and I think that is, again, borderline.

The determining factor for whether or not there's coercion is this: if government officials are communicating with companies about information that they found, or that they've seen or discovered, that they want Twitter to look at, and they then allow Twitter to make its own determination, I think that is the right level of engagement. The fear is when it goes over the line, but there's been ... I see no indication that it went over the line.

Darren Linvill:

Yeah, to add to what Mike was saying, I completely agree with this point that at the end of the day it needs to be the platforms that make these decisions about what's on the platform, what's suspended, and where they take action. And you really see in the Twitter files them pushing back on that very issue. But you also see the value of the pressure, especially in the Twitter files when they're talking about the Russian operations on the platform. There's an entire thread in the Twitter files about Russian ... about how they first went public with the Russian operations on the platform, and that took pressure from the government and also from other platforms. So it is a balance, but it's a balance that takes voices from the outside to be a part of a good balance.

Justin Hendrix:

Shoshana, one of the things that was discussed at your event that perhaps should maybe give us pause about the relationship between governments and social platforms was the international context. And certainly we haven't seen these American platforms necessarily always play well in other countries, whether that's India or other environments.

Shoshana Weissmann:

Yeah, so that's something that's really been interesting to me for a long time. For years I've been thinking through this ... not to say I'm the biggest expert. I'd always defer to the Center for Democracy and Technology here, especially with their deep knowledge in this area.

But if you look at various countries that just have fewer rights than we do in certain ways, it's easier for government to pressure them. And you have to think about where that line is. And in America we might have one view. We might be even almost a little more deferential to government because we feel like a lot of times they're trying to do the right thing. But if you look at China, where's the line? I think if we look at Disney as a company, I don't think that they're looking out for the best interest of the people in China when they still operate there.

But I think social media that try to are, because you can have this incredible democratizing force through social media. We've seen it all over the world, and it's worth it even with some restrictions ... even if the government's like, "Okay, you can have your platform but you can't talk about this one thing," that's probably still worth it. But if it's these two things, these three things, you can't do this, you can't allow these kinds of people on, then it might be that social media isn't going to have that same force anymore. And I don't know where the line is. I don't think it's always the same at the same time in the same country. I think that line moves, and there's a lot of considerations, and maybe there's not always a correct answer every time, but I think it is important to think through it. And part of that is thinking through the standards, the standards we set.

So whereas in America, if the FBI has an ongoing investigation and might want on the DL some help from Twitter, you might think like, "Oh, they're going after terrorists. They want to make sure," but what if a terrorist in another country is someone advocating for women's rights? Do you still want them to be able to work with those agencies on the DL like that? And I think that's part of the importance of those barriers, that they should be able to maintain it in each country as they see fit and probably set up stronger barriers there, so that way things require a warrant.

Ron Wyden has this great bill saying that government can't get around third party doctrine issues and Fourth Amendment stuff by buying data. I think that's really smart, to shore up the law there. But platforms have a huge role here to figure out: is it worth it for us to operate?

One of the most interesting examples to me, if I recall correctly, it might have been Iran, where Clubhouse was operating and they got around filters because it wasn't recorded ... if I recall correctly, recordings might stay on the platform for a bit, but it wasn't really accessible to government. They couldn't sit in every room. But the question became: what if they say, "Okay, you need to keep recordings, you need to give us these recordings"? Then it might not be worth it for them to operate, especially with people using their real names, which is one of the incredible things about it.

And I think when we think in the context of American law, we always want them to cooperate with law enforcement, far beyond what's legally required. But if we don't want them to do that, very rationally, in other countries in important circumstances, we might want to rethink how we engage with them and make sure that we're using the law appropriately. I think everyone here might agree that it's totally cool for law enforcement to be like, "Hey, we're not telling you to do anything, but that terrorist speech there, up to you, it violates your terms, but up to you."

Or scammers, they often start out with fully legal speech. Maybe the FTC is like, "This is what we're seeing, you might want to look out for it. Up to you." And I think that can be okay, which is why some other legislation gets to me and I think transparency is the way to go because that way we can kind of all draw the lines together. But I think the international context helps us remember, if we want special privileges here, we have to think about how that's going to play as a standard in other countries.

Justin Hendrix:

Let's talk just for a minute about some of the legislative ideas that have been put forward, including, Mike, the proposed act that you wrote about, the Protecting Speech from Government Interference Act put forward by Rep. James Comer. And then Shoshana, Sen. Cynthia Lummis talked about the Senate proposal from Sen. Marco Rubio and a handful of other Republicans, the ... oh gosh, let me see, remind myself what it's called here ... the Preventing Restrictions and Empowering Speakers to Enable Robust and Varied Exchanges in Online Speech Act, or the PRESERVE Online Speech Act, as it's known. What do you make of these proposed bits of legislation, and do they make sense? Shoshana, please, you start, and then Mike, we'll go with you.

Shoshana Weissmann:

Oh sure. Yeah, I'll let Mike jump in on a bunch of these. But I think that it's important that we don't go with standards that tell platforms what to do. If they're telling platforms what to do whatsoever, it's just not appropriate. If they're like, "Keep the speech up, take it down," you're already violating First Amendment rights. So especially there's been other legislation that's like, oh, conservatives are censored, so their speech has to stay up. What's conservative? Then some crazy guy can sue and say it was just political speech even though he's a Nazi, stuff like that. None of that has any justification.

But transparency, done in reasonable ways and in the right way, limiting what government's able to do, kind of setting up legislative guardrails around those First Amendment concerns about coercion, that's where we should be looking.

And it seems like Lummis has some real interest here, which I really appreciate, and Jordan and McMorris have a bill on this. I think it goes a little too far, though. I'm a little concerned about the way they draw the lines. I appreciate the intent, but I'm just worried that the FBI wouldn't be able to say, "Hey, we're concerned about this," under their standard, or that the FTC wouldn't even be able to say, "Hey, new scam format, you guys might want to look out for this," even if there's no coercion involved. And I agree with Mike, I think that fine line is the coercion: feel free to tell platforms stuff, but no coercion, no why-didn't-you-do-what-we-said kind of deal.

Mike Masnick:

Yeah, and I'm basically going to say the same thing. I mean, my description of the Comer/McMorris-Rodgers bill was that it was not totally crazy, which still means it was a little crazy. It was definitely focused on the right thing, in that it was saying the area of focus is the government and what the government can and cannot do, which is within the constitutional limitations. The language in the bill probably goes too far, which is what Shoshana was saying, in that it says that they cannot influence or advocate that any third party does something. And again, as Shoshana said, that cuts out a lot of things that we actually normally think government should do. They should be talking about there's a new scam, there are new threats. That's what we look to the government for and what we expect the government to do. That's providing information that is useful and helps protect the public. And I think that's valuable, as long as there's no coercion that says you have to take this down. If they're providing useful information, they should be able to do that.

And I fear that the way the bill is written, it would lead to that, and it would lead to some level of ridiculous lawsuits. I mean, you could definitely see scammers of some kind suing under this bill over the fact that the FBI announced there's a new scam going around. And that seems like a ridiculous and very problematic outcome. So that's my concern. But I do appreciate the fact that, unlike many of the other bills from both sides of the aisle, frankly, this was focused on actually making sure that the government isn't going over the line, though I think the bill itself is a little bit too broadly worded.

Justin Hendrix:

And I should just mention that the Lummis/Rubio proposal is much simpler than the House proposal, or at least it's much shorter. It includes essentially one main operative clause, around requirements to disclose when government entities request or recommend that a provider of an interactive computer service moderate content, et cetera, et cetera. I want to, Darren, just come to you on maybe something that you sort of addressed at the end of your piece in Lawfare, which is just the general sense of distrust in the platforms, the fact that to some extent that's operating here. There's a distrust of the platforms, there's a distrust of these institutions, and somehow that's kind of animating things beyond the actual evidence, certainly, that we've seen from the Twitter files or from other sources.

Darren Linvill:

Well, and that's actually what's funny to me about this entire conversation. If you look at the response to some of the Twitter files threads, a lot ... critics of the government suddenly want us trusting the platforms. When the government's trying to tell them what to do, suddenly the platforms are the good guys, when in fact everyone involved in this conversation has their own motivations and their own agenda. The platforms and the government and me as an independent academic alike, we all have our own agenda. We're right not to entirely trust each other, especially the platforms. They are for-profit entities who at the end of the day are more likely going to do what's best for the bottom line.

Even if there are wonderful individual people working at the platforms, and I know that there are, because I've worked with some of them with the best of motivations, they still answer to the corporation at the end of the day and they may not be able to do the right thing because they work within a system. So yeah, I think there is a distrust of all of the entities we're talking about in this conversation, especially the platforms, but I don't think that's entirely wrong either.

Justin Hendrix:

Shoshana, at your event you had a Facebook executive who'd been working on policy there for years, who also brought in some of the complications to potentially doing more disclosure, more transparency. Can you speak to that, just the conversations you've had with platform policy executives and the extent to which they see a lot of, I suppose, nuance or complication in these things?

Shoshana Weissmann:

Yeah. And also a lot of my knowledge here is from Mike as well. I learned so much from Mike, so if he's repeating stuff I'm saying ... it's because I was saying the stuff he's saying, and I tend to agree with it. But there's a lot of legal barriers, a lot of gag orders or functional gag orders. And when you start to break it down, you start to realize, oh man, what can we even disclose here?

There's also all levels of complications. So let's say, just for example, there's a Senate office that is mad that there's a bunch of stuff against that senator online, and he's like, "Hey, can you remove this?" Let's say it's AllTrails. I love using different platforms, and let's say someone's just ranting against the senator on AllTrails, and then they reach out to AllTrails, and AllTrails wants to disclose this stuff just in a normative way, not through government or anything, just to increase disclosure.

If they say, "Yeah, this senator asked us to take down negative content about him," then that senator's going to be like, "Whoa, why'd you call me out here? I'm going to keep a closer eye on you now." And there's that incentive, like, who do we want to upset here? So that's just one layer of it. And then there's ... they might have a great relationship with that senator, who's being very helpful to them, not in a corrupt way or anything, but in a really, we're-trying-to-work-through-policy-things-with-you way, and that could ruin that relationship ... not that it should. I'm not saying that any of this is okay, just that it's reality.

So you have so many legal barriers to it, you have relationship barriers like that, and then just scale and sorting. How do you decide what category this would go under? How do you disclose it? Do you want a target on the backs of the staff who emailed that, maybe at the request of another staffer or at the request of the senator? That kind of sucks for them.

And a lot of times staffers have to do what the other staffers want, or what the elected official wants, as well, so then you can paint a target on those people. And that just is frustrating. And then there are actual ongoing investigations that might have orders where you can't talk about it. But also, if they disclose something, it might get in the way of an investigation or something serious like that even without any legal order attached; it could still get in the way there.

And then different countries, like we were saying, if in India they're like, "Hey, take down this criticism of our president," and then the platform discloses it, oh man, they are not going to be happy and maybe they lose the ability to operate there. And it goes back to some of the things I was talking about earlier.

And these are just a handful of layers. But it's interesting to think about, because Google has a couple of case studies, and one of the most interesting ones to me was a hospital saying, "Can you please take down this content?" And the stated reason was that it violated their copyright. That is not why. No hospital's like, "Oh man, there are trademarks in here. Oh, you got to take this down." It was clearly that something was occurring in the hospital that they didn't want people to see. There's no other reason for it. And that kind of shows you the levels, because governments run hospitals. Sometimes there's layers of this disclosure that get into layers of government you might not even think of. And there's relationships to consider. They're painting a target on their backs, on staff's backs. There's a lot of complication. And then, while it might be ideal to have every amount of disclosure, it's also a huge time burden. So there's a lot of stuff to think through here.

Justin Hendrix:

Is there any political alliance to be had between those, mostly now on the right, who would like to see greater disclosure in the wake of the Twitter Files, and those perhaps in the center, or academics like yourself, Darren, who are arguing for greater data transparency, greater access to platform information? In some perfect world, could people maybe see these interests as the same, or is that too crazy?

Darren Linvill:

I mean, I lost my idealism a long time ago, so yeah, it's probably too crazy. But looking at it with a level head, certainly there's a common ground here. One reason we do know so much about what happened on Twitter regarding foreign interference from all kinds of sources was because relative to the other platforms, Twitter has always been very transparent, certainly compared to Meta or Google, extremely transparent. And I've had a closer relationship with some of the folks that worked there as a result of that. That caused conflict at various times, but at least I had the relationship.

And so I think that there's absolutely a value for transparency for society. And I want to believe some of that transparency gave value to the platform as well. While the conversations about Twitter weren't always positive for Twitter, at least ... what's the old saying about there's no such thing as bad PR? I mean, people were talking about Twitter and we're still talking about Twitter. And it's not even a top 10 platform in terms of size. It's tiny compared to all the other major platforms. It punches way above its weight. And one of the reasons we're still talking about it is because it's been transparent over the years. So yes, there's definitely value for transparency and I think that value goes all the way up. And if we can get more people on both sides of the aisle to see that value, I think it would be in everyone's best interest.

Shoshana Weissmann:

I'll just add in too, that I think with the Twitter files, that's one of the benefits that conservatives really started to see: "Oh wait, we might want transparency here, we might want limits here." And I think that re-shifted their focus in a way that's very helpful for people like me who want them to understand that ... let's say there is the worst kind of collusion between government and a platform, where it's really just suppressing speech. That's a First Amendment issue. That's government. It's not these platforms that are like, "Oh hey, come control us." Government's kind of known for sucking at running things. A lot of companies understand that and want to do their own thing, and it refocuses the conversation where I think it really needs to be.

So that's one thing that I think is helpful there. And there are a lot of people on the left who understand it. Ted Lieu is often tweeting that platforms can do what they want; he'll criticize it, but in a normative way, not in an "I'm going to come regulate you" kind of way ... Ron Wyden as well. And I think there are enough Democrats who get it that we can start to build that coalition around this issue more, and then get to other issues like privacy and stuff like that.

Mike Masnick:

So I'd like to jump in on the transparency stuff as well. I think that transparency as a principle and as a concept is really important, and I think most people sort of recognize that. I think it gets really, really challenging and much more risky when it becomes a government mandate for transparency, and you begin to open up a sort of Pandora's box of questions and dangers and risks.

And there is some irony here in that ... Shoshana mentioned like private ... or investigations in process and things like that. And there was a huge battle, now mostly forgotten, a few years back, where all of the tech companies sought to reveal just aggregate numbers of government requests for data: national security letters and a few other types of government requests for information. They wanted to reveal things like, in the last quarter we received 15 national security letters, and no more information than that. And the government went crazy about it and said, "You cannot do that. That would reveal some important information that would ruin investigations," which I don't see how that's even possible.

And so there was this sort of fight between the companies and eventually the Justice Department, and most of the tech companies came to an agreement, an agreement I think is way too non-transparent: they're not allowed to reveal these things except much later, and in a much less clear way. And Twitter, somewhat ironically I guess, was the one company that continued to fight that battle and actually went to court and claimed ... and I'm speaking as Twitter here, even though I'm not Twitter, I want to clarify that ... we're a company, we have a first amendment right to say just how many; we're not revealing anything that will ruin an investigation, we just want to say we received 15 national security letters last quarter. And they lost.

And so there is some irony in the idea that now suddenly the same government that was forcing these companies to be more secretive in the past now suddenly wants them to be a lot more transparent.

The other element on the transparency side is that, again, I agree the companies should be a lot more transparent. I like hearing stories where they're working with academics, where they're working with civil society, and they're opening up things and allowing these investigations to come forth. But when you start to get into mandated transparency, there's a whole bunch of risks that come with that. I'm not as concerned as Shoshana there: look, if a senator is having a staffer contact a company and say this content is bad, I actually find that problematic. I think the company should reveal it if the government is requesting that kind of thing.

But there are a few different issues. One is that some of the transparency requirements we've seen discussed in various bills, and certainly in some of the state legislation that's come out, are really content moderation bills in disguise. Basically what they're saying is: you have to reveal this stuff, because we know that if you reveal how much content you're taking down, it will be embarrassing and therefore encourage you to take down less content; or it will enable some sort of private right of action, a lawsuit that, again, is sort of a content moderation bill in disguise. And that, I think, starts to get at the first amendment issues.

So I worry about the bills that mandate transparency. There's also the question of who can actually do that, who can actually handle it. The tech companies were the ones that pioneered the idea of transparency reporting. Google was the first to have a transparency report, and then everybody started to follow. Some other companies ... the telcos came many, many years later, after people started mocking them: how come Google and Facebook and Twitter all have transparency reports, and AT&T and Verizon don't? Finally they started to come out with them too.

So these things are happening, but when the government steps in and says you have to do this, it starts to get trickier. The one area where I do think we could get more transparency without that problem, especially on the academic side, would be some sort of legislation that protects academics doing scraping: to be able to go in and get data, or, as has happened, to build a browser extension and ask people to install it for research purposes, so that all of the ads you see get sent to some university lab, that kind of transparency.

I think making sure that that is allowed would be fantastic, because companies like Facebook have threatened to sue NYU for doing exactly that, and I think that is problematic. But as long as there are methods that enable transparency without mandating transparency reporting, the kind that requires companies to reveal stuff that might then create pressure for them to act in a certain way ... I think it's a fine line, but there is a way to do it. I'm not sure anyone is actually going toward the safe version of that.

Darren Linvill:

I'm glad Mike made that last point about protections for academics, because per my previous point about all of us having our own agenda, you should probably disregard everything I said about transparency. I definitely have an agenda when it comes to transparency as an academic; it benefits me more than most.

Mike Masnick:

And I talk to a lot of academics about this, and I totally get where they're coming from, but I do worry that some of the academic advocacy on this is very biased toward "of course they should open up for us." And yes, I would love to see the companies providing more data and more access to academics, but when the government comes in and mandates it, that creates all sorts of risks. Remember, the example I always give is Cambridge Analytica, which is now held up, rightly or wrongly, as the clearest example of abuse of private data. That started as an academic project in which they wanted to access Facebook data for academic research, and it then turned into a giant privacy scandal. And so I could see a situation in which academic access leads to more Cambridge Analytica situations.

And so you have to have a way to balance that and make sure you don't end up with those sorts of privacy scandals, or other problems, or the sort of hidden coercion. That doesn't mean academics should be in the dark or that companies shouldn't be more willing to work with them. Again, that's why I tend to think that decriminalizing scraping, or allowing academics to create browser extensions or other tools that let them get access to this information, and companies being more open to helping academics do that sort of thing, really does benefit everyone. But I do worry about some of the proposals out there, some of which may even have momentum, in which the government just says the companies have to open up to academics, because I think there are risks there.

Darren Linvill:

Cambridge Analytica is the worst thing that happened to an entire generation of academic research on social media. Just as the Stanford Prison Experiment ruined a generation of psychology studies by making it harder for future researchers, Cambridge Analytica did the same to us today.

Shoshana Weissmann:

I want to add in quickly too, it's funny, because I knew some of the people there, and I had a friend who once interviewed with them. I know what they did in other countries is terrible. What they did in America wasn't super ... I mean, it was bad, but they weren't effective at the things they were claiming they could do. When my friend interviewed there ... this was years and years ago ... he was telling them about his work with pretty basic Facebook ad targeting, and the guy he was interviewing with was like, "Whoa, you could do that?" And my friend got off the phone and was like, "Shoshana, you will not believe this." And I remember being like, "All right ..." and so when they came out in the news, I'm like, "I have stories."

Justin Hendrix:

I want to ask one last question of you all, something I've been thinking about in the furor of the last couple of days, with the hearing in House Oversight, the Select Committee on the Weaponization of the Federal Government, et cetera. What's almost a little more concerning to me: if the FBI does call Twitter, or perhaps the CIA, is there anyone even there at this point to answer the phone if there were a legitimate national security concern? Should we be considering the counterfactual here?

Darren Linvill:

I actually had this issue myself recently. Several weeks ago I had identified an inauthentic network of Russian accounts posting in Russian. They weren't engaging in English-language politics, but they were talking about the war in Ukraine and they were very pro-Putin. Using every connection I had, it took me more than a week to find an actual human at Twitter I could speak to about shutting down those accounts. It took me a week to get ahold of somebody, and I can only imagine that the government would have similar problems.

Shoshana Weissmann:

Recently ... I live a very exciting life ... I had a very prominent Nazi target me, and he had apparently sent makeup to my office. I know this sounds fake; these are just things that apparently occur in my life. It's not even someone I'd really made fun of much or someone I'd talked to. But I tweeted about it, and I kind of regretted it later, because while I think it's important to expose anti-Semitism, which is what his letter was about, I just kind of didn't need all the hell. I got all the anti-Semites coming after me, so I was reporting them as usual.

And Twitter often does ... they won't catch everything I send them, they won't remove it all, but this time they were handling very little of it. They're like, "We reviewed our policies and this was okay," and it was really well over the line stuff. So at a point I stopped reporting it, because I'm like, "All right, well, I'm just not going to waste my time. I'll just block them all. That's fine." But it kind of speaks to that. I'm not saying Twitter endorses any of it, but I do think they might not have the resources to really go through it that they once had, which sucks if you don't like Nazis.

Mike Masnick:

Yeah, I think the company is struggling in general with the whole bunch of different things it's trying to do, but what it comes down to is the nature of new Twitter, which is that it depends on what Elon wants to do. If he discovers something he finds problematic, then he'll be open to talking to the FBI or the CIA about it. He's said a few times before ... it's kind of funny, because he's always like, "Oh, I think most of the FBI is good. I think they're really trying to do good. I believe in what they're trying to do," except when he misinterprets what they actually did with Twitter. Then suddenly it's criminal and the largest constitutional violation ever, or something along those lines.

So I think that if something is the kind of thing that will get his attention, then they'll talk to him and he'll probably do whatever it is that they want him to do much more readily than maybe the old Twitter might have done. But yeah, if it's something that he is not interested in and doesn't think is important, then I think that it sounds like yes, they will have trouble reaching anyone at the company.

And we've already heard that ... there was just a report from the New York Times looking at how the company is dealing with child sexual abuse material, and they spoke with officials in Canada and Australia. The Australian official in particular was saying that she has been reaching out to people at Twitter, and everyone she knew was gone. She's trying to discuss with them how they're handling child sexual abuse material, which is obviously extraordinarily serious and a very important issue, and she's getting no response from the company.

And that's an issue Musk has said is priority number one. So it really depends on how serious he is about it. And that's a very not-great way for a company to deal with these things. Most other companies, even companies much smaller than Twitter, have a team designed to deal with these kinds of issues as they arise, and they have a process. I don't think Twitter has that anymore. It's entirely one man's whims.

Darren Linvill:

They didn't have it before. I mean, that is definitely one thing the Twitter files have shown us. There was never a button the government could press to get in contact with the right person at Twitter. There were no processes. It was all about personal relationships.

Mike Masnick:

I would push back on that a little bit because I think that it did show that there was some process and there were some setups in order to do that.

Darren Linvill:

But it was about people at the end of the day.

Mike Masnick:

Well, to some extent, it's always going to be about people at the end of the day, but ...

Darren Linvill:

All those people are gone.

Mike Masnick:

So all those people are gone. But Twitter did have a process, and they did have a system. And what I've heard from other people, including people who have worked at Twitter on this stuff in the past, and people who've worked at other companies like Google, is that they often did have a very clear process when it came to government requests, precisely because they were so concerned about them. They would have a separate process for a request that came in from the government, because they wanted to make sure it wasn't crossing the line into coercion, and they would review those requests, so the lawyers would get involved rather than just the frontline trust and safety staff.

Darren Linvill:

I think we're talking about two different things. They had an internal process for handling requests, but how those requests were made and the initial engagement, that was all very personal.

Mike Masnick:

Perhaps. I mean, they did have a portal system. So I don't know. We're getting into areas that I'm not as familiar with. So ...

Justin Hendrix:

We didn't even talk about the portal, but of course at Shoshana's great event there was a little discussion about how you would handle it differently if a senator sends a request in through the portal that is perhaps related to a threat against a member of their own family or against themselves. Would that end up being something that would need to be disclosed? There's so much complexity here. We've run out of time. In the Musk or MAGA cinematic universe, I'm sure we'll continue to see quite a lot of hyperbole on these issues. But I appreciate you all for having a more reality-based and evidence-based conversation today. So thank you very much.

Mike Masnick:

Thanks.

Darren Linvill:

Thank you everyone. That was fun.

Shoshana Weissmann:

Thank you. Yeah, I really liked this.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
