
Governing Babel: John Wihbey on Platforms, Power, and the Future of Free Expression

Justin Hendrix / Oct 5, 2025

Audio of this conversation is available via your favorite podcast service.

Drawn from the biblical story in the book of Genesis, “Babel” has come to stand for the challenge of communication across linguistic, cultural, and ideological divides—the confusion and fragmentation that arise when we no longer share a common tongue or understanding.

Today’s guest is John Wihbey, an associate professor of media innovation at Northeastern University and the author of a new book, Governing Babel: The Debate over Social Media Platforms and Free Speech—and What Comes Next, which tries to answer how we can create the space to imagine a different information environment that promotes democracy and consensus rather than division and violence. The book is out October 7 from MIT Press.

Governing Babel: The Debate over Social Media Platforms and Free Speech—and What Comes Next, by John P. Wihbey. MIT Press, October 2025.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Good morning. I'm Justin Hendrix, editor of Tech Policy Press, a nonprofit media venture intended to promote new ideas, debate, and discussion at the intersection of technology and democracy.

Drawn from the biblical story in the Book of Genesis, "Babel" has come to stand for the challenge of communication across linguistic, cultural, and ideological divides, the confusion and fragmentation that arise when we no longer share a common understanding. Today's guest is the author of a new book that invokes Babel and tries to find an answer to how we can create the space to imagine a different information environment that promotes democracy and consensus, rather than division and violence.

John Wihbey:

I'm John Wihbey, associate professor of media innovation at Northeastern University. I'm the author of a new book, Governing Babel: The Debate over Social Media Platforms and Free Speech—and What Comes Next.

Justin Hendrix:

John, I'm excited to speak to you about this book, which is just out from MIT Press. We're going to talk through some of the themes in it, and also hopefully connect it to your research and some of the events that have, I'm sure, transpired since you turned this book in to the publisher.

But I want to just start with your intent here. In your introduction, you start with postcards from January 2021. Of course, for US listeners, when they think of January 2021, I'm sure their minds immediately go to January 6, 2021. You, of course, start there. You talk about January 6, 2021 and the unique role that social media and the current information environment appeared to play in the events of that day. But you point to other events that may be less familiar to the listener. Can you just talk about why January 2021 was your window into this book?

John Wihbey:

My initial research around the contemporary information environment and questions of content moderation really dove into the January 6th events. I drew on a lot of documents and really secondary sources that other people had. But one of the things I noticed when I started trying to put it in context was that in the month the third decade of the 21st century started, there were all kinds of warning signs around the world. They weren't politically connected, but they all had this common theme of the online environment and the particular situation I think presented by social media platforms.

In Russia, there were huge protests around Alexei Navalny; that moment was really the catalyst for the events that put him in prison and ultimately led to his death. In India, there were the farmers' protests that led to a lot of pushback from the Indian government and Prime Minister Modi. In Uganda, there were protests against the government, during which social media platforms were shut down by the government. In a span of just a few weeks, there were some really major world events that all had this strong social media dimension to them.

In opening the book, which is really about the Babel of confusion that is the world's speech and communications environment, I felt like we needed to paint that global picture and show how different societies with different values are dealing with common challenges.

Justin Hendrix:

On this podcast, we've come back again and again, including with your help (you've been on this podcast before), to questions around the intersection of information, social media platforms, and society. Certainly, the intersection between tech and democracy is a primary concern.

I'm struck by another thing in the introduction here. You say your goal is to "build towards a response principle." You call it "a duty for all those running networked online platforms to take reasonable action when potential harms present themselves." You talk about this collective responsibility we have as technologists, and certainly as the people who are running these platforms. I found myself reading that and thinking, "Yes, absolutely." And yet, it seems we're so far from such a principle at all being embraced by most of the oligarchs who are leading the big platforms that billions of people participate in at the moment.

I don't know. Where do you start in your mind? What's the crisis level?

John Wihbey:

Well, I think the crisis is really quite acute. I think there are some underlying economic and political factors that are also driving the polarization, the inequality that exists. But the online communications environment, I think, is amplifying and exacerbating a lot of these underlying trends. I think we're in quite an acute moment of crisis.

But the book tries to point out that we've been in crisis moments that at least are echoes of this moment. In particular, moments where questions of communications policy and regulation were quite unclear. You think of 100 years ago, with the advent of broadcast technologies. That's what the book tries to do: frame our particular crisis moment as part of a series of historical cycles. While many, I think, are pessimistic right now, if we think of this as part of a longer trajectory, and I call it a 100-year journey, I do think there is going to be opportunity. I think it's our job as scholars, technologists, and people doing work in civil society or working with the platforms or for the platforms to try to think about where we are going. Where do we want to actually aspire to? And to prepare ourselves for that moment when the opportunity, the policymaking moment, may present itself.

Look, it's the winter of our discontent for those in the trust and safety community and the international human rights law community. I'm not naïve. But if we take a historical view, a longer view, I do think that can present some bit of optimism, and it can also remind us that we have an obligation to start thinking deeply about what we would ultimately want to aspire to.

Justin Hendrix:

You take on red herrings like the marketplace of ideas, a metaphor other scholars have taken apart, showing its various weaknesses in trying to describe what's actually going on on social media platforms at the moment. You take us through a history of disinformation across mass media and beyond. I don't want to rehash that entire history, but as far as thinking about what is most important to you when you frame up your ideas and where they've come to at the moment, you're hitting on Lippmann, and Dewey, and John Stuart Mill, and other people. What's most important for the listener to understand about that past and where we've got to?

John Wihbey:

I think one of the most important things to recognize is that we've lost some visibility into core values and issues, in my view. I think by recovering some of the dialogues and debates of the past, we can begin to discern what American culture in particular could support and tolerate in terms of new rules. I think that's the most important thing.

Because right now, we're in a state of real polarization and confusion. To me, the only way out is to look very deeply to the first principles that have informed the country's past communications regulation, some of the past ideas that I think were lost in history. But if we resurface them, I think we can come to something that we might agree on. Again, I don't mean to sound Pollyanna-ish, but I really believe that the first step is to get some sort of structured debate, some sort of structured forum, perhaps a regulatory forum in place. I don't think we're ready for rulemaking. I wouldn't pretend that you could get a bunch of Democrats and Republicans into a regulatory agency with commissioners right now and have them agree on almost anything relating to speech in the online environment.

I do think if we spent a few years putting things on the record, gathering evidence, creating access to some data, making the companies show up regularly, we might move past some of the grandstanding we've seen, where the CEOs appear before Congress from time to time and it's a bunch of theatrics and spectacle. If we moved it to a more structured environment where we were gathering evidence slowly, looking at it, trying to get definitional clarity, thinking about the categories that we might want to gather data on, I think that could be a first step. It's not dissimilar to what we did with the FCC coming out of the '20s into the '30s. The FCC was founded in 1934, and it was a mess for many decades in terms of what the real rules were for licensees, for people who were doing broadcasting in the public sphere.

I'd like to get to that moment where we at least have some general principles. That's why I try to articulate this response principle, which is different from the public interest principle that the FCC had adopted as its core value. And to try to start seeing if we can structure the dialogue and structure the evidence a little more carefully.

Justin Hendrix:

One of the things that's perhaps most dispiriting about even thinking about that is the current state of regulatory agencies in the United States. Certainly, it's hard to imagine the Federal Communications Commission, which is busy serving as a censorial arm of the Trump administration, participating in a reasonable debate about these matters. It's almost like, for you, you have to imagine a future state of our politics where folks might be willing to engage in reasonable discussion?

John Wihbey:

I think that's right. Here, I'm drawing on the work of Tom Wheeler, who was the chair of the FCC under President Obama and who has some ideas about a new digital regulatory agency. Others have put forward ideas for a more dedicated digital platform regulator. There may be different ways to do this. I'm not a policy scholar proper, so the book is nonfiction narrative and really does a lot of storytelling, but it does have argument embedded in it.

You're absolutely right that it hardly seems the moment to reconstitute some kind of new FCC, which would inevitably be called the Big Brother Operation and maybe be dismissed out of hand. But I do wonder, as the laissez-faire speech consensus has fallen apart. You see it in some serious reservations about what the FCC is doing, particularly with regard to the Jimmy Kimmel jawboning, but in a lot of other respects. You start to see people on all sides of the aisle say, "Wait a minute, maybe there should be some rules and other principles that should govern this space."

After the Charlie Kirk assassination, we began to see people in the conservative camp start to talk about what should appear or should not appear on social media, under what conditions, with what functionality associated with it on different platforms. I actually think that's a good thing for people to start articulating views on where lines are. I think that's actually a beginning of a conversation.

It's worth pointing out that the traditional civil libertarians on the left and the free speech maximalists on the right have started to change positions around the issues of hate speech, around issues of disinformation, around issues of expression generally. The lines are getting really blurry, which to me says that there may be an opportunity in the coming years to try to figure out whether we could re-articulate a set of rules about large online communication spaces.

Justin Hendrix:

Let me ask you to characterize something that is a big part of the middle of the book, a couple or three chapters here: thinking about the First Amendment in the United States, thinking about the notion of free expression, and human rights as they apply under different international frameworks. I don't know. What's your assessment of where we've got to? We've had various other conversations on this podcast with First Amendment scholars in the past. We've had on people—like Jameel Jaffer, we've had on Mary Anne Franks—with various differing views on exactly where we're at. But what's your assessment, based on this book and your effort here, of where we've got to on those conversations?

John Wihbey:

The folks you mentioned are luminaries in First Amendment legal scholarship and jurisprudence. In some ways, I'm treading into territory that's beyond my expertise as a scholar. But as a writer, and as someone who's trying to tell a deeper story, one of the things I wanted to really investigate is this suggestion that the First Amendment and international human rights law in particular are somehow incompatible. What I try to excavate is the way that the efforts of Eleanor Roosevelt in particular, but also others, really took the American First Amendment and made it global. That informed the effort to create some kind of agreed-upon set of international laws, particularly relating to human rights and expression.

The book tries to remind folks that this was a very ... It was a global project to create, for example, the International Covenant on Civil and Political Rights and other landmark pieces of law. It was a deeply American project, though, in many respects. Although recent administrations in the US have balked, I think, at fully embracing a lot of this, it is in some ways deep in the DNA of the United States to try to think of the First Amendment as a global principle that can help protect dissidents, can ensure free expression, can help human rights activists. It can ultimately speak to the protection and the flourishing of democracy. I think we've lost that. There's a real pessimism about the ability of international law to really mean anything. In some ways, I'm trying to tap into some memory of what the US did do and, as such, could do.

Justin Hendrix:

You focus in particular on incitement. You have some recommendations for social media platforms in particular and what they should do about the problem of incitement. Can we just touch on that one for a bit? Because going back to your entry point in January 2021, that seems to be one of the, I guess, highest-level concerns that everyone has around ... We're seeing it across the globe. I'm even thinking about the recent events in the UK, where arguably disinformation, including claims spread by Elon Musk, led to violence. We've seen many other examples of this in the past. How do you think social media platforms should think about incitement in the current environment?

John Wihbey:

Well, it's a really difficult question in terms of getting really great definitional clarity, and then the platforms being able to operationalize it. But I would say if someone is effectively live-streaming a domestic terrorist act, as happened on January 6th, but as we've seen around the world in different cases, I think a reasonable person, whether in the United States or around the world, would agree that the companies have some obligation to prevent that type of action from occurring on their platform. And when it's discovered, to have sufficient capacity to do so.

One of the things that came out of the January 6th congressional investigations was that, for example, the Twitter platform, now X, really just didn't have the capacity to control the online environment. There just weren't enough firefighters, there weren't hydrants. This is from some of the unpublished but since-revealed reports that folks like Dean Jackson were involved in authoring. The platform just didn't have enough people and didn't have the technical capacity to chase this stuff down. Even stuff that was clearly violating federal law and was dangerous.

Requiring platforms to have enough personnel and enough technical capacity to be able to fight against incidents that involve terrorism, civil conflict, and other kinds of really dangerous environments and situations seems to me like a reasonable thing. Maybe some kind of preparation standard. Certainly, I think that would be a first step in terms of requiring some kind of protection against incitement.

But the incitement chapter also looks at a few other possible ideas. One is this idea around overt actions and actors. I think it means helping the platforms to see that part of their duty of care is doing more than just looking at messages, but also trying to discern what's going on outside the platform. If there's a group that is clearly trying to foment violence, there should be a duty to actually do some additional work. That may mean phone calls, it may mean discovery, it may mean calling law enforcement. But getting some context, I think, is also an obligation. You'd have to structure that really carefully, but I will say platforms have contact with law enforcement every single day. There are hundreds of contacts.

This is really tricky territory because it gets into this collaboration that I think has become quite controversial. But one of the recommendations in the book is that if you had a new regulatory forum or agency, you could channel all of that government contact into a transparent environment where it's actually recorded. There's some oversight, there's independent oversight. Right now, what we have is a pretty willy-nilly hodgepodge of relationships between government and platforms, jawboning, and all these different considerations. There's no one place to put all these communications where there's some oversight.

Then the final thing I note in the incitement chapter is just the issue of encryption, which I think is really bedeviling a lot of folks, including the platforms, but also law enforcement and civil society groups. Because so much of the communication, particularly illicit communication where violence might be advocated or illegal conduct might be instigated, is done on end-to-end encrypted platforms. There's a huge question of what you do with the WhatsApps and Telegrams and Signals of the world. Certainly, privacy protection is an incredibly important value, and we wouldn't want to violate that. But there are some new ideas about, for example, message franking, where there would be a way of at least validating that some kind of bad activity is going on if someone within that communications channel wanted to report it out as a potential problem. Right now, you can't verify the authenticity of any message because you can't get it out of that particular communications system.

I think there are some creative ideas. The privacy maximalists, the encryption folks who are very scared about breaking encryption in any way, shape, or form are having to rethink some of the stuff that the computer scientists are coming up with. Here, I'm just reporting on certain existing debates, I don't have expertise in it, but I think it's quite interesting.
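To make the franking idea concrete, here is a minimal sketch of the commitment-based approach that proposals in this space describe, assuming a simplified three-party flow (a sender, the platform server, and a recipient who may report). The function names and the flow itself are illustrative, not any platform's actual protocol; the point is that the server never sees the plaintext, yet a reported message can still be cryptographically verified.

```python
import hmac
import hashlib
import os

# Illustrative sketch of message franking (hypothetical names, simplified flow).

def sender_frank(plaintext: bytes) -> tuple[bytes, bytes]:
    """Sender commits to the plaintext with a fresh one-time franking key.
    Only the commitment is visible to the server; the plaintext stays
    inside the end-to-end encrypted channel."""
    franking_key = os.urandom(32)
    commitment = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    return franking_key, commitment

def server_stamp(commitment: bytes, server_key: bytes, metadata: bytes) -> bytes:
    """Server binds the opaque commitment to delivery metadata
    (e.g., sender ID and timestamp) with its own MAC."""
    return hmac.new(server_key, commitment + metadata, hashlib.sha256).digest()

def verify_report(plaintext: bytes, franking_key: bytes, commitment: bytes,
                  stamp: bytes, server_key: bytes, metadata: bytes) -> bool:
    """On report, the recipient reveals the plaintext and franking key.
    The platform re-checks both the sender's commitment and its own stamp,
    so a fabricated report fails verification."""
    ok_commitment = hmac.compare_digest(
        hmac.new(franking_key, plaintext, hashlib.sha256).digest(), commitment)
    ok_stamp = hmac.compare_digest(
        hmac.new(server_key, commitment + metadata, hashlib.sha256).digest(), stamp)
    return ok_commitment and ok_stamp

# Example: a recipient reports an abusive message and the platform verifies it.
if __name__ == "__main__":
    server_key = os.urandom(32)          # held by the platform
    message = b"example abusive message"
    metadata = b"sender=alice;ts=1700000000"
    fk, commit = sender_frank(message)
    stamp = server_stamp(commit, server_key, metadata)
    assert verify_report(message, fk, commit, stamp, server_key, metadata)
    assert not verify_report(b"forged text", fk, commit, stamp, server_key, metadata)
```

The design point in this sketch is that verification needs both the sender's commitment and the server's stamp: a reporter cannot forge a message the sender never committed to, and the platform learns nothing about messages no one reports.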

Justin Hendrix:

I think during the period when the court and civil society were arguing about the facts of the case in Murthy v. Missouri, there was a desire, and I remember myself even saying, "There needs to be a reasonable conversation about the relationship between social media platforms and government, and certainly the relationship between law enforcement and social media platforms."

But we're now in this phase where we've got this hodgepodge of third-party vendors often operating in this membrane, somewhere between the state, social media, and law enforcement. It seems to really complicate things. I don't know exactly where we'll get to. I guess the other complication is that in some of these cases, it would take a government that wants to stop some of this activity, rather than one that turns a blind eye or perhaps even encourages it. In the United States, that seems to be an open question.

John Wihbey:

Right. We're just not in a political position right now to imagine a quick solution. But I will say, as many problems as we have in the United States, if you look across the world at what is happening in terms of governments meaningfully intervening with social media and online platforms to try to suppress dissent, to try to silence dissidents, and human rights activists, and journalists, it seems to me important that the United States creates some kind of interface for how this would work. How should government and communications platforms interface? I think that's a really big question.

Sometimes it's very low level: a local cop needs some access to some Facebook data because there's been a crime. Then there's the high-level stuff that the NSA and the CIA and others get involved in, which has national security implications. It seems to me that if the United States could provide a model for at least transparency, a central clearinghouse as it were, and say, "Look, this is compatible with best practices and here's how we're going to do it," we could have a shot at trying to get other countries to replicate that model. The jawboning that's going on in the United States is obviously super problematic, but there's other stuff going on around the world that is in many respects objectively more problematic. We have to figure out what that interface between government and platforms looks like and get some kind of transparency and oversight.

Again, I don't want to sound naïve, but I do think it would serve the world well if we could figure out what the minimum viable policy solution would be there, and then capture all of those government contacts in a way that's structured and subject to review. That could be courts, that could be regulators or independent watchdogs.

Justin Hendrix:

A major focus of this book is what you talked about last time you were on this podcast, and what you've written about for us as well: AI and epistemic risk. I feel like this is probably going to be the area that dominates the concerns of researchers like you over the next several years, perhaps even beyond. Whereas social media has been the primary concern for the last 15 years, now we're entering into a different phase.

You introduce all types of terms here that feel like they're important for the listener to think about. Things like context collapse or container collapse. You talk about different ways that artificial intelligence is being used to intervene in speech on social media platforms, or to generate more speech, more content. I don't know. When you step back right now, in September 2025, what have we learned, even from this early stage of generative AI and its presence on the internet?

John Wihbey:

Well, we've learned a lot. I think one of the big lessons really concerns the proliferation of AI-generated content. If we imagine what could happen in the coming years, we could very well imagine many of the platforms becoming semi-unusable. I think about the AI revolution sometimes through the prism of my students, who are 18 to 25, depending on where they are and what kind of degree they're pursuing. What's interesting is that they don't feel like this is their revolution, as opposed to the social media revolution, which was very much youth-driven. This is a revolution that is in some ways being imposed on them.

I think that they see their platforms, their backyard, which is Instagram, and YouTube, and TikTok, and whatnot, being in some ways polluted. I think it's creating a lot of backlash and skepticism. I'm interested in how, as the generation advances and they become people who are working in the professions, how they will see AI. That's one thing. I don't know if that's coherent.

I think the AI slop problem is a big one. The authenticity of content is a real question. The term that I try to frame many of these things around in the book is epistemic risk. A lot of people in Silicon Valley and some of the effective altruism community, and others who are doing really good work, often talk about existential risk. This is the killer robots, the out-of-control AGI sci-fi visions that people have. I think it's important to think about those things. But I do wonder if really the bigger risk in the short term is epistemic, insofar as the nature of knowledge changes. The ways in which we're able to have any sense of shared reality, a shared sense of facts, are diminished even further. Obviously, that's taken a big hit in the social era generally.

If you go on X for example right now, it is a real hall of mirrors sometimes. I'm one of the people who stayed on it, I'm at least a lurker. Every day, I have a very hard time discerning what is authentic and what is not. Whether it's the voices, or the images, or the videos, or the social data of likes and comments that inform how we see something. A really ungoverned space online with a ton of generative AI content I think can look really dark and really epistemically confusing.

Justin Hendrix:

One of the things that you did as a part of this, or you fed into the book I suppose, is some polling around the world. We've talked mostly in a US context here. I should reassure my European listeners and listeners elsewhere that you do talk about the Digital Services Act, you talk about the European model of regulation, you talk about various other countries' consideration around these issues, particularly around speech. Can you talk a little bit about the polling and how some of those results play into the way you think about these things?

John Wihbey:

Yeah. Our team at Northeastern has survey research touching on different countries around the world, but there's been a lot of subsequent polling and survey research from a lot of other great groups. I was just reviewing a bunch. There are a couple of findings that I think unite both our work and the work of others, whether it's the TUM Group in Europe, whether it's Pew, or many other scholars out there doing this work. One is that the US tends to be an outlier in terms of having stronger libertarian, First Amendment fundamentalist views on social media. In other words, real hesitancy around content moderation. Although, I would point out that once you start to point to specific interventions, context labeling, down-ranking stuff that's really nasty, a lot of the American public comes around to seeing that as a sensible trade-off.

Around the world, pretty much the finding over and over again is that there is a very strong appetite among publics for some kind of content moderation. Once you get really specific on hate speech and disinformation, there's some variation, but I think the platforms and the policymakers would be well served to look at this and say, "This is strongly supported by publics." They see trade-offs, obviously, between free expression and some degree of moderation, but the polling and survey data is pretty consistent.

That said, even within regions, for example East Asia, and I surface some of this data, which I think is from Pew, where you would notionally have a lot of cultural overlap, there are quite different sets of views on the trade-off between keeping social stability and harmony and being able to say whatever you want. Even between countries that border one another, you can see a lot of differences.

One of the findings from my research is that if you ask really high-level questions, you often get tons of consensus. If you ask more specific questions, you start to see more regional and country variation. The platforms, frankly, then have to operationalize what the rules are. They have to respect national law; they always say that. I would not pretend that it's an easy set of trade-offs and balances to strike in each country, but I think there's broad consensus around there needing to be some amount of content moderation. The conversation should be: where are the thresholds, what are the definitions? How do we keep things roughly under control even while allowing lots of ideas to ventilate and to circulate?

Justin Hendrix:

John, I want to come back to this idea of the response principle. You say early on that the book builds towards it, and by the end we come to this notion of it. You say, "The response principle offers an opportunity to consolidate a new set of expectations, a norm that could be broadly translated into local law. This principle derives from original traditions of media and communications governance thinking, namely First Amendment jurisprudence, the fairness doctrine, the right of reply, the power of counter-speech, various aspects of the law and information justice movement particularly relating to transparency and human rights impact assessments that have been developed over the past century."

I just want to give the listener the basics on this response principle. Maybe, if you will, convince me why it's not essentially an appeal to the owners of these platforms to behave in a more moral fashion?

John Wihbey:

Sure. That's maybe my rhetorical flourish to try to bring it all together. But I do believe that if you look at the whole trajectory of particularly 20th century history into the 21st century, and you look at American jurisprudence and the trends, that's what I'm trying to trace. You see what one observer called a "rule of elementary fair play." Which is that if people are criticized, or there are harms levied against a certain group or set of people, there should be some kind of response. I think it's very deep, actually, in American culture, this elementary rule of fair play.

I will note that there was this idea of the right of reply, which got basically adopted around the world. It almost got enshrined in some of the UN's founding doctrines, but it didn't quite make it. Again, I'm not a legal scholar, but if you trace its history, it's the thing that gets rejected in the unanimous Tornillo decision in the '70s, which basically says the government cannot tell publishers, newspapers in that case, what to do. The case was about whether someone had the right to print a reply in a newspaper.

It became central again in the recent NetChoice decision. It came sailing back because, to me, that case was about rejecting a right of reply. Broadly, we see this response principle at play. In NetChoice, it's interesting because if you read Justice Barrett's concurrence, she makes a bunch of observations about how algorithms would play into all this. Are certain types of algorithms infrastructural? Are other types of algorithms expression that we should protect?

I see, in some ways, recurring issues from our history that have gotten lost, for example this right of reply, being resurfaced in new and different ways in an algorithmic era. I'm trying to do a genealogy of ideas in the book. I don't know how successful it is, but I try to bring that forward. I distinguish this response principle from a public interest principle, which is where the FCC began. I think we're dealing with very different communications capacities and capabilities. I can't imagine a public interest standard being implemented. People have talked about, "Let's get the fairness doctrine back somehow." I just don't see a path for that. A lot of policy scholars, like Phil Napoli, have looked at it and said this is pretty dubious.

But I do think we might be able to, within this risk assessment framework and this response framework where there's a rule of fair play, start to think about shaping a regulatory environment that asks the companies to show some kind of responsiveness and social responsibility. As Tom Wheeler says in his book Techlash, we probably should be effects-based. To the extent that there is regulation, it probably shouldn't tell people exactly what to do. Companies need capacity for iteration; it needs to be agile. This is a really dynamic environment, very different than broadcast. But giving people directional instructions and making it an effects-based regulation, I think, would be good. I hope the response principle could be an umbrella idea that encompasses a lot of different elements.

Justin Hendrix:

John, you're speaking to me from Stanford, where you're attending the Trust and Safety Research Conference. This is the first year I haven't attended that conference, and maybe I'm slightly missing some of the folks I know are there, including many Tech Policy Press contributors and many of the folks you cite in this book who are presenting. I don't know, what's your lay of the land? How does it feel in that community this year? A lot has changed since last year, not to mention the administration, the politics of everything, but certainly even the emphasis, I suppose, that the tech leaders are putting on the space. What do you make of it? Maybe I'll even challenge you. How does it tie into the book?

John Wihbey:

Well, Justin, many tears have been shed because of your absence, I should say that first of all. Actually, some of your colleagues are obviously here. It's always a wonderful event to see people who are working on common questions and ideas. Industry typically is here, and they're here this year. The AI companies are here in some force, and so is Google. Some of the other companies didn't send as many representatives this year; that might be a political thing, it might be a scheduling thing.

I think the atmosphere is one of great uncertainty because both government and the major platforms I think were quite engaged with this research community for many years and have pulled back. Both because of the Trump administration's policies, but also the pivots from many of the major companies. I think there's more trepidation around all these issues of trust and safety which have been politicized. There's a sense that we're in a winter cycle I think in trust and safety, and in international human rights law, and other areas which are all relevant.

Nevertheless, there's a great amount of interest in thinking about how generative AI is going to intersect with the communications platforms. I think there's a lot of energy around just trying to study this new variable in the equation. I think a lot of people are also starting to think about how they could help US states formulate their policies. Whether it's AI, and you know there have been more than 1,000 bills or something like that that touch on AI, but also trust and safety regulations. Whether it's relating to youth harms, or relating to scams or other kinds of harms that may occur on platforms, people are really engaged, I think, at the state level.

And also, our colleagues in Europe. You mentioned the Digital Services Act, the Digital Markets Act, the Online Safety Act in the UK. There's a lot of regulatory action just across the pond. Representatives from Ofcom, which is the British communications regulator, are here. Other folks from across the world who are doing national-level regulation and who are looking for ideas and allies are in the mix here.

I think in some ways, the T&S community is looking both more locally within the United States, but then also internationally, because a lot of the policy action ... As I say in the book, the US is sitting on the sidelines now at the national level. There's really nothing going on at the federal level to speak of, and there doesn't seem to be any prospect for meaningful legislation to be passed in the next few years. But there's plenty of action, both down at the grassroots level and also across the ocean.

Justin Hendrix:

I appreciate that dispatch. I can't imagine there are terribly many tears being shed, but of course I'll shed one for missing the opportunity, and also for missing a visit to my favorite restaurant in Palo Alto, Zareen's, which has excellent Pakistani food. I recommend it to any listeners.

But, John, thank you for taking the time to speak to me from the conference. And I commend to my listeners the book, Governing Babel: The Debate over Social Media Platforms and Free Speech—and What Comes Next, out from MIT Press. Thank you so much.

John Wihbey:

Thanks, Justin.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President of Business Development & In...
