Reconciling Social Media & Democracy: Fukuyama, Keller, Maréchal & Reisman
Justin Hendrix / Oct 12, 2021

On October 7th, Tech Policy Press hosted a mini-conference, Reconciling Social Media and Democracy.
While various solutions to problems at the intersection of social media and democracy are under consideration, from regulation to antitrust action, some experts are enthusiastic about the opportunity to create a new social media ecosystem that relies less on centrally managed platforms like Facebook and more on decentralized, interoperable services and components. The first discussion at the event took on the notion of 'middleware' for content moderation, and featured Francis Fukuyama, Daphne Keller, Nathalie Maréchal, and Richard Reisman.
Tech Policy Press editor Justin Hendrix opened the discussion and introduced Dr. Fukuyama, before turning the moderation over to Reisman. Below is a rough transcript of the discussion.
Justin Hendrix:
This is one of the most important issues of the moment: how we arrive at consensus, how we sort out the problems of this information ecosystem-- that is a key problem that we're all facing together. But on a more basic level, the conversation we're having today is really about advancing a conversation that's been taking place among some of these panelists. It was first prompted by a working group at Stanford that Professor Fukuyama was part of, and then a handful of entries that followed that in the Journal of Democracy, including some initial provocations from Professor Fukuyama and then responses from the others.
We're going to hear some great ideas today. We're going to hear some debate. These people don't all necessarily agree with one another, but it'll be a civil discussion, I'm sure, because everybody here has the same common interest, which is to try to make a better internet and make the internet safe for democracy.
Professor Fukuyama is a senior fellow at Stanford University's Freeman Spogli Institute for International Studies, Director of the FSI Center on Democracy, Development, and Rule of Law, and Director of Stanford's Ford Dorsey Master's in International Policy.
In his essay this spring in the Journal of Democracy titled “Making the Internet Safe for Democracy,” he wrote, "There is nonetheless a great deal of confusion as to where the real threat to democracy lies. This confusion begins with the question of causality. Do the platforms simply reflect existing political and social conflicts, or are they actually the cause of such conflicts? The answer to that question will, in turn, be key to finding the appropriate remedies." He turned to the platforms, and he said, "No democracy can rely on the good intentions of particular power holders. Numerous strands of modern democratic theory uphold the idea that political institutions need to check and limit arbitrary power regardless of who wields it."
This week, we got a taste of what goes on inside Facebook with the revelations brought forward by whistleblower Frances Haugen. I'd urge people to take a look at the whistleblower disclosures that were filed with the SEC-- the names and excerpted contents of some of the studies. Really, a company operating on a vast scale with little oversight or transparency engaged in what could-- and I'll be provocative here-- be summarized as unregulated social engineering.
We're going to talk about how and whether we should take apart the pieces. There are many people who believe a decentralized social media ecosystem that reduces the power of these platforms may be the solution. Some of you listening are proponents of that. Some of you, skeptics. Some are here to hear these ideas and decide which side you might be on. But I think everyone that's part of this discussion today, I should hope at least, is here for the same reason-- which is to strengthen democracy and to hopefully guard the liberties that this particular form of governance affords us.
So, we certainly have the right people for this discussion today. This segment is going to be led by Richard (Dick) Reisman, who is a frequent contributor to Tech Policy Press and writes about these issues on his own blog-- so I want to welcome him. And then I want to introduce our first discussant, who is going to lead us through some of the key thoughts and framing thoughts from his pieces in the Journal of Democracy, both “Making the Internet Safe for Democracy” and “Solving for a Moving Target.”
So, Professor Fukuyama, thank you for joining us.
Francis Fukuyama:
So, thanks to Tech Policy Press for hosting this discussion. Thanks to Daphne and Nathalie for the comments that they've made already. I think that we're going to have a rich discussion about this. So, let me just give a little background on the working group on platform scale at Stanford, which is part of a larger project on democracy and the internet that we've been running for the last couple of years.
This working group actually started out as a Stanford working group on antitrust related to the digital platforms, but as we started looking carefully at this set of issues, we decided that antitrust was really not the right lens with which to address what we saw to be the biggest problem that these big platforms-- and by the way, by these big platforms, there's basically only three... it's Google, Facebook and Twitter, we're not talking about any other companies-- but the real problem that they posed was one of political power.
Antitrust law, as it's developed in the United States and honestly, in Europe as well, is really focused on economic harms, exclusionary conduct or anti-competitive behavior that creates harms to consumers in terms of the products they see and as we see now, harms to privacy and the like, but they don't really address what I think many people have regarded as the central problem, which is the fact that these platforms basically are the main channels today for political speech. They've displaced the television networks, the legacy media as the primary vehicles by which people communicate about political issues. In that respect, they're extraordinarily powerful, and they have a scale that rivals or possibly exceeds that of the three broadcast networks, over-the-air networks back in the 1950s or '60s. It's really that power that is at the center, and it was our feeling as we thought about it that antitrust law really does not address the major harms that those platforms produce.
So, if you think about what the harms are, I mentioned the economic ones. There are social and privacy harms, because the business model of a company like Facebook is basically to grab as much of your personal data as possible and then to milk every penny of revenue out of it that they can, but the political harms, I think, are the ones that have been of greatest concern, especially since the 2016 election. Those really have to do with the platforms' tremendous ability to disseminate misinformation, conspiracy theories, uncivil abuse and the like that many people have linked both to the polarization that I think is probably the single biggest political challenge to American democracy right now and also to a general deterioration of deliberation and civil democratic discourse. That's directly related to the business model of the platforms, who do not have a responsibility for improving the quality of democracy. They have a responsibility to their shareholders to maximize profits. That's all often related to their ability to accelerate information that is salacious and clickable but not true and not in line with the kind of deliberative mode that you would want in a democratic society.
So, that was the basic problem. It was a question of power. In designing a response, it seems to me, you want an institution. You don't want to simply get Mark Zuckerberg to agree that, yes, we're going to keep anti-vax information off of Facebook because what you want is an institutional solution. It shouldn't depend on the fact that Mark Zuckerberg or Jack Dorsey happens to be running these companies, and they are kind of aligned with your social goals, and they're willing to do the sorts of things that they're pressured to do by activists, because one of the things you have to think about in the future is if you've got a platform with this much power, what if it's run by a Rupert Murdoch at some future point that's going to use that platform power for very different political ends?
So, you want a solution that's kind of neutral with regard to the actual owners and content moderators that are currently in power. You want to essentially try to reduce the power of those people. We believe, as a normative matter, that a private corporation should not have this kind of authority over political speech, and that's for two reasons. One is just a normative reason that they are not built to be dedicated to the protection of democracy. They are devoted to their own economic self-interest. Secondly, it's not clear that they've got the capacity to make the kinds of complex nuanced political decisions to determine what's fake news, what's acceptable political speech, what is not.
They do perform this function in other areas. They keep things off of the platforms having to do with child pornography, terrorist incitement, and so forth that are relatively non-controversial. I think everybody should be grateful that they do this. But when it comes to political speech, I think most people can think of instances in which their judgment has been very questionable. Certainly, given the polarization in this country, there's probably half of Americans that think that they've been doing a terrible job at this. So, the solution really has to be one that's kind of neutral with regard to the owners and the actual people running these platforms. It really has to be an institutional solution that somehow reduces their power.
Now, the solution that we've come up with, we've labeled middleware. Basically, in a nutshell, what this involves is the effort to outsource political content moderation away from the platforms to a layer of competitive companies that would tailor the moderation to the desires of the actual users of the platforms because among other problems right now, the platforms moderate content based on algorithms that are completely non-transparent, and they're not user definable. They are trying to intuit from your browsing behavior what you would like to see, but you can't tell them that you don't want to see a certain kind of content. So, the idea is to transfer the ability to, in effect, filter or moderate your feed on the platform to another company that you could choose voluntarily or multiple companies that would give you the kind of content you want.
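To make the shape of that delegation concrete, here is a minimal, purely hypothetical sketch-- not the Stanford working group's design, and with all names invented-- of the kind of interface a user-chosen middleware provider might implement:

```python
from dataclasses import dataclass
from typing import List, Optional, Protocol


@dataclass
class Post:
    post_id: str
    author: str
    text: str


@dataclass
class Verdict:
    post_id: str
    score: float            # ranking weight assigned by the user's chosen provider
    label: Optional[str]    # optional label, e.g. "disputed" or "satire"


class MiddlewareProvider(Protocol):
    """A third-party service the user voluntarily selects to rank and label their feed."""

    def evaluate(self, posts: List[Post]) -> List[Verdict]:
        ...


def apply_middleware(posts: List[Post], provider: MiddlewareProvider) -> List[Post]:
    """The platform supplies the raw feed; the user's chosen provider decides the ordering."""
    verdicts = {v.post_id: v for v in provider.evaluate(posts)}
    return sorted(posts, key=lambda p: verdicts[p.post_id].score, reverse=True)
```

The only point of the sketch is that the ranking and labeling decision moves out of the platform and into a component the user picks, which could be swapped for a competitor's.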
The reason that we got to this solution is that in our view, the competing ideas for how to solve this problem don't really work. I mean, they may be good ideas at some level, but as a practical policy matter, we didn't think that they would be appropriate. One option is the one I started with, which is to use existing antitrust laws either to break up the platforms, which would certainly reduce their power, or to use antitrust as a way of limiting their ability to behave in this manner. We don't feel that, given the way those laws are written right now, that's going to work. I don't think you can break up Facebook because I think network economies will mean that one of the successor baby Facebooks will soon occupy the position that Facebook holds right now. In any event, it's going to take 10 years, and we don't have 10 years to do a kind of AT&T-style breakup. So, that alternative, I think, is not going to work.
A second idea that's out there is to mandate some form of data portability or interoperability among the different online platforms so that people would be able to leave, potentially leave Facebook and move somewhere else. This idea, I think, sounds good in the abstract, but I think as a practical matter, it's really not workable. That has to do just with the kind of technical issue that platforms are very heterogeneous. The most important data that they possess is not the data that you've given them about your email address and your phone number and your address and your credit card. It's actually the metadata that you produce by interacting with the platform. There's some question as to actually who owns that data, but if you think about how do you translate a like on Twitter or Pinterest into a Facebook like and how do you transfer the fact that you liked a certain speaker or something from one platform to another, given the heterogeneity of the data that they make use of, it's really not clear how you would make that interoperable.
I would just point to medical data where for the last 20 years, they've been trying to come up with portable medical records. After a lot of investment and effort, they've not been able to do this.
A third option is to enhance privacy protections. Daphne can say a lot more about this, but under Europe's GDPR, in theory, you're not allowed to take data that you've gathered from, let's say, selling books and then use that to sell diapers the way that Amazon has done. That's a possible approach. We don't have privacy laws, comparable privacy laws except at a state level in the United States, but again, there's problems with that because in a certain way, that limitation might actually just lock in the advantage of the existing incumbents who are already sitting on a mountain of data and would primarily affect newer platforms that were trying to compete with them.
So, this is what's led us to this middleware idea, the idea that you would have companies, smaller companies that would compete with each other for the service of filtering content according to the wishes of individual users. So, for example, you could imagine a coalition of universities in the United States funding and backing an NGO that, in effect, would certify the academic credibility of websites and then mandate that students and faculty using their servers use that middleware provider to give students some idea of what websites are more academically credible than others.
You could extend this to non-political speech. You could do it for searches. I mean, at the moment, if you search something on Google, you get a listing where the hierarchy is determined by this hidden algorithm. You might want to be able to search for, let's say, products on Amazon that were made in the United States or that are eco-friendly or that meet a whole bunch of other criteria that you could define, rather than having the platform algorithm do it itself.
Now, the way that this would work, we actually are working at the moment on a prototype of this. We have a prototype that's actually an extension to the Chrome browser, and we hope to be able to roll this out further. There's a number of problems with the workability of this middleware idea that really need to be solved, I think, before this becomes something that could actually be done in practice. So, one of the problems really has to do with the backend of, let's say, a browser extension, because you couldn't possibly filter the millions of bits of information that come across the platform day to day without using a lot of artificial intelligence. The platforms already do this, as I said, for a lot of the content-- they filter out the child pornography and that sort of thing, and they would have to continue to do this. A middleware program would have to ride on top of the material that they've already filtered, but it couldn't be done manually. You would face a considerable technological challenge of how to filter that information. It is being done already. There's a company called NewsGuard that rates news sources for their credibility. You can buy it as a browser extension, and they're working with Microsoft in its search engine to rate news sites. But it is a substantial technological challenge.
The second big challenge is a business model one. In a way, if this was such a desirable service that people would want to have when they use the platforms, you'd have to ask the question, why does it not exist already? And the reason is that there's not an economic incentive for someone to step up. I mean, NewsGuard tries to be a middleware provider. It's not clear whether it's going to be economically viable over the long run. And in order to incentivize companies to provide these kinds of services, there has to be a different revenue model, and that might require regulation. It might require Congress mandating that the platforms share a certain amount of their advertising revenue for this. Certainly, they'd have to open up their APIs to the point that the middleware companies could actually ride on top of them and so forth. And so, that's a second issue.
A third one I'll let Daphne talk about, because she's raised this particularly with regard to Facebook, has to do with whether this is compatible with existing privacy law. Let me conclude just by pointing to the single biggest objection that has been raised to our idea. And that has to do with the fact that middleware may actually not get rid of conspiracy theories, hate speech, and other kinds of toxic content, and may actually reinforce them, because, in our view, there would be nothing that would prevent a middleware company from being the MAGA (Make America Great Again) middleware provider that would amplify everything that Donald Trump says and the like, and that's right. I think you would inevitably expect that this kind of provider would arise if you actually made middleware economically viable. So, it will reinforce the filter bubbles and compartmentalization that are a big problem on the internet right now.
However, our feeling is that it cannot be the object of public policy to get rid of this kind of speech. If you take the American First Amendment seriously, it is constitutional to say things that are false. It is constitutionally protected to be able to say things that are uncivil, hurtful and the like. And it's not really the job of public policy to eliminate that kind of speech. And unfortunately, given the state of our society, it's just out there right now. What we don't want to see happen is the artificial acceleration of, let's say, conspiracy theories on a scale that the big platforms are currently capable of. In our view, the real target of any effort of this sort is to avoid that kind of acceleration, or takedowns where you have legitimate political information that is not being shown by the platforms in response to certain kinds of political pressure.
And this is happening for example in India with Facebook, where they've taken down a lot of anti-Modi content in ways that I think do not really accord with a commitment to freedom of political speech. I just want to make it clear at the beginning that this solution does not solve all of the problems on the internet, but I do think it tries to address this big one of power and that's the one that we ought to be focused on, going forward. So with that, let me stop. And I look forward to the discussion.
Justin Hendrix:
Great. So thank you very much, Professor Fukuyama, and I'm going to turn the mic over now to Richard Reisman, an innovator, consultant, investor, and author of pieces on Tech Policy Press and his blog that concern these matters. And Richard's going to emcee the remainder of the session and introduce our next two speakers, Nathalie Maréchal and Daphne Keller, who will variously take on some of the points that Professor Fukuyama has just shared. So, Richard.
Richard Reisman:
I'd like to just go ahead and introduce Nathalie Maréchal from Ranking Digital Rights. And I think, Justin, you may have mentioned that all of these articles from the Journal of Democracy are available online. So, there's good background there on what each of these speakers has to say, but Nathalie, please go ahead.
Nathalie Maréchal:
Thank you, Dick. And thank you very much to Justin and Bryan for organizing this event, and to Francis Fukuyama for kicking off this conversation in the Journal of Democracy, as Dick highlighted. So, I think one thing that we all agree on here is that we're dealing with an extraordinarily complex problem. And so, that means that there are no silver bullets, but as Dick put it in his piece, there may be a few silver-cased bullets. In my piece for the Journal of Democracy, I argued that data privacy was one. A lot of people-- though not Francis, for reasons he just explained-- argue that antitrust is another one. Personally, I suspect that we're not going to get very far without using at least a couple-- possibly all-- of these silver-cased tools together.
So, the question here is whether middleware or third-party recommender systems are one of these silver-cased bullets. I am not convinced, but I am open to being persuaded. And whether or not it's a silver-cased bullet, a separate question is whether it's nonetheless a good idea that should be pursued. And on that, I'm much more convinced that it's a good idea, whether or not it's a silver-cased bullet.
Another thing that I think there's large agreement on, both within this group and more broadly, is that the core problem here, as I see it and as Francis framed it, is the business model. But what do we mean by that exactly? I suspect that many people have slightly differing definitions. So, I'd like to start by sharing mine-- what I mean when I say that 'it's the business model.'
In my piece for the Journal of Democracy, I highlighted three pillars. The first one is surveillance capitalism, using Shoshana Zuboff’s framework. That's extracting data from human behavior to create corporate value by renting out the ability to influence behavior. Now, this ability may be overblown in some cases, and there's a whole separate but related question about ad fraud. But that's the value proposition: that you can use personal data and targeted advertising, as well as earned reach-- to use the industry parlance-- to influence human behavior. The second pillar is neoliberal faith in the invisible hand of the market, which includes automated ad exchanges-- the idea that free markets are the best way to make decisions in society, and that's how we should be allocating value. And then the third pillar is techno-solutionism: wanting to use tech, including algorithms based on big data, to solve social problems.
And that's what's behind, for example, Facebook's obsession with scale-- solutions that cannot be scaled are not valued and tend to be rejected out of hand within Facebook. And that's something that scholarship and journalism have documented for a long time, and that Frances Haugen’s recent leaks and her congressional testimony really bring the receipts to this argument. That's not new, but we have new evidence to back it up.
So there's a bunch of other elements that I didn't touch on in my piece. Three that I want to mention today. The first-- that again, I really do think is a core part of the business model, particularly at Facebook-- is really bad corporate governance, where the founder and early employees have really outsized power, which leads to a kind of groupthink that I think Frances Haugen, again, really testified to this week, and that was revealed in the Facebook Files stories that The Wall Street Journal ran in September.
The second element that I want to mention here is how central courting favor with governments is to this business model. Here again, this is particularly clear at Facebook, but I think we can find examples with other platforms as well. What I'm talking about here with courting favor is when platform content policy is subject to influence by government relations considerations. So again, using Facebook as the example-- and we can talk about why so much of the conversation is really about Facebook, even though we tend to frame it as being about platforms at large. Oftentimes, when we talk about platforms, we're really talking about Facebook. The Wall Street Journal, a year or two ago, did some really great reporting on the situation within Facebook India, where Ankhi Das, who was leading the public policy team for India and South Asia at the time, was really, really close to the BJP and had herself expressed some hate speech against Muslims on Facebook, and so on. And there's a ton of other examples of this.
And then the third thing that I want to highlight, which I think is really important to understanding how Facebook in particular operates as a company, is that the corporate identity is completely distinct from the core economic activity. So, we're talking about platforms that make money from ads but think of themselves as social media platforms with really high-minded missions of connecting the world, even though their ability to fulfill that mission is undermined by the economic incentives. And so, that's the case that I made in a piece for Tech Policy Press over the summer, where I argued that Facebook is an ad tech company, and that's how we should regulate it. So, that's some context for how I think about this idea of the business model and what we do about it.
We're here today to talk about the specific proposal of middleware or third-party recommender systems. While I at least think that no solution is going to be enough on its own, we need to examine each potential solution individually, while keeping in mind that the idea is to implement it in conjunction with others.
So, my starting point is that third-party recommender systems are probably not a silver-cased bullet, though I could be wrong-- but is it still a useful thing to try? A number of my colleagues in Europe have pointed out to me since the Journal of Democracy piece came out, that they do think that third-party recommender systems are useful, particularly in Europe because the GDPR already protects privacy. Although, as we all know, GDPR enforcement is not where it needs to be. And that's a really good point.
A few colleagues-- particularly at Panoptykon Foundation-- pointed out that without privacy protection there are a lot of potential pitfalls with a third-party recommender system, but that the picture changes if you assume that those protections are already in place-- and Daphne spoke in her own essay to the complex privacy questions that need to be resolved, and hopefully we'll hear some more from her about this in just a few minutes. I thought that was a really good point that I would bring up here. But anyway, we probably won't know whether it's a good idea unless we try. So, let's try, right?
But the bigger question for me is whether it's even feasible, and I mentioned that Daphne really helpfully identified four big problem areas that need to be solved. So there's technological feasibility, which is a point on which I have no contributions whatsoever, but I nonetheless want to flag that that's a question that needs to be solved. The second is the business model-- how does everyone get paid?-- and that's one that I'm particularly interested in talking about. There's curation costs, and the question of what impact such systems might have on public discourse. And then there's privacy, right? And then I'll add another one, which is how do we make existing platforms go along with this plan? Can we compel them legally? Can we create economic incentives to make them want to?
I think there's a need to figure out how we would get them to open up their APIs at a minimum so that this can work. Somebody pointed out that Twitter used to allow third-party clients to connect to the API. And so, if this was still possible-- it's not anymore-- these third-party clients could essentially function as middleware providers. The question is, why did Twitter cut off that possibility? What would it take to convince Twitter to bring it back? For a bunch of different reasons, I think Twitter is probably a more fruitful platform to try to experiment with than Facebook, for example, because Facebook is really drawing up the drawbridges-- insert your medieval metaphors as you will.
But another point I want to make before I turn it over to Daphne is that I think to answer all of these questions, one thing that it's helpful to be really, really clear about is whether we're envisioning middleware services as doing content moderation, which I define as deciding what the rules are for content on the platform, identifying content that breaks those rules and then taking action on it. It can be helpful to think of it as a ‘leave up’ or ‘take down’ binary, though there are a lot of intermediary content moderation steps that can be taken to action content that's identified through the content moderation process. So, are we talking about that or are we talking about content curation, recommendation and ranking? That would be the engagement-based algorithms that we've been talking about so much this week in the context of the Haugen testimony. Or are we talking about both being not outsourced, but being delegated to third-party systems?
So, Francis's proposal seems to be, from my reading anyway, about both content moderation and content curation, but there's a lot of variations on this theme. Certainly when colleagues in Europe are talking about it, they use the phrase third-party recommender systems, because they're just talking about the recommender systems. They're not talking about content moderation. Personally, I see far greater problems with third-party content moderation than with third-party content recommendation. Mainly because content moderation requires tremendous amounts of human labor and really specialized human labor, especially when you think about all the different languages involved, plus all the different contexts involved, plus all the different subject-area expertise involved. Here I think this might be a situation where there's a natural monopoly, where it makes more sense to have fewer, larger entities doing this work, but with tremendous amounts of oversight and accountability.
On the other hand, content recommendation-- and I tend to use recommendation, curation and ranking interchangeably, because even though on a technological level these are different types of algorithms, they fulfill the same rough function from a user perspective-- is mostly, again, based on algorithms. I think it would be easier for there to be different competing recommendation algorithms than for there to be different, competing moderation providers. Of course, there's a ton of human involvement in creating the data that algorithms are then trained on. That's where I wanted to start. So, I'll turn it back over to Dick and to Daphne. I'm really looking forward to the conversation. Thanks.
Richard Reisman:
Thank you. I think let's just move right on to Daphne who's at Stanford.
Daphne Keller:
Sure. I'm Daphne Keller. I run the program on platform regulation at Stanford Cyber Policy Center. And I want to talk about the middleware proposal. I think I have 15 minutes; I'm going to try to use about seven and a half of them in the weeds, and the rest kind of zooming back out to the big-picture questions.
So, as I outlined in my response to Francis in the Journal of Democracy issue, I have four ‘in the weeds’ questions about how this becomes implementable. Only 'operationalizable' things happen. So, figuring out how to operationalize middleware is of the essence. And my four questions, roughly speaking are: How will the technology work? How will the revenue work? How will the costs of content moderation work? And how will privacy work? Those first two, I don't claim particular expertise in. So, I'll dispose of them quickly. The technology question is, can you build a way for a competitor to remotely via an API take in information from Facebook or from Twitter or from Google's web search corpus, for example, in a way that allows instantaneous, fast, equally useful results?
I don't know. That sounds hard to me, but I've had some engineers tell me that they have hope. The second question is about the revenue, and Nathalie talks about that a fair amount, how do you set this up so that the middleware providers get paid? Does that mean tinkering with the existing very weird multi-party ads infrastructure? How do you make that work? And then the third and fourth, the content moderation costs and privacy, I will dwell on a little bit longer.
So, the part about the costs of content moderation, I think, comes down to the fact that there are redundant costs that every middleware provider will encounter, that are a lot like the redundant costs that basically every platform encounters now. For example, if there is a widespread video with a man on a horse singing a song in Kurdish, are they all going to hire a Kurdish translator to translate it, or is there some way to make that system more efficient and share the translation costs? If that video features a flag that means a lot to Kurds and to Turks, but doesn't mean something to a content moderator sitting in California or the Philippines or India, do they all have to find the experts to research that? Or is there some way to share that information? Or if it’s an American wearing a Hawaiian shirt in 2021, a content moderator sitting in another part of the world might think that looks benign, because it has no cultural resonance for them.
I think the bottom line is many of those redundant costs can't be avoided if we want the different middleware providers to genuinely bring independent judgment and priorities to their evaluation of the content and to their decision about whether you, as one of their subscribers, want to see this, or don't want to see it, or want it promoted or want it demoted. But I also think that there are proposals out there in the ether right now that would be the beginning of an infrastructure to deal with that. And these are models for platforms to share information. There are things like Facebook's ThreatExchange program, which I don't know that much about, but I hope someone's researching it. There are things like the GIFCT, the Global Internet Forum to Counter Terrorism, which has the great upside of sharing information and hashes to identify violent extremist content to reduce the costs of that. It has the great downside that nobody in all of civil society or academia or the public generally knows what those hashes represent, knows what images are being blocked, what videos are being blocked, what the error rate is, et cetera. So, the models we have so far for platforms sharing information to make content moderation more efficient have some real problems. On the other hand, if we want middleware to be affordable, if we want the competing moderators to have any chance of doing their job and being able to bear the costs, then I do think we need to find ways to share those costs.
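As a simplified illustration of the hash-sharing idea Keller references-- assuming exact-match hashes for brevity, whereas shared databases like GIFCT's rely on perceptual hashes that tolerate small edits-- the mechanism might look like this:

```python
import hashlib

# Hashes of items that some participating platform or provider has already reviewed.
shared_hash_list = set()


def fingerprint(content: bytes) -> str:
    # Exact-match fingerprint; real shared databases use perceptual hashes.
    return hashlib.sha256(content).hexdigest()


def contribute(content: bytes) -> None:
    """A platform or middleware provider adds a reviewed item to the shared list."""
    shared_hash_list.add(fingerprint(content))


def already_reviewed(content: bytes) -> bool:
    """Other providers check the list rather than paying to review the same item again."""
    return fingerprint(content) in shared_hash_list
```

The transparency downside Keller flags is visible even in this toy version: the set holds only opaque hashes, so outsiders cannot tell what is actually being matched or blocked.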
And to Nathalie's point, I'm trying to think about whether I really see a difference between content moderation, meaning responding to an individual piece of content by taking it down or demoting it or something, versus ranking algorithms that globally set the whole order of your Facebook feed, for example. I think those two are increasingly fluid these days, so I'm not sure that we even quite can differentiate them in practice. And I would think that, again, for a middleware provider to be useful, probably they would want to do both-- unless you're a user who says, "Well, I want to see only the content Facebook permits under its moderation system, but I want it ranked differently," or you're willing to take one of those two things from Facebook and only the other one from your middleware provider. That brings us to the privacy question. And the question here is not very complicated, but the answers to it, I think, are quite complicated. The question is, if I'm on Facebook and I sign up for a middleware provider to moderate what I see differently than Facebook would, or to rank it differently, does the middleware provider get to see the posts shared with me privately by my friends?
Do they get to see my cousin’s breastfeeding photo? Do they get to see my cousin’s assertion that the Earth is flat, or posts spreading COVID disinformation? If they can't see those privately shared things, then they can't do nearly as good a job providing the service we want them to, or be economically competitive with Facebook, which can see all of those things in order to do its own moderation. On the other hand, if they can see all of my friends' posts, my cousin's breastfeeding pictures, et cetera, then my friends have lost control over their sensitive personal data. They're relying on me to make an informed consent decision about this potentially fly-by-night middleware provider taking their data and being responsible with it. And that's basically exactly the scenario that we had with Cambridge Analytica. It is something that makes people very upset. And so, as a matter of both values and law, including things like Facebook's FTC consent decree and the GDPR in the EU, that's a complicated question.
And I have a more recent blog post that I wrote after that Journal of Democracy piece that tries to get really nerdy about this and ask, are there technological fixes for this problem? Can we use the blockchain? Usually I never even use that word, but it has a role here, I think, maybe. But I do think the bottom line is that, at most, the technological fixes can eat away at parts of the privacy problem. And then you can use laws to eat away at other parts of the problem, mostly by punishing bad actors after the fact, if you can catch them and they're in your jurisdiction. But the bottom line is ultimately, I think, that there are trade-offs that will have to be made between these competition and speech goals on the one hand and privacy goals on the other. And I wish that weren't the answer, and I hope somebody here has a better one, because I would rather have it all.
Okay. So those are my ‘in the weeds’ things. I'm going to step back to the big, big picture thing. Something I really appreciated was when Dick wrote a post expanding on the points that I had made. I said there are four issues-- the technology, the revenue, the costs, the privacy-- and Dick said there are five things. And the fifth one is echo chambers: do we really want this policy outcome of moving people into places where they are potentially opting into the ‘all lies all the time channel’ or ‘the hate channel’? And I think I didn't spend enough time on this because, among other things, I’m a First Amendment lawyer, and I feel like we're looking at legal reform here, and the law can't stop people from opting into that any more than it can in their choice of cable or newspaper or books.
But I think we need to engage with it more deeply, because it is a really, really big question for a lot of smart thinkers in this area. And I think it's a really big and profound one. Do we want a platform like Facebook, or Twitter, or YouTube, or Google to be really big because that makes online speech more controllable, or don't we? And we might want them to be really big for these very real reasons that build on existing consensus and laws. We might want them to be really big because then they will do a better job than a million little competitors would do of reducing globally illegal and dangerous speech, like child sexual abuse material. That's one of the bottom lines here. Maybe on the flip side, we want them to be really big and regulable because then lawmakers can somehow force them to protect speech that no government should be able to restrict consistent with human rights. And if we believe that that regulability leads to, on balance, better outcomes, then maybe that's actually what we want.
The other reasons to resist fragmentation of control over content moderation aren't grounded in law or global consensus like that, but they can feel pretty consensual, certainly in the groups where I tend to spend my time-- which is stuff like, well, we want them to reduce electoral disinformation. We want them to reduce COVID disinformation. These are things that, in the U.S. at least, largely cannot be prohibited under the First Amendment. We can't use agreed-upon constitutional instruments to do this. We cannot use laws enacted by democratically accountable bodies like Congress to do this because of the First Amendment. But these things are rightly a source of massive concern. And when I talk to platform people who just work their hearts out on trust and safety teams, they can't imagine wanting to give that up, wanting to take the important work that they do trying to fight things like disinformation or hate speech and make it less effective by fragmenting the user's experience into a bunch of different middleware providers. I find that really understandable.
On the other hand, if that's our goal in preserving bigness-- if our goal is to enable the enforcement of speech restrictions the government could not enforce-- that's a pretty big deal. We should acknowledge that we're abandoning how government has worked so far, both the part about democratic accountability and the part about constitutional or human rights. And if that's what we're doing, we are accepting that these are profit-driven companies, that their choices are likely to be driven by majoritarian pressures and by the priorities of advertisers. Their power is not just the power of Mark Zuckerberg or Jack Dorsey. It's potentially that they're a point of leverage for governments or for whoever controls access to lucrative markets, like China getting Apple to take the NYTimes app out of its app store in China and an app used by protestors out of the app store in Hong Kong. There's a real way in which giant powerful platforms become a lever for someone else acting behind the scenes.
So, having laid out this big, horrible, stark dilemma, I think I come down in the same place that Francis does and that Cory does. I really like the line in Francis's Stanford report-- which, by the way, I didn't actually work on; I did some other work on what I called magic APIs elsewhere-- I really like the metaphor in there about the loaded gun that someone's going to use, like Chekhov’s gun that, once it's introduced, is going to be used by the end of the play. And I worry about having that choke point and that power out there, because I really believe in the fallibility and the corruptibility of human institutions. Regime change happens. You can look at a lot of countries and a lot of companies and a lot of other institutions to see that. We can't assume that government power or platform power will always be used for the values that any one of us hold dear; they could be used for the opposite values.
And so, building a choke point on human interactions that's explicitly unaccountable to democratically enacted laws or human rights or constitutional rights, and then hoping or believing it's going to be used benignly-- that's an act of faith that I have trouble sharing. So, that giant big-picture question is why the ‘in the weeds’ questions matter, why it matters if this is implementable and how, and what value trade-offs are involved in making it implementable. And I'm really excited to have this group, including all of the amazing people who are showing up in the chat here, to talk about it.
Richard Reisman:
Great. Thank you. So, we've got about 35 minutes to discuss all of this before we move on to other speakers. I wanted to make some opening comments to integrate some of the views that I've had listening to all of this and set some possible themes and then open it up for whatever debate we want to do, which I think will be lively because of all the viewpoints we have.
These are cross-cutting issues, with the idea that we're at a critical moment in the evolution of democracy, and we need to look at the evolution over time on a fairly long horizon-- the next 10, 20 years-- not just what we're doing for the next year or two, because we're at a critical point and we're basically digitizing our traditional ecosystems for human discourse, how we mediate consent socially. We've got this dialectic we're talking about between centralized and decentralized control. But coming from a systems background-- I actually designed some systems that have to do with open markets and filtering and collaborative work on open innovation about 20 years ago-- I've been watching this evolution for a long time. And I think what we're going to see is a distribution of functions that's going to be a complex mix of centralized and distributed with crossover controls. Financial market systems are an interesting model because they have that same kind of pattern. There are growing cross-platform issues-- like the Trump stuff goes onto Twitter, even though he's not on Twitter. And we've got the network effects that drive scale, but obviously we need some distributed control to deal with all the variations of individuals and cultures.
We've also got the perverse incentives of the business model that we need to decouple, and unbundling the filtering does some level of decoupling-- you can argue about how much. So, I think we all agree there's no silver bullet. I think we might see middleware as an early step toward this more diverse, distributed view, and I think we're going to see a universal infrastructure where people can post. And I think Cory Doctorow and Mike Masnick are going to talk more about this direction, where you want universal interconnectivity, but you want control over the filters that determine what you see, and our society wants that as well. So, all of these other remedies are going to layer upon that infrastructure. There are also questions buried in this debate that have surfaced a little bit-- questions of what we mean by free speech versus proportionality, and there are differences in Europe versus the U.S. about that. And basic models of what democracy is-- there are different models that give more or less responsibility to citizens and to government.
So, perhaps we can move towards some agreement, and I get a sense that all of these things make sense and that we should find ways to jointly promote them to legislators and regulators to make these things happen. So, that's the very general point.
Picking up on a couple of the other points-- which Francis had raised in his piece, and then Nathalie has raised-- which have to do with the difference between blocking (moderation) and filtering (ranking and recommenders). It seems to me that's really essential, because it has very different impacts in terms of both what it does to dialogue and the technical effects of how you do it, and where you want control for that to be. So, even though there are extremes of illegal content that need to be blocked, and probably central control is the way to do that efficiently, there's the other extreme that's very discretionary that clearly can be left to ranking and recommenders. And then there's a middle ground that's hard to figure out. And some of that has to do with this: if we build good ranking recommender systems, and maybe we have some level of coordinated flow control that prevents these cascades of feedback that build the viral extremes-- so that the filters can do their work reasonably and not get blown away by rapid cycles-- then you can have this distributed control and have it work. So, that's something I think we need to move toward; it's obviously not immediate.
Another point I wanted to pick up on was the privacy issues, and the question of whether you share your friend's personal stuff with whoever's managing this. But to me, the metadata on flows is really more important than the content of what's in those flows for managing it, in a lot of ways. And it has to do with some of the algorithmic approaches that I've looked at that use Google's PageRank strategy, where you're using human judgment and recommendations and reputation to figure out which speech is likely to be problematic and which isn't, and also to use it in your ranking recommenders of what's going to appeal to whom. So to me, the metadata that lets you amplify the human judgments that you are sensing from these signals is really the more powerful tool that hasn't been exploited very well. It's been exploited for selling ads, but not for moderating content, and that will help control the virality as well. So you don't have to give up the privacy of the content if these services have access to this metadata, and it can be anonymized so that it's not associated back to the individual in an identifiable way.
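A toy sketch of the kind of reputation-weighted ranking Reisman is gesturing at-- loosely in the spirit of PageRank, with all field names and numbers invented for illustration-- might look like this:

```python
from collections import defaultdict

# Anonymized metadata: opaque user IDs, their accumulated reputation weights,
# and which items they have endorsed (liked, shared, vouched for).
reputation = defaultdict(lambda: 1.0)   # opaque user ID -> reputation weight
endorsements = defaultdict(list)        # item ID -> list of opaque endorser IDs


def item_score(item_id: str) -> float:
    """Rank an item by the reputation of its endorsers, not by raw click counts."""
    return sum(reputation[user] for user in endorsements[item_id])


def update_reputation(user_id: str, judged_well: bool) -> None:
    """Nudge an endorser's weight up or down as their past judgments prove out."""
    reputation[user_id] *= 1.1 if judged_well else 0.9
```

The signals here are behavioral metadata tied to opaque identifiers, which is the sense in which the private content itself need not be exposed to the filtering service.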
And then the final point I wanted to make was on business models-- that just decoupling the filters from the platform gives you a little bit of decoupling. But there are other opportunities, and one has to do with this idea that the ads are using our attention and our data, and lots of people have said, "We should be compensated for that." And there's this idea of the reverse meter, which I think is the central way to pay for these middleware or filtering services. This could work for advertising as well, because it shouldn't just be Facebook or Twitter targeting ads to you. If you had a filtering service, it could be specific to advertising-- where you can set preferences as to what kind of ads you want, what interests you have, what you don't want to see. And you have a negotiation over what ads you're willing to see, and you could be compensated from the advertising revenue, because they want your attention and they want your data. And so, there's a price for it. And the price is that you pay for a service, which is the user agent: you pay the user, and a share goes to their user agent to fund this process of ranking and recommenders-- both for advertising and for content. My other blog is more on business model stuff, and it gets into more detail on ways you can do that.
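As a minimal sketch of the 'reverse meter' arithmetic Reisman describes-- the split percentages are invented placeholders, not a proposal:

```python
def split_ad_revenue(ad_revenue: float, user_share: float = 0.2, agent_share: float = 0.1) -> dict:
    """Divide ad revenue among the platform, the user whose attention and data were used,
    and the user's chosen filtering agent (the middleware provider)."""
    to_user = ad_revenue * user_share
    to_agent = ad_revenue * agent_share
    return {"user": to_user, "agent": to_agent, "platform": ad_revenue - to_user - to_agent}
```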
So with that, as some general comments, I think it's time to open it up for all of you. So whoever wants to go first, I guess we'll have a free for all, and I can comment as seems appropriate.
Daphne Keller:
So, I'm interested in responding on the metadata point, and maybe this is a bad place to start, because I'm just going to make things more complicated and harder. So, I think a lot of content moderation does depend on metadata. For example, spam detection and demotion is very much driven by metadata. And Twitter has said that a lot of how they detect terrorist content isn't really by the content, it's by the patterns of connections between accounts following each other or coming from the same IP address or appearing the same-- those aren't the examples they gave, but that's what I assume they're using. And I think it's a big part of what Camille Francois has called the ABC framework-- Actors, Behavior, Content-- as three lenses for responding to problematic online content.
And I think it just makes everything much harder, because if we pretend that metadata isn't useful to content moderation, that kind of simplifies things. If we acknowledge that metadata is useful, then we're dealing with data that is often personally identifiable data about users, including users who haven't signed up for this new middleware provider, and it's a different kind of personally identifiable data than just the fact that they posted particular content at a particular time. And all of the concerns that I raised come back, but in particular the privacy concern, and just, how do we even do this? What is the technology that takes metadata structured around the backend engineering of Twitter or whomever and shares it with a competitor? That gets really hard. So I'm scared to hear you bring up metadata, because that adds another layer of questions I'm not sure how to solve.
Richard Reisman:
Well, one quick thought on that is, I would think, given your time at Google-- I don't know if you got exposed to the people doing their detailed algorithms-- but as I understand it, they do some amazing stuff with some really hairy math and lots of computation to make sense out of this array of links, which they use as signals of what they used to call "webmaster" judgment. And then they use patterns of what you actually click on, when you click on search results and all this kind of stuff. And to some extent, I mean, there's an interesting question: is that private data? Because in regular culture, people get reputations for how they behave, and we decide who to listen to based on their reputation, not just what they say. So, I think, yeah, you're raising important points, but I think there's a counter argument that that's fundamental to how society figures out what's meaningful and what isn't. So, that's something we need to sort out.
Daphne Keller:
Sure. Although, we can't extrapolate from patterns of behavior on the public web to patterns of behavior among users who have private accounts on Facebook.
Richard Reisman:
Yeah. It gets tricky. Yeah.
Francis Fukuyama:
If I could respond to a couple of the comments, which were all, I thought, extremely useful. My general response is that this idea about middleware has not been fully worked out, and we're continuing to think about it. So, for example, how is the business model going to work? We don't know, we really don't know. There are a lot of different potential sources of revenue. I think you probably would have to have a regulatory environment that forced the platforms or advertisers to cough up a certain amount of money to make this viable. But that's one of the things that we're trying to figure out. The other one that both Dick and Nathalie brought up, this issue of blocking versus ranking, I think is a really difficult one. And I don't personally have a strong view on that, but let's just put it this way. Do you want middleware to actually prevent you from seeing certain content based on the choice you make? Or do you want it simply to be labeled? And there are arguments to be made both ways.
So for example, if Donald Trump becomes the 2024 Republican candidate, I just do not see any way that Twitter can keep him off of the platform. I just think that would mean one of the two major candidates being blocked by this private company in a presidential campaign. I just don't see how that's legitimate. But if you did actually have middleware, you could put him back on, and then you would have a variety. Our idea is that you could have multiple middleware providers, and you click the buttons for which ones you want to see. And if he says something really stupid or wrong or outrageous, you'll get an immediate correction of that. And that might be a way of preventing the blocking of legitimate political speech, at the same time putting it a little bit under control and softening the impact. But like I said, I don't have a clear, strong view about how to proceed on that. But I think it's a very important point.
I want to just get back to the large issue that Daphne raised at the end, because that's a central one. And I actually think that in the community of people that follow this issue about content moderation, this is not stated explicitly, but there is this division between people that genuinely want fragmentation and decentralization, and others that either have thrown in the towel and think there's no alternative, or actually think it's a good idea that these big platforms really do have this power-- they just want to see the power exercised in the right way. And there are arguments to be made in both directions. This metaphor of the loaded gun that Daphne mentioned was my way of thinking about it: the power is a loaded gun, and right now you may think the person picking it up is not going to shoot you, but a long-term institutional solution is to take the gun away and not to trust the good intentions of the person that might pick it up.
But let me just point out that there's a legacy media analog to this, which is public broadcasting, because many countries, mostly in Europe-- but also Japan and Korea-- have a public broadcaster, which for them is the authoritative source of true news, as opposed to fake news. And they're generally highly trusted. This is a method that we tried in the United States, but our public broadcasters kind of got taken over by the left and they're not regarded as impartial. But for the ARD and the BBC and broadcasters in a number of other European countries, especially in Northern Europe, they do serve as a kind of powerful tool that the mainstream elites have for guiding conversation and establishing what's true. But what's happened in Eastern Europe is, with the rise of these populist regimes in Hungary and Poland-- and I think this really began with Berlusconi in Italy-- you get a populist leader that's elected, and then they take over the public broadcaster. And all of a sudden they've got this big weapon that they can use; this formerly trusted nonpartisan information source becomes a very powerful tool. Putin has done this in Russia, in effect. And so, that's a real-world illustration of the problem of the loaded gun. Many people, I think, imagine that if we had this gun, we could use it-- good people would use it-- but I really have my doubts about whether, given the kinds of political forces out there right now, this is a safe long-term solution.
Nathalie Maréchal:
If I may, I'd like to respond, Francis, to your point that Twitter would have to, or should, reinstate Trump's Twitter account if he's the nominee in 2024-- because I have to really disagree with you there. There's no ‘must carry’ requirement for a platform to host any kind of individual or entity; there's no right to a platform, and certainly no right to the specific platform of Twitter. On the other hand, the First Amendment does protect Twitter's right to moderate its platform according to its rules. And there are plenty of debates to be had about specific rules that platforms have. But the specific rule that Trump was suspended for, I think, is perfectly in keeping with the public interest and with international human rights standards.
And to simply say, "Oh, he's the nominee from a major political party. Okay. We'll just forget that he's been inciting violence and spreading hate speech and undermining elections for years"-- I think that sets a really, really dangerous precedent. I mean, would we take that approach with the president of a different country? Say Nigeria, where Twitter has been blocked since earlier this year. The president had some tweets that were very clear incitement to violence and hate speech against a minority ethnic group in Nigeria. Twitter removed the tweets pursuant to the exact same policies under which it banned Donald Trump. And as a result, the Nigerian government banned Twitter, so it's still blocked on a technical level in the country.
And to its credit, Twitter has not caved and has not reinstated the president's account in order to regain access to the country, to that market-- a block that has come at great economic cost for Nigeria and has had a tremendous impact on the freedom of expression and access to information of Nigerians. But what's the point of having rules if the most powerful people in the world aren't subject to them? That's the very heart of the controversy behind Facebook's XCheck program that Frances Haugen revealed in the past few weeks.
Francis Fukuyama:
I don't think there's a legal obstacle to Twitter continuing to keep him off. I just think it's more of a political consideration. And obviously, if he went back on Twitter and then said the kind of stuff that incited insurrection and violence like he did on January 6th, they'd be justified in throwing him off again. But I just think, as a general rule, there has to be a certain degree of neutrality by these powerful private platforms when you're actually having a legitimate democratic contest. Now, we may all know that Trump is really not a real democrat deep in his heart, but I don't know. It just strikes me as an exercise of private power that is really not very legitimate.
Nathalie Maréchal:
There are plenty of other platforms, though, right? I mean, I don't think we need to spend... We have plenty of other things to discuss, but I just really disagree with you about that. The rules are clear, and the rules are completely compatible with the public interest and with international human rights standards. And as people are saying in the chat, Trump has never stopped, and he's not going to stop. So I think, actually, any neutrality and fairness would dictate treating him the same way as I would be treated, or you would be treated, if we were making similar comments on online platforms.
Richard Reisman:
Well, I think this is a really interesting example of where the short-term and the long-term views diverge, because right now we have these private platforms, and Nathalie's arguments are very sound in that context. But if you look toward a future where the systems interconnect and become a universal utility, where anybody can post into it, and then we have selective filtering services that control who sees what of that mass, then the argument shifts to the idea that blocking someone is an extremely draconian thing that should only be used for clearly criminal things-- inciting violence, child porn, stuff like that-- and should be used very sparingly. You rely on filters for the rest. And to Francis's point about labeling and filtering: already, so much stuff is filtered out anyway. You see only a small fraction of what you could potentially see, because the filters rank which handful of items end up in your window, or within a moderate scroll; everything else doesn't come unless you just scroll, scroll, scroll without interruption. So filtering can be effective if it's done well. And so, to me, in the long term, that's the way to filter out the kind of harm that Trump does. And then flow controls and things like that would be on top of it. But obviously we're not at that point yet.
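To make the ranking-versus-blocking distinction concrete, here is a minimal sketch of how a user-chosen middleware filter might label, down-rank, or block posts before they reach the feed window. The names (Post, FilterPolicy, apply_middleware) and the simple keyword matching are illustrative assumptions, not any real platform or middleware API.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: list = field(default_factory=list)
    score: float = 1.0  # base relevance score supplied by the platform

@dataclass
class FilterPolicy:
    block_terms: set     # content the user never wants delivered at all
    downrank_terms: set  # content delivered, but pushed down the feed
    label_terms: dict    # term -> warning or correction shown with the post

def apply_middleware(feed, policy, window_size=10):
    """Label, down-rank, or drop posts according to one user's chosen policy."""
    kept = []
    for post in feed:
        text = post.text.lower()
        if any(term in text for term in policy.block_terms):
            continue  # hard block: the draconian option, used sparingly
        for term, label in policy.label_terms.items():
            if term in text:
                post.labels.append(label)  # soft option: show, with a correction attached
        if any(term in text for term in policy.downrank_terms):
            post.score *= 0.2  # soft option: reduce reach rather than remove speech
        kept.append(post)
    # Only the top of the ranking ever reaches the user's window.
    return sorted(kept, key=lambda p: p.score, reverse=True)[:window_size]
```

The point of the sketch is only that the same pipeline can express all three responses Reisman and Fukuyama describe: removal, labeling with an immediate correction, and ranking that reduces reach without blocking the speech outright.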
Daphne Keller:
So, I mean, I think this gets to a systems design question, which is: if you have this system where there's a node that is Facebook as we currently know it, or Twitter or whatever, and on top of it they offer their flavor of moderation or ranking, and competing middleware providers offer other flavors, is the starting point that Facebook, as the central node, takes down content that is illegal and nothing else, and then feeds out all the legal content to the middleware providers to do with as they will? I think that's the most politically viable model, for sure. But if that is the model, then it's very different from the kinds of truly decentralized or federated systems that Corey or Mike might talk about later, or that you see with Mastodon or possibly with Twitter's Project Bluesky, where the idea is that there truly is no one central point of control, and different services can apply different rules.
It also introduces, again, the risk of a choke point: if the node-- Facebook, Twitter, whoever-- screws up and takes down the wrong thing, that ramifies out to everyone else; if they become vulnerable to influence from a government to quietly do things, that also ramifies out to everyone else. So there are trade-offs built in there. There's also a more ‘in the weeds’ design question: when different countries prohibit different speech, who is in charge of geo-blocking-- Holocaust denial in France, for example? Does Facebook layer in geo-blocking for illegal content on a country-by-country basis and feed that out as part of what it's distributing to the middleware providers? I think the answer is probably yes, but there's both operational complexity there and a very consequential design choice about whether there is this centralized point of control for purposes of legal compliance.
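For the design choice Keller raises, one could picture the central node as a thin legal-compliance layer: it applies country-specific removals once, then feeds everything that remains to competing middleware providers. The sketch below is an assumption about how that layering might look; the LEGAL_RULES table, the distribute function, and the example providers are invented purely for illustration.

```python
# Hypothetical per-country legal rules: country code -> predicate over a post's text.
LEGAL_RULES = {
    "FR": lambda text: "holocaust denial" in text.lower(),
    "DE": lambda text: "holocaust denial" in text.lower(),
}

def distribute(posts, viewer_country, middleware_providers):
    """Central node: drop content illegal in the viewer's country, then hand
    everything that remains to each competing middleware provider's ranker."""
    is_illegal = LEGAL_RULES.get(viewer_country, lambda _text: False)
    legal = [p for p in posts if not is_illegal(p["text"])]
    return {name: rank(legal) for name, rank in middleware_providers.items()}

# Example: two providers applying different flavors of ranking to the same legal content.
providers = {
    "chronological": lambda posts: sorted(posts, key=lambda p: p["time"]),
    "least_outrage": lambda posts: sorted(posts, key=lambda p: p.get("outrage", 0)),
}
```

The consequential question Keller identifies is exactly where something like LEGAL_RULES lives: at the central node, as in this sketch, or pushed out to each provider in a more federated design.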
Richard Reisman:
Those are all important questions.
Nathalie Maréchal:
I share that question, Daphne; I don't know how it would work either. But I keep coming back to your point that only proposals that can be operationalized can be implemented. And I'm still really stuck on how we make the existing platforms go along with this. Is there a mechanism to legally compel them? My sense is no, but maybe there's something I'm not thinking of. Is there a way to structure economic incentives such that they would want to? Are we just using peer pressure, or not-so-peer pressure? Because I don't think Facebook thinks of me as a peer, and I certainly don't think of Facebook as my peer. How do we force this to happen? Because if there's no way to make the platforms play along, we're having a super interesting intellectual conversation, but we need them involved to implement it.
Francis Fukuyama:
I think you have to use state power to regulate them, to force them to open up to permit this to happen. They have to be made to open their APIs such that a middleware provider could plug into them and provide this kind of moderation service. And they probably have to be compelled to give up a certain proportion of their ad revenues in order to support the business model. This isn't going to happen voluntarily. Jack Dorsey claims that he'd love to give away this power because it's such a headache for him, but in fact they're not going to do this voluntarily. So I think this is a case where you really do need a statutory intervention to make it happen.
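For a sense of what "opening their APIs" could mean in practice, here is one hypothetical shape the required surface might take, written as Python interface stubs. Nothing like this exists today; every name and method here (MiddlewareProvider, OpenPlatformAPI, the revenue-share hook) is an assumption used only to make the proposal concrete, not a description of any platform's actual API.

```python
from typing import Any, Dict, List, Protocol

class MiddlewareProvider(Protocol):
    """Interface a third-party moderation service would implement."""
    def rank(self, candidate_posts: List[Dict[str, Any]],
             user_preferences: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Return the candidates reordered, labeled, or filtered for one user."""
        ...

class OpenPlatformAPI(Protocol):
    """Calls a regulated platform might be required to expose."""
    def register_provider(self, provider_id: str) -> None: ...
    def fetch_candidates(self, user_id: str, limit: int) -> List[Dict[str, Any]]: ...
    def deliver_feed(self, user_id: str, ranked: List[Dict[str, Any]]) -> None: ...
    def report_revenue_share(self, provider_id: str, impressions: int) -> float:
        """Hook for the ad-revenue split discussed above; how it would be set
        is exactly the unresolved business-model question."""
        ...
```

The revenue-share hook is deliberately underspecified, since, as Fukuyama says, how the money would flow is the part that has not been worked out.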
Richard Reisman:
Yeah, that seems to be the case. I'm hoping that maybe Mike Masnick, who's been in touch with some of the people at Bluesky, might be able to comment on how that's going and what it's likely to be, at least as a demonstration project, if not a serious shift of control toward this kind of decentralization.
Nathalie Maréchal:
Daphne, as the law professor here on the call, do you have thoughts about the specific legal mechanisms that could be used for this?
Daphne Keller:
I had assumed, with Francis, that this could only happen through state action-- that it's not going to happen voluntarily, or if it does, it won't happen in the optimal way. So, for example, in the Digital Services Act in Europe, there are some amendments on the table in the Parliament to try to force this as part of the mandates, I think just for very large online platforms, but maybe for more. First of all, whether anything is politically realistic in the U.S. is its own question. And then, if Congress got its act together to pass whatever we all think is the most optimal version of this, would that be a First Amendment violation, taking away the editorial discretion of the platforms? These are open questions, but as you know, I've written a lot on how strong the platforms' arguments are that they have First Amendment rights not to be compelled to carry content they don't want to-- to take down Trump if they want to.
I think those arguments are pretty strong today. But in a universe where the platforms still get to have their say and just have to share resources to let other people have their say also, through alternate ranking mechanisms, those First Amendment objections become much less strong than they are now.
Richard Reisman:
In that vein, I've been wondering about this. The model that I see rests on the idea of speech versus reach, which I've also called freedom of expression versus freedom of impression: the idea that users have a right to control what's in their feed and how it's filtered. So I don't know if there's any First Amendment basis where it's already implicitly baked in that there's a freedom of impression for those listening. I understand that amplification is something that can be regulated, but only within very narrow limits. Or is it something that would take new law, or is it just impossible?
Daphne Keller:
I happen to have an article on this, so I'll answer again. Basically, under U.S. First Amendment precedent, if Congress wants to mandate that certain currently legal speech can't be amplified as much, to reduce its reach, that's just as hard as mandating that the speech be banned altogether. It faces the same level of First Amendment scrutiny.
Francis Fukuyama:
By the way, if I could just make one more comment in reference to Daphne's example about the Kurdish flag: I actually think that there's an international dimension to middleware that is a big selling point. Which is to say that Facebook simply does not have the political knowledge to adequately monitor the political speech of 180 different countries. They just don't, and they never will. One of the advantages of middleware is that if there are symbols being displayed that are controversial for one reason or another, there are certainly people in Kurdistan or in Turkey or in Iraq that do care about this stuff, and they will have the opportunity to offer a very specific service for people that speak their language and use Facebook in their territory, and they presumably will have the knowledge to actually interpret culturally what's really happening.
Now, the problem that it doesn't get at is the takedown problem: if the platform is actually not permitting people to see legitimate speech, as I think is pretty well documented in the case of India right now, middleware doesn't help you with that-- unless somehow it compels the platforms not to take things down and simply to have them ranked or labeled. But I really do think that we Americans tend to focus just on our own problems, while these platforms have become a big political problem in many different countries, and we have to think about solutions for them as well.
Richard Reisman:
Well, a quick take on that: if we get to the stage where there is this underlying utility infrastructure, and there are takedowns at that level, then there's a case that takedowns should be contingent. The content should go into a holding state, and there should be a process for appeal, redress, and explanation of what's going on, plus a timeliness requirement for some kinds of things, so that there would be limits on what a nation-state can do within this infrastructure.
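One way to picture that contingent-takedown process is as a small case record with an explanation, an appeal path, and a deadline, along the lines of the sketch below. The states, field names, and the 14-day figure are assumptions chosen only to illustrate the idea of a holding state with timeliness limits.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class TakedownState(Enum):
    HELD = "held"          # content withheld pending review, not deleted
    APPEALED = "appealed"  # the poster has asked for redress
    UPHELD = "upheld"
    REVERSED = "reversed"

@dataclass
class TakedownCase:
    post_id: str
    requested_by: str        # e.g., a platform or a national authority
    explanation: str         # required statement of which rule was invoked
    opened_at: datetime
    state: TakedownState = TakedownState.HELD
    review_deadline_days: int = 14  # the timeliness requirement

    def overdue(self, now: datetime) -> bool:
        """If review misses the deadline, the case should escalate or lapse."""
        return (self.state in (TakedownState.HELD, TakedownState.APPEALED)
                and now > self.opened_at + timedelta(days=self.review_deadline_days))
```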
Francis Fukuyama:
Right.
Richard Reisman:
So we're getting to our closing point. The only thing I would close on is that while there are lots of differences of opinion, it seems like, to the extent that we can influence legislators and regulators in unison to create an agency that can help sort this stuff out, that would be a productive thing.
Francis Fukuyama:
That was actually part of the recommendation in our report: we really do need a specialized digital agency, and this can't just be an add-on to the FTC or the Justice Department, because the technical knowledge about how you would open up an API, for example, or how you would regulate a machine learning algorithm that filters-- that capability just doesn't exist right now, I think, in our current regulatory setup.
Richard Reisman:
Experts could look to models like electronic mail and the financial services industry for how distribution works, how spam control works, how circuit breakers work-- all that kind of stuff.
Justin:
Dick, I want to thank you for running a great panel there. I'm aware that when I introduced Frank, I used the honorific of professor, but of course Daphne is also a lecturer at Stanford Law School, and Nathalie holds a PhD from the Annenberg School for Communication. So we're very grateful for all the expertise that's been brought to bear here today, and very grateful to you all for being part of this.