
Examining the Meta 2020 US Election Research Partnership

Justin Hendrix / Aug 2, 2023

Audio of this conversation is available via your favorite podcast service.

A unique collaboration between social scientists and Meta to conduct research on Facebook and Instagram during the height of the 2020 US election has at long last produced its first work products. The release of four peer-reviewed studies last week in Science and Nature marks the first of as many as sixteen studies that promise fresh insights into the complex dynamics of social media and public discourse.

But beyond the findings of the research, the partnership between Meta and some of the most prominent researchers in the field has been held up as a model. With active discussions ongoing in multiple jurisdictions about how best to facilitate access to platform data for independent researchers, it’s worth scrutinizing the strengths and weaknesses of this partnership. And to do that, I'm joined by one researcher who was able to observe and evaluate nearly every detail of the process for the last three years: the project's rapporteur, Michael Wagner, who in his day job is a professor in the University of Wisconsin-Madison's School of Journalism and Mass Communication.

What follows is a lightly edited transcript.

Michael Wagner:

My name is Mike Wagner. I'm the Helen Firstbrook Franklin professor of journalism and mass communication at the University of Wisconsin Madison.

Justin Hendrix:

And what is the title that you've used to refer to yourself as part of this unique collaboration between Meta and independent researchers?

Michael Wagner:

I am the project's rapporteur, which is to say that I have been observing many of the meetings that have taken place between the Meta researchers and the outside academic researchers, as well as some internal meetings that the outside academic researchers have had. And I've also had access to a variety of emails, working papers, the governing documents of the project and how those have iterated over time, the appendices and drafts of appendices, reviews the papers received from journals, all of that kind of thing.

And the task I really have is to be an independent arbiter of the project and describe how the process worked: whether Meta captured the process in its own interests and was able to get the outside academics to pursue things that Meta preferred or that made Meta look good, whether the outside academics were able to maintain the guardrails that they had set for themselves in the project, and whether the project might be a model for future industry-academy collaboration.

Justin Hendrix:

And it's that last point that I think we're going to spend most of our time talking about today. And I just want, for my listeners, to refer to some of the figures that you put in your summary piece in Science, to give them a sense of the scale of the effort. You mentioned you observed 350 virtual research meetings (talk about Zoom fatigue) and two days of in-person research meetings, all of that totaling more than 500 hours; 41 interviews with Meta researchers and staff members of the outside academic team; interviews with Meta employees, major social science research funders, and academic experts; as you mentioned, access to all of the material, from the working papers through to the code; and observations of team members at conferences, work sessions, et cetera.

I would add, I've seen you and met you at multiple panels that discussed whether this particular enterprise, in all of its enormity, is in fact a model for industry-academic research. And that's what I want to talk a little bit about today. In your piece in Science, which was titled Independence by Permission, you write, "I conclude that the team conducted rigorous, carefully checked, transparent, ethical, and path-breaking studies." But also that, "Though the work is trustworthy, I argue that the project is not a model for future industry-academy collaborations. The collaboration resulted in independent research, but it was independence by permission from Meta." Let's talk about that distinction. What brought you there?

Michael Wagner:

Well, it was all of those hours of observation and going through the different documents related to the project. Really, I think that sums up my assessment pretty well. What got me there was observing that, on the one hand, the outside academics got to write a number of papers that they would not have been able to do at all but for this collaboration. I think that's absolutely true. They would not have been able to engage in platform interventions, to survey the number of people they surveyed and the number of times they surveyed them, to link other kinds of data into the project. That just probably was not possible in any other way, if we think about the starting point of the project in the spring of 2020.

So, the outside academic researchers had control rights over the research design of the project. They had control rights over the way that analyses were interpreted and the ultimate framing of the papers. But Meta could say no at a few different kinds of junctures, one of which was privacy and legal. So, if a user's privacy might be in jeopardy, or if a paper would violate some legal agreement Meta had or a regulatory requirement from a government, then the researchers wouldn't be able to do something. And Meta did not allow the researchers to think up platform interventions that would perhaps augment the platform or try something new.

So, rather than just doing an experiment like the researchers did, where they said, let's have some people deactivate Facebook and Instagram, or let's have them see a reverse chronological feed as compared to an algorithmic feed, some researchers in the beginning were saying, well, hey, wouldn't it be fun if we could try to invent different kinds of interventions that maybe Facebook could then use, to see if they actually produce problems of polarization, or improve who believes things that are true, those sorts of things? And Meta wasn't on board with that. They didn't let researchers think of things that would create new engineering opportunities to change how Facebook worked.

Justin Hendrix:

So, you say that for social science research about the effects of social media and their use to be truly independent, you'd have to see the research meet a certain set of criteria. And I want to go through each of the criteria that you set out in your piece in Science, and just ask you to comment retrospectively on this particular collaboration and the extent to which it either met those criteria or did not. So, the first one was: must not be wholly reliant on the collaborating industry, I assume, or company, for the study funding. In this case, did we meet that criterion?

Michael Wagner:

No. Meta paid for the study. And the outside academics decided in the beginning that, since there was going to be an intense amount of scrutiny about this project, they wanted no entanglement with Meta. So, they took no money from Meta. They didn't do what many folks in the academy do when they collaborate with social media companies and become a limited-term employee or something like it, so that the company pays them, which then allows them to have access to the raw data. That did not happen here. They thought it would be best from their point of view to have no financial entanglement with Meta, and therefore no financial stake that was consistent with Meta's in terms of what the results of the project were.

Justin Hendrix:

Next, you say the researchers must have access to the raw data that animates analyses.

Michael Wagner:

So, they didn't have access to the raw data of individual users on the platform, because they did not become limited-term employees or some other kind of employee of Meta in this case. So, they didn't have access to the raw data. They had access to code. There was lots of code checking. There were things they could do with Meta researchers, sharing the screen in real time to see what would happen if different things were run, that sort of thing. But they did not have what they all normally have when they do work, which is the data on their computer that they can then begin to analyze.

Justin Hendrix:

Next is, must be able to learn from internal platform sources about how the platforms operate. And I would assume in this case, in some of the questions that are put forward here, there is in the public domain evidence that Meta had done its own research on some of these questions. Perhaps not specifically, but certainly in general.

Michael Wagner:

Yeah. Meta does a lot of internal research and had been interested in many questions that were quite similar to the ones the outside academics were interested in. And the Meta researchers know that, to be taken more seriously in the academic world, in the policymaking world, and in the public domain, they need to have their research perceived as credible, which I think is the primary motivator to partner with the outside academics. And so, they really felt that there needed to be an opportunity to show off what these researchers can do, both within and outside of the organization, and have that research be taken seriously by all of the different people who might be interested.

Justin Hendrix:

I guess just going back to that criterion though, 'must be able to learn from internal platform sources about how the platforms operate,' do you think they met that criterion?

Michael Wagner:

They learned a lot about how Meta operated that they did not know before, in terms of how long data is stored, what kind of data is stored, how it is gathered, how it is structured, how it is organized, and what is required to put that data into a format that can be analyzed in social scientific research, which is not the structure it is gathered in. They learned all of that sort of stuff. And they would not have been able to do it but for the partnership. The area that is still a black box, and I point this out briefly in the piece in Science, is that some former Meta employees I talked to told me that the Meta researchers were committed scholars, committed research professionals. And when the outside academics asked them questions, they would give them truthful and precise answers.

But the former Meta employees I talked to also said it's really unlikely that the Meta researchers would volunteer anything that they knew. And so, if a really precise question was asked, and the answer to that precise question was no, they would answer it no. And the former Meta employees said there might be other things that the researchers could do, and knew that they could do, but wouldn't volunteer that information to the academics. Now, did that happen? I don't know. It's impossible to read hearts and know what the Meta employees knew. There were some things that happened that were consistent with that explanation. For one of the papers that was published in Science, the lead researchers wanted to have access to network data.

I quoted an academic in the Science piece saying something along the lines of, if you want to understand how social media works, you have to understand who follows who. And Meta wouldn't provide that individual level data to that team. After the paper was accepted into Science, one of the outside academics was lamenting, oh, it would've been nice to have the network data. And the Meta researcher said, oh, well, we could have done network data. And the outside academic, and I'm sorry for the clunky language of the Meta researcher and the outside academic, but that's just how it is. The outside academic said, well, you told us no. And the Meta researcher said, oh, well, we have different kinds of network data in another paper we're working on with this outside academic team, with other co-authors.

And so, whether that's an example of Meta researchers not being completely forthcoming or just having a miscommunication, I can't know the answer to that. But it's consistent with the story the Meta former employees told about how outside academics could learn about internal workings. And so, on the one hand, the outside academics learned a ton, just a ton about how Meta works. And on the other hand, I don't think that they learned everything. That's also probably an impossible bar to set.

Justin Hendrix:

So, I suppose that that could mean that when the outside researchers ran into a methodological dead end or some kind of problem that perhaps the Facebook researchers had already seen in looking at some of the same types of problems, it might mean that the answer wasn't necessarily volunteered.

Michael Wagner:

It's possible. My observations of research team meetings, though, suggested that, if the concern was a methodological one, the Meta researchers were, to my reading, ready and excited to volunteer their own suggestions. And so, substantive methods conversations happened a lot across the teams. They had a lot of disagreements about methods to use. Ultimately, the outside academics got to pick. Sometimes they were persuaded by advice they got from Meta researchers, sometimes not. But I didn't notice any holding back in terms of methods. I only saw potential evidence of holding back when it came to data access.

Justin Hendrix:

Next, you say that researchers must be able to guide the prioritization of workflow. What do you mean here?

Michael Wagner:

So, there are 17 papers. You can't write them all at once. You have to write them in an order. The first four, judging by a lot of the headlines that we have seen since the papers were published, produced framings like: Facebook's algorithm is influential, but doesn't necessarily change beliefs. Tweaking Facebook is no easy fix for polarization. Does Facebook polarize users? Meta disagrees with partners over research conclusions. That's one. Another is: so maybe Facebook didn't ruin politics. So, a lot of the framing has been, Facebook's not the problem here. And the first four papers are the on-platform experiments, where some people deactivated or had their feed changed or something like that.

And the Meta researchers felt that these were the papers that were probably the least likely to have large substantive effects. These are things they told me in interviews over the last three years. They're also the papers that they recommended prioritizing to get done first. A cynic could read that to say, oh, Meta is goosing the big splash: there are a lot of null results here, or smaller effect sizes, or things that at least don't pin explanations for all that is bad in American politics on Facebook's shoulders. You could also read that as saying, the experiments are the easiest to analyze. They don't require merging in the survey data. They don't require merging in other kinds of voter data. They don't require all the other kinds of things that make analysis take longer.

And experiments are causal analyses, which are more likely to get accepted in a high-flying journal like Science. And so, you can't necessarily say that there's definitive evidence that Meta was trying to have the big splash be a nothing burger, even though the data is somewhat consistent with that explanation. But you can say that Meta, when they would organize conversations about workflow, would say, well, we can get these papers done right away and sent to Science, and these other ones are going to take a lot longer; which do you want to prioritize? Most people, I think, would say, let's prioritize the ones that we can get done sooner. And that's what the outside academics did.

It could also be that they were prioritizing the experiments because they thought you could do them faster, or maybe they were being strategic about where to publish as well. All those things could also be true, but you could have prioritized in other ways. You could have prioritized the paper that does the comprehensive revelation of the Facebook information ecosystem, which is a much bigger lift in terms of data and analysis and writing. And it's also a paper where the lead author was an assistant professor who needs this paper to get out for their tenure case, instead of papers where more of the lead authors were already in more senior positions. So, there are other ways to prioritize things. And it was certainly a joint agreement between both sides, but it was one that Meta guided. And I think that that's something that outside researchers will want to be mindful of if future opportunities to collaborate with industry come before them.

Justin Hendrix:

I certainly want to get into the reaction to the papers and to the company's own statements about the results as well. But before we move on from these criteria, you also point out that some of the project structures that were appropriate to US-based faculty, and in this case, I believe most of the faculty, if not all, are US-based, are unlikely to apply to other parts of the world. This also signaled to me another aspect in which the company is essentially setting the agenda, which is defining both the geographic and the temporal bounds of its collaboration. How did this affect the workflow or the agenda of the researchers?

Michael Wagner:

How did the fact that it was just in the US affect their workflow?

Justin Hendrix:

I suppose that's what I'm asking. When you think about the question vis-a-vis the independence of the overall intellectual effort, how did the constraint around it being about the US, and about a dataset within a certain timeframe, potentially change anything about the outcome?

Michael Wagner:

Well, I think it was focusing. If you were studying a bunch of different elections, which don't all occur on the same calendar, it would be much more difficult and disparate, and you would need experts from different regions to do that work. And they were already writing 17 papers and were already three years into the project. And so, I think that limiting it to one country certainly managed workflow. You could imagine another model where they said, we're going to do two studies and we're going to do them in six countries. So, let's prioritize the two things we want to know and do them in a bunch of different places, or in 12 countries or something like that.

But I don't foresee a way they could have done a dozen-plus papers in a variety of countries without probably a quadrupling of Meta's investment, which I just don't know if they... I mean, they certainly could afford it, but I don't know if they would want to do it. But they could have done a more limited number of analyses in a bunch of different countries. They could have chosen that kind of model as well. They limited it to the US, to scholars who were already connected with Social Science One. On the one hand, that's efficient. On the other hand, that's people you could argue are pre-vetted by Meta.

Justin Hendrix:

I suppose I'm asking, because you point out, of course, that, "The collaboration has taken several years and countless hours of time, limiting the ability of the outside academics to pursue other research projects that may have shaped important public and policy conversations." I also recall hearing another researcher, Rasmus Kleis Nielsen, talk a little bit about this idea that there have been, of course, a number of elections across the world and democracies that are more fragile than the US. We haven't seen Facebook put this type of effort in those places. So, I guess I'm asking generally about the extent to which this enterprise somehow constrained the general imagination of the researchers involved to this specific context.

Michael Wagner:

Well, I think that the researchers who were involved, not all of them, but most of them, study American politics. Some, like Josh Tucker and Rebekah Tromble, as examples, also have work in other countries that is more than a one-off; they do that kind of work regularly. But I think demanding that researchers be able to imagine electoral structures and party structures and media structures in a variety of different countries is probably too much of a demand. I think the way the international set of questions comes up, including the ones that I've seen Rasmus raise too, is that one of the guardrails the academics set for themselves is: we're not going to get paid.

Lots of researchers in other places could not do that. They could not forego money and have their own private resources, and when I say private, I mean at their institution, to hire a PR firm, to hire research assistants. That's not something a lot of other folks who are domain experts can afford to do. And so, there's an issue of access with respect to one of the guardrails the outside academics in the US case set for themselves, one that I don't think you'd want to emulate in other places. So, that's one reason I don't see this as a model: I don't think that all of the guardrails the outside academics set for themselves would apply everywhere else.

Justin Hendrix:

Thank you for helping me understand that. So, I do want to talk about the reaction. According to The Wall Street Journal's Jeff Horwitz, we know that some of the researchers on the project took issue in particular with Meta's characterization of the findings. Did you read Clegg's statement? Would you have similar concerns?

Michael Wagner:

Yeah. As I told Jeff, the journal Science also objected to his interpretation, Nick Clegg's interpretation. And I said to Jeff, which he published in his article, that Science was right to disagree. I think that some of the claims in the statement that Clegg made go beyond what these papers show and what the larger body of research about social media and democracy has to say. Meta wanted to release Clegg's statement before the papers came out. And the PIs on the outside academic side, Josh Tucker and Talia Stroud, vociferously objected to that. And Meta held back and didn't release Clegg's statement until later. The outside academics had their own PR team. They did not and would not meet with Meta's PR team. They didn't want Meta's PR team involved in their discussions about how they were going to characterize the analysis.

And that complicated communication coordination, because they've been collaborating with Meta researchers for years. These people have, in many cases, come to have trusted collaborative relationships, and they wanted to coordinate: hey, if a reporter asks a question, is it better for Winter Mason at Meta to answer, or is it better for Andy Guess, one of the outside academics, to answer? How do we want to steer a reporter to the best person to answer the question? Navigating those issues, while also navigating the larger issue of Meta PR doing what it's going to do, was a point of concern for the outside academics. And I would say the Meta researchers also understood that. They didn't push; they dutifully raised what Meta PR wanted to do, the outside academics said no, and they moved on.

Justin Hendrix:

So, just to focus in a little bit on what Clegg was trying to assert, he said the four studies "add to a growing body of research showing there is little evidence that key features of Meta's platforms alone cause harmful affective polarization or have meaningful effects on these outcomes." So, Clegg has repeatedly over the last few years set up and knocked down this straw man, this idea that some folks out there are saying social media is the sole cause of polarization. And he seems to suggest in this statement that this is just another set of bullet points in this growing body of research. Is that what's happened here?

Michael Wagner:

I don't think that's happened here. I mean, we've had measures of polarization in the United States wax and wane since the founding of the Republic, which preceded the development of social media by a couple of centuries. And so, I don't know anybody who is serious about questions of polarization, extremism, political identities, and how those identities foster dangerous problems for democracies who says, well, social media is the problem here, or is the cause here. People don't really say that. And so, to make an argument that the papers say Facebook's not the silver-bullet problem is not surprising, because no one was arguing that going in.

I think that some of the outside academics thought there might be some larger effects in the platform experiments. And I think some thought that polarization levels might be reduced for those who deactivated, or knowledge might improve under some experimental conditions as compared to others. And some of those things didn't quite materialize, or didn't materialize to the extent that they thought they might. But it's also the case that doing a study in an election environment is probably the time you're least likely to find a reduction in polarization.

Justin Hendrix:

I suppose there's also a question in my mind about duration. Just three months, in such a highly important moment from a political point of view, strikes me as unlikely to produce terribly much change at all.

Michael Wagner:

Yeah. I mean, in a perfect world, you would've started the study two Januaries before the election, and had some people do the experiments early on and some do them later, and seen whether it's the timing that makes the difference, or the contours of the campaign, or whether it's an interaction between those things and what social media platforms offer their users. But the opportunity came in the spring. And I came on board in June of 2020 to start observing meetings. And they were just in a race to design studies, get them pre-registered, and get IRB approval from their universities and a private IRB firm to get the studies going. And so, the practical problems, I think, also guided the nature of this.

Justin Hendrix:

So, as you've already pointed out, much of the media reporting on the first four papers, with some notable exceptions, has seemed to benefit Meta and the narrative that Nick Clegg has espoused. Do you believe that the project has been undermined by the company's distortion of the research findings?

Michael Wagner:

I think it's too early to tell. I don't think the company's framing helped in any way. I think the one thing the outside academics have going for them with respect to this question is that many people who are reporters, politicians, party activists, civic activists are already pretty skeptical of things that Meta tells them. And so, the framing is probably not likely to affect those who are the most committed and most involved. But for the people who pay occasional attention to what's happening in the news or in politics, and who see a headline that says, oh, a bunch of independent researchers found that Facebook didn't cause polarization, oh, interesting. There could be an effect.

But as you pointed out in your comment about duration, how long the effects of this splash last is, I think, an open question. There are still at least 13 papers in the pipeline that will continue to come out. Some of those are less likely to make Meta look as good as these first four papers have been framed to make Meta look, by Meta and some news reporting. And so, I think it's an open question, but yeah, it's an open question.

Justin Hendrix:

This is all taking place against a fairly high-stakes international policy discussion. There's a lot going on. The EU is trying to decide how to hold platforms to assessments around systemic risks. There are questions about risk assessments in the UK online safety bill. And then there are rules about researcher access to platform data under the DSA; that will be a requirement, though all the specifics are still getting worked out. You've got proposed legislation in the US, the Platform Accountability and Transparency Act. Have you thought about the policy implications of this research model? Does it tell us anything about how researcher access should work under regimes like the DSA?

Michael Wagner:

I think it's moved us in a productive direction. And I think one other part of the project hasn't gotten much play so far. In my first draft of my Policy Forum article in Science, I mentioned this, but word limits meant that I ended up having to cut a bunch of stuff, and this is one of the things I cut: some of the outside academics on the project are longtime critics of Facebook and Instagram, or Meta, or social media platforms more generally. And they were participating, I think, as observers, and participants, and as folks who were interested in trying to figure out how we can help design better regulations and policies to provide researcher and public access to data. And so, I think some of the lessons that we've learned are that researchers need more unfettered access to the data.

Companies will argue aggressively and confidently that, hey, we need to protect the privacy of our users, we need to abide by current regulations, and it's difficult to trust a bunch of academics who aren't affiliated with our company with data that is this sensitive. The workaround in this project was: let's collaborate with Meta and have Meta keep access to that data. I think that one thing that would really help is if there could be regulations devised that protect workers at social media platform companies, especially those who are doing research, so they are able to share more data and to speak more forthrightly about internal workings inside the company.

This, of course, is not something companies are going to want, because it just puts them at greater risk of bad news being learned about them. But those researchers are good social scientists, and they need more protections to foster better and more open data transparency. And I think it's also worth starting to put minds together in a serious way to think about issues of privacy and how those issues can be solved, while at the same time providing raw data to researchers who can do work independent of what the social media companies might want, or do, or pursue on their own, or spike if it's bad for them.

Justin Hendrix:

And I certainly agree that some of the individuals involved in this, I'm thinking of folks like Rebekah Tromble, have put an enormous amount of thought into what these protocols should look like and have certainly moved the ball forward. And I'm certain this has helped her and others involved in the project to think through potential pitfalls. So, it will have great value in that way.

Michael Wagner:

Yeah. I think one thing they're learning also relates to the kind of data that you can have more access to. Doing qualitative analyses of what people say in posts, I think, really makes Meta nervous, for privacy reasons plus public relations reasons. Whereas aggregations of people liking misinformation posts or sharing untrustworthy content are a little bit easier to share. And one of the things some of the researchers have wanted is to build classifiers to understand hateful language or coordinated inauthentic behavior, to do more qualitative dives into Facebook posts and Instagram posts. And that's an area where there's been more reticence from Meta.

Justin Hendrix:

One of the things that Clegg's spin on these results makes me concerned about is the extent to which the platform may have learned any lessons from the results of all of this research. And of course, you can't disclose unpublished findings or what may be coming. But do you have a sense that the platform itself has learned anything from these results that may be useful in the 2024 election cycle in the United States?

Michael Wagner:

I think that the time it has taken to get these first four papers out has diminished the likelihood that major platform changes that are direct results of what we've learned in these studies will occur before the election. I also think that the original set of guardrails, which prohibited Meta employees who were not a part of the research team from knowing the results until a short period before publication, prevented that. So, the first time Facebook brass were supposed to have access to these first four papers was a few weeks before the publication date. And so, it hasn't been the case that Meta researchers were sharing the results inside the company. Just as when outside academics go to conferences, and you've been to these too and seen them, they present their research designs and say, we've agreed not to share findings until the papers come out. Because it's taken so long, they've amended this rule.

And now that some papers have come out, they're going to allow researchers, if the lead author, who is always an outside academic, wants to, to go through the privacy review process at Meta and say, I want to go to a conference and I want to share results, and here are the slides I want to show. And then the privacy review team is supposed to look at those and ask, does any of this reveal individual identities or violate some kind of regulation we have to abide by? Then they say, okay, go ahead and do it. But turnabout is fair play there. And if the academics get to do that, then Meta researchers get to do that too inside the company. So, it's possible that Meta executives and others at Meta will have access to some of the in-progress papers earlier than they had access to these first four papers, in terms of what they are able to learn and then decide what, if anything, to do about it going forward.

Justin Hendrix:

Again, without asking you to disclose any findings of papers that are to be published in the future, I know that the duration of data access was extended following the attack on the Capitol on January 6th, 2021. Can we expect research results that will have some bearing on understanding Facebook's role in that event?

Michael Wagner:

The research design originally had, I think, five waves of panel survey data of Meta users and others. And then they extended that to a sixth wave, to come after the January 6th attempted dissident coup at the Capitol. So, there is the opportunity to look at a large number of consented Facebook users and their attitudes pre and post that event, across a variety of questions. And so, they're set up to do some of that work.

Justin Hendrix:

So, I guess we'll leave that one as a cliffhanger and see what happens to come out of these next dozen studies.

Michael Wagner:

To be continued, I suppose.

Justin Hendrix:

I'll ask you one last question. When it comes to the question of social media's impact on society, on our politics, are you, Michael Wagner, sleeping any better at night on these issues after having been part of this process?

Michael Wagner:

That's a good question. That's a good question. Let me think about this for a second. I'm sleeping differently, not better or worse. Some of the things that I was concerned about, I'm less concerned about. Things I didn't know to be concerned about, I'm now more concerned about. I am heartened that there is such a large number of researchers and journalists who are interested in these issues and are willing to push on them. I am impressed by the quality of the researchers who work inside platforms. And I'm sensitive to and worried about the constraints that they face, and the constraints that fall on outside academic researchers who want to understand how social media interacts with public life across a variety of domains. And so, I'm sleeping no better. And I don't know that I'm sleeping worse, but it is different.

Justin Hendrix:

Well, I suppose I'll rest for now with my questions. And I thank you so much for taking the opportunity to tell my listeners about your role in this extraordinary project over the last three years.

Michael Wagner:

Great talking to you again, as always.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
