Why Independent Researchers Need Better Access to Platform Data

Justin Hendrix / Nov 9, 2025

This podcast is part of “Seeing the Digital Sphere: The Case for Public Platform Data” in collaboration with the Knight-Georgetown Institute. Read more about the series here.

Audio of this conversation is available via your favorite podcast service.

This episode was recorded in Barcelona at this year’s Mozilla Festival. One session at the festival focused on how to get better access to data for independent researchers to study technology platforms and products and their effects on society. It coincided with the launch of the Knight-Georgetown Institute’s report, “Better Access: Data for the Common Good,” the product of a year-long effort to create “a roadmap for expanding access to high-influence public platform data – the narrow slice of public platform data that has the greatest impact on civic life,” with input from individuals across the research community, civil society, and journalism.

In a gazebo near the Mozilla Festival mainstage, I hosted a discussion with three people working on questions related to data access and advocating for independent technology research: Brandi Geurkink, Executive Director of the Coalition for Independent Technology Research; Peter Chapman, Associate Director at the Knight-Georgetown Institute; and LK Seiling, coordinator of the DSA 40 Data Access Collaboratory at the Weizenbaum Institute in Berlin.

What follows is a lightly edited transcript of the discussion. Thanks to the Mozilla Foundation and to Francisco, the audio engineer on site at the festival.

Top left to bottom right: LK Seiling, Justin Hendrix, Peter Chapman, and Brandi Geurkink. Photos by Julie Anne Miranda-Brobeck/KGI

Justin Hendrix:

The first thing I always do is ask folks to just state their name, their title, and their affiliation for the record. Brandi, perhaps I'll start with you?

Brandi Geurkink:

Hi, I'm Brandi Geurkink. I'm the Executive Director of the Coalition for Independent Technology Research.

Justin Hendrix:

Peter.

Peter Chapman:

My name is Peter Chapman. I'm the Associate Director at the Knight-Georgetown Institute at Georgetown University in Washington, DC.

Justin Hendrix:

And LK.

LK Seiling:

Hey, I'm LK Seiling, and I coordinate the DSA 40 Data Access Collaboratory at the Weizenbaum Institute in Berlin.

Justin Hendrix:

Okay. I have to tell our listeners we're not in my normal podcast mode. I'm normally recording either in my office in Brooklyn or sometimes in my basement at home. We're in quite a better sort of situation. We're in a gazebo in Poble Espanyol in Barcelona, just outside of MozFest. Peter, do you want to describe for the listener what's around us at the moment?

Peter Chapman:

Sure. We are basically in a castle. There's the large MozFest main stage just behind us where we've seen speakers from Ruha Benjamin onward over the last two days of MozFest.

Justin Hendrix:

It's a fun spot, has been a good set of conversations over the last couple of days, and looking forward to this one. We're going to talk a little bit about data access. We're going to use the occasion of the publication of a new report from the Knight-Georgetown Institute, the Better Access framework, as part of the basis for the conversation. But then also bring in some recent work that Brandi and the coalition have done, including a report from that community as well, and hear a little bit more about what LK is up to, in particular.

But I just want to ask all three of you a basic question before we get going. This is a somewhat niche topic, data access. We've, gosh, spilled lots of ink on it at Tech Policy Press because I think of it as a fundamental and really important one. But I want to give each of you an opportunity, maybe incorporating a little bit about what your organizations do and why you work on this. If you think of this microphone as literally being connected to the ear of the listener, and you were going to whisper into the ear of the listener why data access matters, what would you say? Perhaps, Peter, I'll start with you.

Peter Chapman:

KGI is an independent institute, and we're focused on connecting independent research with technology policy and design. To do that, we need independent research, and access to data is simply fundamental for journalism, for civil society, for academics to understand the nature of conversation on online platforms. Online platforms shape what we know, how we connect, what we hear, what we amplify, and across a range of themes, they're fundamental infrastructure for modern civic life. The ability to understand these platforms, to understand conversations taking place on these platforms in real time, requires access to the public data that these platforms host.

Justin Hendrix:

Brandi, what about you?

Brandi Geurkink:

The coalition is really a movement of independent researchers who are working to build the power and influence of independent voices as really a counterweight to the technology industry in conversations about the impacts of that industry on our lives, in our communities, in society overall. If you look historically at virtually any other industry, take the tobacco industry, take the fossil fuel industry, scientists and researchers who have been empowered to ask fundamental questions about the impacts of those industries on our bodies, on our health, on the health of our natural environment have been the cornerstone of consumer protection law, of regulation of those industries, that has made our communities safer. That's what we need with technology.

Data access, in that way, is about fundamentally reshaping the power dynamic that exists so that independent questions that really matter to everyday people can be answered. Because if we don't have data access, the only questions that can be asked are those that can be asked by the technology industry, and they're going to ask the questions that matter to them. They're not going to ask some of the most fundamental, pressing, hard questions that you and I and people listening to this really care about. The results of those questions and the way in which they're presented are also going to naturally be biased towards the industry that's asking them.

That's why independent research is so critical and why it's fundamental to helping people to have a better experience on the internet ultimately. Because when we are able to actually ask these questions, then we can not only do what Peter was talking about in terms of really understanding the online environment, but we can also ask questions that help us imagine how things could be better outside of the constraints of what might be the best for the profit margin of a company.

That's why it matters to me. It's about democratizing power and access to information and the ability to ask questions that result in better experiences for the everyday person who's using the internet.

Justin Hendrix:

And LK, what about you? Maybe also in the context of making the Digital Services Act work. I mean, that's part of what you're doing at the DSA 40 Data Access Collaboratory. Why does this matter for making that regulation work?

LK Seiling:

I guess, in the first place, I think it is a prime example for studying how regulation translates into practice: to see, okay, we do have this law now, which is crazy also if you think about it. It's the first time we actually have a right to get access, not just to government data and public data from public services, but to data from privately run companies. I think, for one, it's an interesting research question to see how this translates and where the different hurdles might be.

But then again, and as Peter and Brandi already said, it's quintessential just to do basic research on it. We need to know how does it work, how does it not work, what else do we need? Yeah, I might be spoiling things a bit when I say that things do not work as we would want them to work.

Justin Hendrix:

Peter, I want to come to you and ask you a little bit about this report and this framework that you've just put out. In full disclosure, I played a minute role in helping to review the document, as did Mark Scott, who's a contributing editor at Tech Policy Press, and many other experts that you were able to pull together to work on this for what seemed like the better part of a year. It might have been longer. You'll tell me, I can't quite recall when things started, but you've just published this. What's this for, and what are the top lines?

Peter Chapman:

Great. Yeah, thank you. It's been about a year. Taking a step back about why we need a framework like this, Better Access: Data for the Common Good: research, as we've just heard, is being frustrated by these companies. We're seeing multiple avenues through which companies are cutting off access to public platform data at the same time that regulation here in Europe is, as LK just described, offering new opportunities for data access. Companies are pulling back from some of the tools that had previously been available.

Meta had acquired a platform called CrowdTangle, which provided real-time data analysis from Meta platforms. They ended that product in August of 2024. X had introduced significant new fees for its API. Reddit has changed the way in which folks can access its API, including the research community.

There are a couple of different motivations for this change. One, as Brandi described, is that these tools enable platforms to be scrutinized. If you look at the universe of research on platforms, Twitter and Facebook historically had the most liberal mechanisms for access, and they've been the most studied, so we know the most about those platforms. There is maybe a perverse incentive for platforms: by eliminating some of this access, they can restrict the scrutiny that they are exposed to.

At the same time, there's been an absolute rush for this exact type of data, public platform data, as generative AI models are actually built on this publicly available data on the internet. We've seen a rise of third-party tools. We've seen a rise of platforms trying to commodify this data that you and I, we all contribute in our online footprints.

Then thirdly, there was an ongoing challenge in the research community to ensure that research is done ethically and in privacy-respecting ways. The scandal around Cambridge Analytica in 2016, '17, '18 really exposed how this data could be used in pernicious ways, and the research community has been continuing to evolve ethical standards.

All of that is context for this group of 20 experts coming together, academics, folks from civil society, journalists, to identify and articulate a framework for public platform data as a sort of minimum expectation for the data that we need to understand online platforms. The group coined the term high-influence public platform data: data that, by virtue of its reach or its engagement, or the status of the speaker or the account, matters most for civic life.

That includes things like highly disseminated content. It includes government and political accounts, notable public figures like journalists or influencers, as well as business accounts and promoted content. This data impacts what we see, what we hear, what we know, and has an outsized influence on the information environment.

The group analyzed the research about the distribution of this content, looked at the power law dynamics on social media, finding that a very small amount of content makes up most of the views, reach, and engagement on our social platforms; tried to establish what that minimum expectation is; and then grappled with the trade-offs: how do you ensure meaningful access by researchers around the world? How do you ensure both proactive disclosure of this data from platforms and independent collection by researchers?

Justin Hendrix:

It wasn't necessarily an easy process. Not everyone agrees on every topic. What were some of the key challenges you felt like you had to come to consensus around, things that people didn't necessarily see eye to eye on or that felt like they needed more conversation than others as you tried to arrive at this common framework?

Peter Chapman:

I think a fundamental challenge is the uneven understanding of how these platforms interact with information environments around the world. The group really felt we fundamentally needed a framework that was as durable in a region of Congo as in a region of California. Historically, that's not how data access has worked. You've had global access, you've looked at dominant narratives mostly in Western societies, and you've not looked at what's happening in, say, West Bavaria as opposed to Germany as a whole. By creating or articulating these information environments, global, regional, linguistic, or geographic, we try to enable research that fits the needs of the journalists, of the societies, of researchers in different contexts. That was definitely a challenge.

Then there's this ongoing debate around trade-offs between privacy, researcher ethics, and public data; these are very difficult issues. The framework that we have articulated does not resolve the risks of private information being treated as public by platforms or researchers. But what we've tried to do is narrow that risk by leaving out the vast majority of public content, of content that's publicly available online, and focusing on these actors that, by virtue of their status or the dissemination of their content, have the greatest impact on what we see and know online.

Justin Hendrix:

One of the things that we're trying to negotiate here is really the practices of researchers, the ability they have to access information, and to be protected. One of the things, Brandi, that you talk about in your report is this kind of gulf between those who build the systems and those who live under them. In between are researchers who are trying to get access to the information, often unclear about what is legal and what is in accordance with the terms of service they might have to sign up to in order to have access to platforms. You say it's a crisis. What are the dimensions of that crisis, and how does some of this thinking around data access fit into that or help resolve that crisis?

Brandi Geurkink:

Yeah. I want to start answering that with an analogy. If you're part of a community living close to a river that's being polluted by a company with a factory situated on that river, there are a few things that you might do. There are amazing community organizations that do work like this: take a bucket over to the river, collect the water, send it to a local lab that might be able to test that water, and then take that lab result to a court and challenge the factory that's been polluting the river that you live right next to, that your community relies on for clean water, clean soil, the food that you eat, all these things. Because we have environmental regulations, certainly in the community that I live in and, I think, in many of the communities that those of us sitting around this table and listening to this podcast live in, there's something that you can do about it.

That is fundamentally not the case with technology. We have no similar mechanism right now that actually empowers people who use social media platforms. People who interact in some way with social media platforms are treated as data subjects for the benefit of the companies that are profiting from these technologies. That is fundamentally wrong, and we need to reshape it; that's the crisis that we speak about. I think that's ultimately the gulf: this dynamic that we're in, this fundamental inability to act that doesn't really make sense just because one is a physical river and the other is a digital environment that we're engaging with.

There are hard-fought choices that have been made to enable people to go to the river with the bucket and collect the water. There are regulations that enable people to hold a company accountable if that company is poisoning their community. All of those things have had to be fought for. We're at the beginning of that road, but the gulf is not different than it has been in so many other fights that communities have won over time as well.

I try to come back to analogies like that because the dimensions of the crisis, you can get lost in them. We're talking about weird lawsuits about terms of service violations for scraping. I'm thinking about all of the ways that activists trying to take the bucket to the river have been come at by industries that they've gone up against, and seeing the parallels in our space, because I think it's important to think about and to remember the overarching power dynamic that we're actually trying to go up against in this work. Then when you have that in your mind, you start to see the cease and desist letters, the lawsuits, the ways in which social media companies have blocked the accounts of researchers trying to do this work. You see it in that fundamental analogy.

I think that's what we need to realize: not to get too caught up in the details. I mean, we're researchers, so of course we're caught up in the details. But what is the overarching power dynamic that we are trying to push back against and equalize, and where do we fall within that?

Justin Hendrix:

LK, I want to ask you a question as well about what you're up to at the moment, in terms of testing the boundaries of the rights that are afforded under the DSA. Maybe for the listener's sake, just give us a quick rundown of where things stand. I mean, I was talking to you at this session earlier and asking just that basic question. This law's been in place for a bit. Has any data been liberated yet from a platform that researchers can study? If not, when can we expect that might happen?

LK Seiling:

I would be lying if I said that no data has been shared; I don't know if it has been liberated thus far. The DSA basically sets out two kinds of data access. One is set out in Article 40(12), which basically says that researchers can access what is called publicly accessible data if they fulfill a set of requirements, like they are independent from financial interests, or they can disclose their financial interests, and they can safeguard the data properly.

This is not further defined, which is interesting. This already lets the platforms define what it means. This is where Pete's report comes in, with regard to potentially expanding this definition, which has been rather narrow. Basically, all you get right now is the aggregated interactions on specific kinds of content. You get the number of likes, the number of views, you might get the comments if you're lucky, but those might already not be included. There are different amounts of data that different platforms share under this publicly accessible data. You could argue, again, that this is a much broader category, right? This is where I would see the first area for liberation: to try to push back on these initial boundaries that the platforms have set.

The other data access is set out in Article 40(4) of the DSA. It doesn't say non-public data, but it kind of means non-public data, because it just says, generally, you've got the right to access data. This has recently been specified by a delegated act, which actually came into force on the 29th of October, very, very recently. Since then, researchers can actually apply with their national supervisory authorities, the local digital services coordinators. The digital services coordinators then go out and check the research question and the data that researchers want to ask for, and researchers can theoretically ask for any kind of data. We're talking internal documents, we're talking individual exposure history, so what did individuals see. We're talking much, much further, basically all the kinds of data that platforms might have.

We're probably going to need to wait some time to see how this will pan out, because the regulators seem to be rather, let's say, careful; they know that if they fumble this start, it might be even more detrimental to the entire project than rejecting a few requests at the outset. I'm expecting the first decisions to come in at the end of the first quarter of next year. This is also going to be interesting because there are some requests which researchers have already put in. AlgorithmWatch, for example, has already put in requests under Article 40(4) DSA. But there are also requests which researchers are planning that take this non-public route while actually asking for things that should be public. We'll also see the regulators navigate this space, where they, again, engage in this kind of boundary work with the platforms to liberate the data.

I think this is where researchers and regulators need to at least communicate and coordinate in order to make this collaboration work, because as it stands right now, there is not a lot of data that is accessible. I'm not saying that it should be freely accessible to everyone; I'm just saying it should be accessible if you can provide the proper mechanisms to safeguard it. But this is not even the case, so we have a lot of work to do.

Justin Hendrix:

I wanted to ask you as well, what should we know about how we got here? I mean, what are the kind of precedents for this current framework? How did this come to be particularly in Europe?

LK Seiling:

I mean, Peter already talked a bit about it with regard to the Twitter API. To be honest, I don't think there are a lot of precedents to this, at least not in the formalized sense that the DSA puts in place. Yeah, what you could reference were these early programs like Tech Data, but then also Social Science One, right? But these did not serve the wider research community. This led to a situation where mostly US researchers got privileged access to data which would further their individual careers. Again, no shade; this is how the academic system works. But in the end, it did not contribute to a systematic change in how we come to understand these platforms.

I think that the DSA, in theory, marks a departure from that, because everyone who researches what are called systemic risks inside the European Union is eligible for data access. Which means that anyone, whether they're in Britain, the US, Australia, or even India, can ask for data access under the DSA so long as the research request is related to the EU.

Again, we've seen from the data that we've collected with the Collaboratory that the platforms do reject these requests, which we think is not interpreting the law correctly. I think we'll see to what extent the DSA can actually live up to that potential of democratizing data access.

Justin Hendrix:

I mean, this is the big question: can this work? Does this DSA model work? Is the mental model the one that Brandi's created for us, that we'll go to the river, get the bucket, bring it to the scientists, and change will come? Peter, are there other obstacles to that that you see in the near term? I know the framework is meant to help clear some of those obstacles, but what are the other obstacles to making that somewhat simplistic model that I've just described work?

Peter Chapman:

I think it's important to underscore that no one data access mechanism is going to answer the range of questions that researchers have about platforms. I think it's also important to mention that in the context of the DSA, many platforms have built new processes. Meta has deprecated CrowdTangle, but has built the Meta Content Library, which does provide some researchers access to data.

What we've tried to do in the framework is articulate three primary access mechanisms. The first is where platforms proactively provide data through a proactive data interface; this could be them providing it individually or supporting a third party to provide this type of access. Second, we envision a world where platforms respond to custom data requests from researchers. Researchers in a particular environment, looking at questions outside the scope of the proactive data interface, can access publicly available or high-influence public platform data through those requests for a data set or through an archive. Again, there's precedent for this: Twitter has hosted archives in the past, and SOMAR at the University of Michigan has data archives to understand different platform dynamics.

Then there's this independent collection piece, where researchers build their own tools to scrape or crawl data from these sites. There, I think it's important to underscore, with the rise of generative AI, the rapid proliferation of third-party tools providing just that. There's a booming commercial industry providing brands, generative AI developers, folks who can pay for access, access to this data. It's not like it's technically infeasible. It's just a question of whether researchers should be able to have independent, free access to this data to look at questions in the public interest.

When you look across emerging regulatory models, I think we expect that in the UK, data access is a focus. There's a task force being developed, and from what I hear from the discussions there, they're learning lessons about the DSA infrastructure and the costs of building some of this infrastructure. Because a different approach would be saying, "We want to clarify legally that terms of service do not prevent independent researchers from going and analyzing platform dynamics."

This is actually exactly what was agreed to with AliExpress under the Digital Services Act enforcement framework. They've said, "We'll change our terms of service. We'll enable public data research." We actually went the other day and looked at their terms of service, and there is an explicit carve-out for DSA access. That would give many NGOs and many researchers around the world confidence that going and looking at these public conversations is not going to get them caught up in legal wrangling or potential legal challenges.

Then in the US context, I think a lot of the focus is on PATA, the Platform Accountability and Transparency Act, at the federal level. There have also been state-level proposals, and increasingly a lot of those are oriented around AI and generative AI, since platforms are using algorithmic recommender systems to surface and distribute content, so there's a lot of overlap with some of the transparency efforts being discussed in the US context. I think there's no one-size-fits-all. The DSA, in my view, is probably not going to be replicated around the world, but there are multiple avenues that offer short-term, incremental opportunities to open up access while we also look at longer-term, more holistic solutions.

Brandi Geurkink:

Can I come in on the question of, is this going to work? Because I have a few thoughts on that. One is about the threat of corporate capture within this entire process. As I spoke about earlier, platforms are going to do everything in their power to resist giving researchers access to data. They have no incentive to make this happen. We've seen that pressure for them to do this voluntarily has failed, which is the recognition behind why DSA Article 40 exists: that we have to mandate this kind of data access if we're going to get it at all.

All of those systems that rely on a regulator, a state body, to intermediate have the potential to be captured by the corporations. Even if they're not captured, there's the issue of legal challenges. We can expect to see lawsuits filed challenging the requests on intellectual property grounds, on privacy grounds, on legal privilege grounds. We will see all of those things happen. So, is this going to work? We've got to be prepared for that, and then it might.

I think from the perspective of the research community ourselves, as researchers, there's also going to be kind of a fundamental culture shift that needs to happen, because we've been in a situation of "mother may I" with the platforms. It's been very much, as LK alluded to, seen as a really nice privilege if you get to have some access to data to write the research paper that you really want to write, one that advances your career and provides definite benefit to the public in those kinds of answers.

What the DSA offers with Article 40 is a recognition that this research is fundamental to the protection of European democracy and European societies, that researchers play that critical role in it, and that we actually have a right to access the information required to play this critical role vis-a-vis the public. To shift from data as a privilege to data as a fundamental necessity for us to do our jobs in service of the public interest is going to require a culture shift within the research community, to be ready to step into that role. A lot of that is what coalition members are doing amazing work to propel, but I just wanted to add those two things as additional dimensions to the question of will this thing work.

LK Seiling:

If I can pick up on that, I would completely agree. I think there are more cultural shifts that need to happen. I mean, while there is a culture of collaboration, it is not really embedded, because the structural incentives do not give you and your research group a professorship; they give individuals a professorship. I think that researchers are quite happy if they can do their research by themselves, but they're up against a lot of structural asymmetries, including with regard to the regulators. In this case, the supervisory or enforcement authority for these data access procedures is the European Commission, which is, in itself, a very politicized body, which has led to worries that it may sacrifice or compromise data access, maybe only enforcing some of the data access procedures.

Justin Hendrix:

Or only sort of enforcing the DSA altogether.

LK Seiling:

Exactly. I think that researchers need to come to understand themselves differently, in contrast with a longstanding tradition in post-Enlightenment research, basically, that you're standing outside of the context. You're looking in, you have like a bird's eye view on things, but now you're in the midst of it. You are political actors in it, and the evidence you produce might lead to change, but might also support the kind of political ambitions of the European Commission.

What I'm also trying to do with the work that we're doing at the Collaboratory is to sensitize researchers to this role and to get them to think about, "How do I position myself within this, and who are the people standing to my left and right that I can join up with to put more pressure on not just the platforms, but arguably also the regulators?" Because in the end, these are the ones who will have to either fine or put other sorts of pressure onto the platforms based on the evidence that we produce and the problems that we highlight.

Peter Chapman:

I mean, this brings me back to the reason why we supported this process to develop the Better Access framework, which was giving researchers, people who deal with this data every day, journalists in newsrooms who are reporting with publicly available platform data, the space to articulate in policy language what data we need and want from platforms to understand our information environment. The DSA doesn't solve this. Article 40(12) just says publicly accessible data in the interface. What is that? What does that mean? What are the privacy and ethical implications of that data? So we brought a group of researchers together to say, "No, this is actually what we need as a bare minimum understanding. If you're not providing this, you're not providing enough. You're not providing nearly enough."

Justin Hendrix:

I want to cast our minds just forward a little bit, maybe in closing here, and try to imagine a little bit of the future, what things might look like perhaps when we've got a few years under our belts on this. What do you think it looks like, LK? What do you imagine? Are there lots of PhDs being minted, of course, on data that's been provided by platforms through these mechanisms or collected independently through these mechanisms? What will it mean, I suppose, for the future of the way we relate to technology?

I keep thinking about artificial intelligence being at this beginning point, I suppose. It's almost like we're at a kind of inflection point. It feels like we're seeing a pulling away, almost, of what the industry knows and is capable of, in terms of producing new models and technology that affects society, from what independent researchers are able to scrutinize from the outside. What does the future look like in 5 years, in 10 years' time, if we get this right?

LK Seiling:

Yeah. I mean, if we get it right, PhDs will be minted for sure. But to me, this is not clear, especially if you look at the way that the US administration is behaving and the way that the European Commission also is engaging in... How do you say?

Justin Hendrix:

Tradecraft.

LK Seiling:

Exactly, tradecraft with these companies. I think that from what we see, it will be like a very, very time-consuming and resource-consuming uphill battle to twist every bit of data out of the platforms. I'm not sure if this will be sustainable in the long term.

We might also see a culture shift when the platforms do not feel that they are backed by the White House; they might then actually have more incentive to collaborate. I think that backing is driving some of the non-collaboration that we're seeing right now. If everything goes well, I think this is really an amazing way to open up and understand what we often call black boxes, and to do better regulation on it.

If it does not work, I think that the DSA still allows us to start to understand what platforms should provide. And the alternatives that might pop up may then have these ideas built into them from the start, really accounting for researcher data access, not because they are very large online platforms, but just because it is such a quintessential democratic function.

Either way, I think that now that it's on the table, like you won't get the genie back into the bottle.

Justin Hendrix:

Peter or Brandi, imagination, 5, 10 years?

Peter Chapman:

Yeah. I think in the last several years, as this access has been restricted, we've seen a really broad coalition emerge around tech accountability that we have not seen before. Speaking from the US context, where I'm from, you have parent groups on the front lines of these debates. We recently had an election in Virginia where the Democrats did very well, and reportedly data centers and technology infrastructure were an animating factor in driving people to the polls. We're seeing a broader coalition care about these issues really front and center. We've talked about how this is a niche issue, but it's a niche issue that provides infrastructure for a broad range of issues.

To the degree to which this community can respond to those interests, but also provide resources and opportunities to expand what we know about these black boxes, I think we're going to see increasing pressure for more disclosure, more information, more scrutiny.

Brandi Geurkink:

I want to talk about my vision for like 20 years into the future.

Justin Hendrix:

Go for it.

Brandi Geurkink:

I want to think more expansively than five years. I think some of what Pete is talking about, about accountability, is a beginning place, and it's maybe one that we'll start to see sooner than 20 years. Right now, maybe you see something happening in your own experience on social media. We've seen such harrowing reporting about how children are being impacted, for example, by this huge availability, all of a sudden, of chatbots, right? Maybe you see something happening in your child's own life, in their own experience, and you are able, thanks to independent research on this topic, to understand that you're not alone in that experience. There is a documented pattern of harm there, and there is something that could be done about it, something that could be different.

That's accountability. In my view, when people know those kinds of things and they trust that kind of research, they will make different decisions. They'll make different decisions for their own children, for their own communities, their own workplaces, but we will also demand better from our elected lawmakers to help create those safeguards that help us to have a healthier society.

I think that's the accountability piece, but I also think that there's this bigger vision that, to me, has to do with freedom and with better technology, and that is ultimately the vision. Because it's so wild that the only questions that we can ask right now are largely being asked by companies that think they're thinking big. They're not thinking big. They're thinking very small, because they're thinking about money.

When you start thinking about how can this experience of this technology be better for my community, my family, when you bring more people into the fold that are not just obsessed with money, you can build great things. You can build things that people actually want to be part of.

We're here at MozFest talking about the early internet being weird, hearkening back to that spirit. I think that there's a link here with data access and democratizing information, because it enables us to ask the questions that, if we're just thinking about making more money, we would never ask. I believe that we will begin to learn things and understand things that can actually help us to build better technology that serves people.

When I think about like the 20-year vision, I think about ultimately one in which we are using technology, but it's technology that we want to use. It's technology that we enjoy using and that makes like our lives and the lives of the people around us better.

Justin Hendrix:

Platforms that are perhaps built to be observed, or built to engage individuals in the science of studying them. We can imagine all of that. Let me ask the three of you to just tell my listeners where they can go to find your reports and your work, a quick shout-out to your websites and your social handles. Peter?

Peter Chapman:

Go ahead to kgi.georgetown.edu, and you'll be able to find the Better Access Report.

Justin Hendrix:

Brandi?

Brandi Geurkink:

Join the coalition at independenttechresearch.org.

LK Seiling:

Check out our work at dsa40collaboratory.eu.

Justin Hendrix:

I look forward to speaking to you all again sometime. Perhaps we'll find another castle in another wonderful European city. Always up for it. Thank you very much, and thanks to Mozilla for allowing us to use these wonderful Shure microphones.

Brandi Geurkink:

Thanks so much.

Peter Chapman:

Thank you, Justin.

LK Seiling:

Thank you.
