New Tools Move Faster Than New Rules: Bridging Science and Law for a Better Technology Ecosystem

Jed Miller / Dec 13, 2024

Image: Yutong Liu & The Bigger Picture / Better Images of AI / AI is Everywhere / CC-BY 4.0

AI and algorithmic tools have been installed in our lives and habits faster than regulatory, liability, or public-awareness safety nets could be put in place. From widely known examples like police facial recognition and deepfake videos to less visible uses of AI in health care, hiring, and content moderation systems, the new tools seem inescapable even while their full implications remain mysterious.

It’s hardly unprecedented for new technologies to spread faster than new rules. Almost seventy years passed between the release of the Ford Model T and the creation of a federal speed limit in the United States. However, the speed of AI adoption and the frenzy of the AI debate make the need for coherent protections based on reliable research especially urgent.

Researchers and regulators are racing to understand the impacts of these new tools in order to set the right boundaries. In September, the United Nations announced a new blueprint for AI governance, the result of a one-year consultation by an interdisciplinary group of experts. Regulatory efforts are underway at the national and regional levels in the US, the EU, and around the globe.

The pace of rule-making is usually slow, and in the era of AI hype, rule-makers face the combined challenge of multiple jurisdictions, lobbying pressure from Silicon Valley, and conflicting AI narratives ungrounded in any agreed norms. But given the accelerated adoption of AI, scientists and rule-makers are seeking to accelerate their own traditionally deliberate processes. The question of how to measure the impact of digital tools has never been less “academic.”

The work of turning technology principles into rules and rules into practices must be as interdisciplinary as the high-level dialogues among government, civil society, and industry have been. To support that goal, the Citizens and Technology (CAT) Lab at Cornell University and the Knight First Amendment Institute at Columbia University held a closed-door workshop early this year to strengthen cooperation between technology researchers and public interest litigators.

CAT Lab and its partners believe that emerging AI practices need a stronger grounding in computational science—and in the experience of communities impacted by tech design and tech policy. The workshop on “Platforms, Causality, and the Law” has sparked new conversations between researchers, litigators, and advocates.

The workshop’s insights offer several actionable lessons about how to build and maintain bridges between researchers and legal advocates, between science and law. Based on conversations with participants and presenters, this note draws three key lessons from the workshop: about the mismatches in pace and sequence between science and litigation, and about the legitimate (and necessary) cultural differences that lead researchers and lawyers to think differently about causality and evidence. Although these communities of practice diverge in their approaches, they share an impatience for stronger interdisciplinary coordination.

“If we can harmonize the provision of reliable, statistical evidence between science and law,” says CAT Lab founder Nathan Matias, “it will help to build stronger tech accountability cases and to train future litigators, researchers, and community scientists to develop usable evidence.”

Like regulators, lawyers and judges rely on scientific evidence to guide their deliberation. And because research and law themselves move at different paces and with different vocabularies, practitioners in both sectors see a growing need to bridge their disciplines, so better coordination can lead to better technology tools and technology rules.

Lesson One: Research and litigation have different timing.

Many of the tensions between scientific practice and legal practice are more about timing than about conflicting analyses. Litigation is time-bound, and policy advocacy is, as one researcher put it, “opportunistic.” But researchers see themselves as more than expert witnesses. They seek to contribute at every stage of the legal process—from discovery through judgment and remedies. So legal advocates are trying to learn more about how science investigates causality, how harms are measured, and how long scientific measurement can take. By sharing practices, scientists and litigators can get in sync.

“You feel like they’re asking you to get out ahead of your research. Sometimes you have to say, ‘No, we have to wait for this project to be done.’” — Jacob Metcalf, Data & Society

Embedding science deeper in the “stack” of technology governance and design is a challenge. Science establishes its facts at a deliberate pace, usually erring on the side of uncertainty as it connects cause and effect. Rule-making—especially in the legal process—demands a minimum level of proof in order for evidence to determine accountability.

Platforms, Causality, and the Law: Finding Shared Vocabularies

CAT Lab and the Knight First Amendment Institute developed the workshop to strengthen the links between expert science and legal advocacy and to learn how the necessary differences between the two disciplines can be marshaled as a constructive tension, instead of a barrier, in the quest for more responsible technology decisions.

The workshop took place in January 2024 amid the tumultuous debates sparked by the launch of ChatGPT. Lawsuits and scientific studies move at a different pace than startup investments, government declarations, or media frenzies, but the outcomes of litigation and research are still foundational pieces of technology governance. So workshop participants gathered eager to learn from each other at a moment of historically high stakes.

The event combined show-and-tell (academics and litigators presenting their fields to each other) with formative efforts to translate each other’s approaches into shared needs, shared vocabularies, and, hopefully, a shared vision for stronger cooperation.

Three tracks focused on the impacts of technology and surveillance on freedom of expression; on mental health, especially for young people; and on algorithmic discrimination and its remedies. Each track looked through the lens of causal connections between tools and harms and what it means to demonstrate or prove those connections. As a research hub, CAT Lab works to ensure that the communities that experience technology’s impacts are always at the center of how those impacts are measured.

The workshop’s title reflects the importance of causality in assigning liability for harms related to online systems. To help illustrate their work and the relevant challenges, participating litigators talked about the standards they need to meet to show evidence of harms and to attribute those outcomes to software products or online platforms.

If an online comment is racist, for example, what is the burden on a plaintiff’s attorney to prove the system that distributed the comment—or failed to flag or remove it—is the “cause” of the resulting harm? When moderation systems bombard human moderators with violent imagery, what research on trauma will be the most persuasive in legal advocacy for tech workers?

Lesson Two: Science and law need a common framework for assessing technology’s impact.

Litigation is structured to determine accountability, and litigators naturally enlist science in the service of attributing causality and blame. But “evidence” has different connotations and burdens in a research paper than in a courtroom. Strong evidence of association might be considered a “win” in court, but for an academic, results that undermine the original hypothesis can be equally compelling. Scientific knowledge increases even when “proof” recedes. By embracing the differences between evidence that supports a scientific hypothesis and evidence that holds up in a court, practitioners can create shared vocabularies about technology’s impact, accountability, and governance.

“It’s a clash of cultures. It comes down to the rigor that each discipline assigns to the terms, and what methodologies they use to make determinations.” — Meetali Jain, Tech Justice Law Project

For academics, the role of any research finding is rarely to provide definitive proof, especially in the non-mathematical sciences. Researchers hypothesize about associations (for instance, between photo-sharing apps and body dysmorphia) and then use the tools of their disciplines—interviews, surveys, prior studies, and data analysis—to demonstrate to the best of their abilities whether a hypothesis is affirmed or undermined by their findings.

Workshop contributor Professor Amy Orben says she struggles with how her research findings should best be used as evidence when “my opinion might still move” after further research. Lawyer and workshop co-designer Meetali Jain acknowledges that sometimes litigators “don’t want nuance, we want the outcome.” Jain’s organization, the Tech Justice Law Project, was founded to help bridge gaps between legal experts, advocates, and technologists.

“You don’t want to base your case theory on something that isn’t sound,” says participant Dean Kawamoto, who represented Seattle schools in a pioneering mental health lawsuit against social media companies. Kawamoto says that while attorneys and academics have “different standards and ways of analysis” for determining causation, “it’s a challenge but not necessarily a problem. Academics are critical to these cases.”

Olivier Sylvain, a senior fellow at the Knight First Amendment Institute and former senior advisor to the FTC, says that legal terms such as “disparate impact” are so specific that experts in research and policy will benefit when you “spell things out, without legal mumbo jumbo.” Experts need “interstices where there is an opportunity for creating vocabularies” across disciplines, he says.

For workshop co-host the Knight First Amendment Institute (KFAI), the event was part of a larger mission to connect research and tech policy. KFAI research director Katy Glenn Bass says bridging opportunities for researchers remain too rare, and the institute works virtually and in person to provide “a space to get their work out beyond their ‘home discipline.’ Researchers working on a given topic want a better sense of which litigators are working on it,” she says, “so they know where to share their findings as they emerge.”

Meetali Jain says the workshop helped to connect “people who don’t necessarily talk, but who are so critical to supporting each other’s issues.” Jain sees the Tech Justice Law Project as “a glue” keeping tech expertise and legal expertise in closer contact as part of its ongoing legal advocacy. “There is too little opportunity to meaningfully come together” beyond the episodic needs for expertise in individual legal cases, she says.

Academics remain too “siloed” from the work of tech policy and tech accountability, says Brandi Geurkink, a partner in the workshop and the leader of the Coalition for Independent Technology Research. The coalition—founded in 2022 by researchers including CAT Lab’s Nathan Matias—acts as an advocate and de facto guild for participating technology researchers, as their work is increasingly strained by the secretive and unaccountable behavior of tech companies.

It’s also not enough to hold occasional meetups. In any one-time workshop, Katy Glenn Bass says, “I would not have time to think about what all the next steps are.” Workshop participants wanted more time “to figure out what counterparts need, what their struggles are, how to make litigation and research work better together,” she says.

Litigator Felicia Craick asks, “Could lawyers identify some of the studies that they would love to see done, not for current cases but for future cases?” Another advocate wished for more brainstorming time with researchers to think about technology harms “proactively” and build beyond the framework of seeking legal recourse for past harms.

Participants were unanimous that better grounding in science can lead to better legal outcomes. Whether or not a lawsuit results in a monetary award, science can also inform judges and juries about the best way to prevent future harms. “Our investment in continual collaboration matters,” Nathan Matias says, “because we can’t keep iterating our knowledge at the pace of losing cases. That’s a terrible and slow way to learn.”

Provisioning Networks of Practitioners

As groups like the Tech Justice Law Project and the Coalition for Independent Technology Research widen their advocacy, and the AI explosion spurs new investments in cross-disciplinary cooperation, CAT Lab and its partners envision an emerging network that is less ephemeral than a listserv or a calendar of convenings. What one might merely describe as an ecosystem can more fully become one, sustaining its own momentum. Along the way, that emerging network will amass a new pool of timely and mutually accessible resources.

Lesson Three: A space between established disciplines is hard to maintain, but everyone wants one.

The workshop’s leaders and participants say that widening the community of practice can strengthen their ability to enlist science in the development and governance of technology. They lamented the scarcity of such encounters between fields and the lack of “third places” where experts can look beyond their current projects.

“We’re working on the same issues, with a lot of the same analyses, targeting the same companies. People working on the same thing need to be in the same room and speaking the same language.” — Brandi Geurkink, Coalition for Independent Technology Research

Workshop participants say that comparing challenges gave them a clearer understanding of the shared resources that would help researchers and litigators be more effective—in collaboration and in their respective efforts.

Olivier Sylvain says his colleagues still need “exposure to research one didn’t know about.” Participants recognized that legal cases about technology do not always come to their attention, not only due to confidentiality but because they lack a standing resource for interdisciplinary knowledge sharing. And it would be impossible to maintain one centralized repository of all the new and forthcoming research on the impact of technology and the experiences of impacted communities.

By cultivating and sharing in the upkeep of a network, CAT Lab and its peer groups are building a community of practice and a deeper base of knowledge than any one node of the network could generate. It’s this comparison of resources across multiple disciplines that can reveal the gaps in knowledge and the best directions for new research, new litigation, and new policy advocacy.

“Without clarity on the kinds of analyses that could prevail with regulators and courts,” Matias says, “it's difficult for researchers to know what kind of evidence to collect or even what kinds of statistical advances are needed.”

In that sense, this ecosystem of practitioners can serve not just as a bazaar or a repository but as a distributed feedback loop, one that shapes an emergent agenda for stronger technology governance and more effective inquiry.

Safer Tech Decisions Require a Sustained Conversation

CAT Lab is developing methodologies and partnerships that seek to set a new norm for industry-independent research, cooperation between scientists and legal experts, and stronger linkages between technology decision-makers and the citizens, users, and online moderators whose experiences reflect the real impact of technology decisions.

By working in common cause, researchers and attorneys can advocate more forcefully for a more responsible and accountable technology sector, even as they continue to improve their shared knowledge and shared vocabularies.

In moments of technology acceleration and public risk, leaders and advocates will enlist science and policy—and sometimes communities—to help set new standards of safety and accountability. It’s almost as if we need to be startled into stronger cooperation and more conscientious action. But interdisciplinary bridges do not need to be tools reserved for crisis and opportunity. They should be the default.

The author would like to thank Elizabeth Eagen, Nathan Matias, and Maia Woluchem for their insights and support in the preparation of this post.

Authors

Jed Miller
Jed Miller is a writer and organizational strategist focused on the informed, inclusive governance of technology in the public interest. He is the founder of 3 Bridges, which seeks to build shared understanding of digital tools among public officials, local communities, technologists, and philanthropies.
