To Have Democracy, We Must Contest Data
Emily Tucker / Oct 14, 2025

In her opening remarks at the UN General Assembly last month, Nobel Prize laureate Maria Ressa spoke about the takeover of information ecosystems by global technology companies and the disastrous impact on democracy worldwide. She mentioned the “Global Call for AI Redlines,” a statement signed by over 200 prominent individuals, including Ressa herself, urging governments that an “international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks… ensuring that all advanced AI providers are accountable to shared thresholds.”
It was heartening to hear a person with Ressa’s moral authority issue a call, on such a big stage, for a coordinated effort to prevent technology companies from undermining political freedom and public safety. But the approach being advanced by the signatories of the “AI Redlines” letter she referenced is the wrong one. We don’t need redlines for “AI,” we need redlines for data. We need redlines that would put an end to many of the data practices of tech companies that sell “AI,” and we need those redlines to apply regardless of the types of products they are using data to create.
I am not talking about a data governance regime focused on protecting individuals from privacy violations and other infringements of individual rights, which is at the heart of the GDPR in Europe and of most data governance proposals made in democratic contexts. I am referring to a data governance regime to protect political communities from the corporate power grab currently underway. Redlines that advance that goal would include restrictions on particular practices (like collecting certain types of data, stealing data, and selling data) that a technology company of any size might engage in. But what we need most urgently are redlines that set hard limits on the scale of corporate data collection and on the consumption of natural resources for data processing. If governments do not set and enforce such limits soon, they will quickly lose the power to set or enforce any meaningful limits on tech companies at all.
Avoiding the problem of data
Even though data is the most powerful lever governments have to shape technology in the public interest, the possibility of pulling that lever rarely arises in policy discussions. The primary reason is that telling the executives who run the world’s largest technology companies that they can no longer amass unlimited amounts of data would be extremely politically difficult. At the first sign of legislation that might impose real limits on data practices, the tech industry lobby would descend with full force to stop it (as we have seen them do over and over again). And any bill that did pass would trigger an immediate and crushing wave of costly litigation, likely concluding in an appeal to the Supreme Court to finally explicitly sanctify limitless profit maximization as a constitutionally protected corporate speech right.
Compounding the difficulty, many businesses (and, increasingly, entire industries) are dependent on products owned by corporations that engage in data collection on the largest possible scale. Strong regulation of data practices might mean some of those products no longer exist, and adapting to that new reality could be expensive and time-consuming for everyone. The enforcement of data redlines would also likely impact the availability or functioning of various apps and platforms for communication, commerce, and entertainment that so many of us have integrated into our daily lives—almost certainly resulting in widespread backlash. These are just a few of the foreseeable consequences that would flow from establishing data redlines.
It is not surprising, then, that policymakers would prefer a strategy for regulating the digital world that avoids the problem of data. Yet, recognizing the consequences we would face for placing limits on corporate data accumulation should make it clear how urgently such limits are needed. By capturing so much of our social and economic infrastructure, tech companies have made it difficult for us to take up the question, “Is this what’s best for us?” They have made it hard to believe that asking such a question is even within our power. The time has come to insist that it is.
The “AI governance” trap
The desire to avoid addressing the fundamental problem of data is helping to drive the proliferation of “AI governance” bills at every level of government around the world. Unfortunately, without redlines on the data practices of megacorporations, “AI governance” is a project that is destined to fail. The most basic reason that “AI redlines” and “AI governance” won’t work to prevent “AI” harms is that the tech executives who decide what gets called “AI” are the very people whose activities need to be regulated. If you build law or policy around a term that is controlled by a certain group of people, the one thing you can be sure the law won’t do is meaningfully constrain the things those people do.
This is why the “AI ethics” fad, which has consumed so much of policymaking, civil society advocacy, and philanthropy over the last several years, is grasping at the wrong end of the stick. It’s true that the same data training methods, marketed as AI, can be used to model the evolution of cancer cells, the behavior of online shoppers, and the characteristics of the physical environment in a conflict zone. But it makes no more sense to treat cancer screening, targeted advertising, and autonomous weapons as the same kind of thing simply because they all use similar computational techniques than it would to treat MRI machines, vacuum cleaners, and credit cards as the same kind of thing because they all use magnets.
The right way to address the emptiness and indeterminacy of “AI” is not for governments to involve themselves in a tortured and resource-intensive definition-making project to try to give real substance to a marketing term. If we discovered that companies selling “athleisure wear” were lacing their clothing with flesh-eating bacteria and enslaving factory workers, we would not respond with a debate about “athleisure pros and cons” and then attempt to define and regulate the uses of “athleisure.” If we did, all the implicated companies would simply scrub “athleisure” from their websites, rebrand their products as “loungewear” or “day jammies,” and compel the expenditure of millions of dollars to litigate the nuances of different types of sweatpants.
Calling for “AI redlines” preemptively concedes the ground of legal and political contestation by allowing the tech companies to set the terms of the argument. It is not surprising, then, that so many of the signatories of the “AI Redlines” letter are high-level employees at AI companies. From the perspective of a corporation like OpenAI, governance mechanisms that center on the technology are useful because they help to preempt governance mechanisms that would target the corporate practices on which the technology depends. Bills like SB53, which just passed in California, may ultimately do more harm than good by building a massive bureaucracy to entrench and legitimize social and economic dependence on corporate data products.
That’s not to say that some of the dangers worrying the authors of the “AI Redlines” statement aren’t real. It’s just that they aren’t dangers that arise from “AI,” they are dangers that arise from the things that technology companies are doing in the name of “AI.” Take, for example, this proposed redline from an explainer published by The Future Society, one of the three organizations that coordinated the global redlines statement: “Ban AI systems that are capable of taking actions aimed at unduly increasing their own influence, access to resources, or control over people or systems.”
This sounds entirely reasonable. Such systems are incompatible with democracy and should be banned. But shouldn’t they be banned regardless of whether or not they are categorized as “AI”? After all, “Taking actions aimed at unduly increasing their own influence, access to resources, or control over people or systems” is a fair way to describe archetypal Silicon Valley business practices.
Data redlines for democracy
The way to prevent the terrifying outcomes that Ressa and other signatories are warning of is to regard “AI” not as a problem of technology but as a problem of political economy. To do that, we need to stop talking about “AI” and start talking about data. Not data as “personal information,” but data as a vehicle for aggregation of wealth and power. This requires facing the unfortunate truth that the current reach of datafication and digital infrastructure is already incompatible with the democratic self-governance of political communities. Economic liberty is not possible when a handful of tech executives control the organization and allocation of capital; freedom of speech and of conscience are not possible in a comprehensively surveilled public square; and justice is not possible when we are all data subjects.
We have been unwilling to admit this because the work of dismantling data dependencies is going to be tedious, unglamorous, and will involve sacrificing various privileges and conveniences to which we are all currently in thrall. In the current US political climate, it is also work that is likely to be personally risky for those who undertake it. But if we don’t start resisting the corporate power grab at the level of data, governments (and the political communities they are supposed to serve) will have no power to enforce whatever “AI redlines” they manage to enact.
Why are so many AI companies pushing for investment in “digital public infrastructure”? Because billionaires understand that if they own and control the switches that keep hospitals and schools and markets open, transportation systems running, phones and computers connected, the notion of legal compliance will become nothing more than a useful fiction. When governments can no longer provide the public with essential services without ongoing access to digital products that are the intellectual property of megacorporations, that’s a degree of leverage sufficient to turn any redline into a punchline.
If the bad news is that nobody wants to do what needs to be done to prevent tech oligarchy, the good news is that it isn’t that hard to see what that is. We need to establish local, national, and regional governance structures that make it extremely difficult to use data for political and economic capture.
Data is not a natural resource like oil or rare earth minerals. Data is a human artifact that can take many forms and be used in a wide variety of ways for a variety of human purposes. The fact that we currently think of data as essentially a form of currency is due to characteristics of our legal and economic systems, not due to any characteristics of data itself. Even if we cannot (and should not) arrive at an international consensus on what kind of economic and legal systems are best, we can (and should) cooperate to prevent the use of data to capture and manipulate our various legal and economic systems.
In order to do that, we need to agree on data redlines in at least three categories:
(1) We need redlines that put limits on who can create and collect which kinds of data and for what purposes, with particular limitations on data production and aggregation by for-profit corporations. These redlines need to create presumptions against data extraction around certain kinds of sensitive individual and collective human activities.
(2) We need redlines limiting the amounts of specific types of data that can be amassed by any individual or entity, including both governments and corporations, and limiting the consumption of natural resources for the processing of data.
(3) We need redlines that prohibit fundamental dependencies on proprietary data products within sectors crucial to the public interest (such as healthcare, education and media).
Getting policymakers to take redlines like this seriously will require a sustained, well-resourced, internationally distributed campaign led by civil society actors and grassroots organizations demanding the right to contest data. It will be a long and difficult fight, but without it, we risk giving up our ability to fight for all the other things that are necessary for our individual and common good. We can have mass surveillance, or we can have political self-determination. We can’t have both.