Perspective

Generative AI is Neither Too Unprecedented Nor Too New to Regulate

Sarah Barrington / Nov 14, 2025

This perspective is part of a series of provocations published on Tech Policy Press in advance of a symposium at the University of Pittsburgh's Communication Technology Research Lab (CTRL) on threats to knowledge and US democracy.

OpenAI CEO Sam Altman speaks during the US Federal Reserve Board of Governors' "Integrated Review of the Capital Framework for Large Banks Conference" at the Federal Reserve in Washington, DC, on July 22, 2025. (Photo by MANDEL NGAN/AFP via Getty Images)

Last month, OpenAI unleashed Sora 2, the latest in a surge of generative AI technologies that let users create perceptually realistic media from a single image or prompt. Sora 2, in particular, enables anyone with the app to hijack another person’s digital identity at the click of a button through its “cameo” feature, generating videos that falsify actions, speech, and entire events: so-called deepfakes. With only weak anti-impersonation measures in place, thousands of deepfakes of real people have been created, both consensually and non-consensually, and the accompanying distribution safeguards have already fallen short against the breadth of harmful content in circulation.

The current moment

While the technology itself is undeniably advancing at a remarkable pace, the harms associated with generative AI and deepfakes (AI-generated content) are not new. In just the five years since the first open-source tools began to emerge, these systems have been used to supercharge disinformation, fraud, and a range of societal harms, from the ballot box to the boardroom.

While the realism and accessibility of AI-generated content are undeniably unprecedented, the underlying harms are not. The rapid dissemination of false information to millions of people, the polarization and radicalization of communities, and the entrenched patterns of online harassment, data extraction, and labor exploitation have existed since the dawn of the technology industry, if not before. Indeed, they are packaged and deployed through the same software products and systems that have defined the industry since its inception: social media platforms, algorithmic recommendation systems, and the web and mobile applications that mediate nearly all digital interaction. What is new is not the nature of these harms, but their scale and speed.

How unprecedented is ‘AI’?

Yet policymakers have so far met this moment with confusion, grappling with the technology’s complexity, uncertainty about its capabilities, and concerns over free speech. The result is a patchwork of state-level bills and reliance on non-binding voluntary commitments. Because generative AI greatly exceeds prior expectations of how human-appearing machines can behave, and because its capabilities are opaque even to many of its creators, it is often lumped together with a broader range of systems, from basic automation to advanced image- and video-generation models.

In particular, the term “AI” is often conflated with “Artificial General Intelligence” (AGI), a futuristic vision with inconsistent definitions but largely centering on a single, truly intelligent, and autonomous system. While several technology companies are working to achieve this goal, it is not a reality at present, and waiting until it materializes is an excuse for regulatory inaction. The reality is that AI in its present form largely augments human capabilities across many already-regulated industries, and should therefore be governed by the same product-liability and safety frameworks that apply to other high-risk technologies. While scholars have begun calling for such action, industry and policy have yet to follow suit.

The playbook: three tenets of policy paralysis

What is holding policymakers back? In short, the same playbook the technology industry has used for the last twenty years. With eroding institutions and a reactive approach to policymaking, we risk paralysis by the myth of the “unprecedented”: a narrative strategically cultivated by technology monopolies to evade accountability. Policymakers are manipulated toward inaction in three ways:

Firstly, by evading product liability through claims of complexity and breadth.

We continually observe efforts by technology and infrastructure giants to lobby at an exceptional scale and dilute proposed AI policy on the basis that AI is too broad to regulate uniformly. Yet the present reality is that we are confronting specific AI-based technologies: primarily large language models (LLMs), vision-language models (VLMs), and text- and image-to-video models, each of which may be broad but is applied to distinct industrial and public use cases. It is crucial that we break AI down into its constituent technical parts and address each through an industry-based approach: AI tools used in the nuclear industry, for example, would be governed by the same standards dictated by nuclear policy, and drug-discovery AI by pharmaceutical regulation. Instead of attempting the Herculean feat of governing such diverse domains under the same regulations, or giving up on regulation altogether, we should pursue a more bespoke approach.

Secondly, claiming that regulation beyond voluntary commitments will stifle innovation.

Large technology companies developing various types of AI models demonstrated this during the rollout of the Biden-Harris Executive Order on AI, which relied on voluntary commitments from industry for measures such as watermarking AI-generated content. The “stifling innovation” argument is often paired with appeals to national security and calls for “staying ahead” of geopolitical adversaries, often without recognizing that much national security technology has historically been developed within government rather than the private sector. Policymakers have already proved how effective targeted, legally binding regulation can be; the FCC’s 2024 ban on AI-powered robocalls is one example. Similarly, the Take It Down Act, while troubled by overbreadth and misuse risks, recognizes both authentic and AI-generated non-consensual intimate imagery (NCII) as harmful. The act offers another example of how existing regulatory principles in consumer- and civilian-protection laws can expand definitions to encompass AI content and tools.

Finally, by steering the discussion around content provenance and authenticity toward supposed concerns about free expression.

Making this association helps technology companies and social media platforms justify obvious steps backward, such as Meta’s removal of third-party fact-checking in January. Confusion around this topic further enables tech giants to let AI-generated content proliferate while obviously harmful media goes unchecked and circulates to millions online.

Policymakers could consider centering the discussion on the consent and intent behind content, topics that are often neglected in moderation debates that center on corporate free-speech defenses rather than true harms. These ideas are not as abstract as they may seem. Consent can be straightforward: if someone claims to be the person depicted in a synthetic image or video and requests its removal, that right should be enforceable (as is reflected in the Take It Down Act). Intent concerns the purpose behind a piece of content: a company using AI imagery in advertising should be treated differently from a political deepfake purporting to show a candidate making false statements. In both cases, transparency is key: users should know whether content is synthetic and why it was created. Yet recent investigations have found that major platforms are still failing to meet even this basic threshold.

New technology, same old problems

While some elements of AI are genuinely new, the harms it enables are evolutions of those we have seen before. When we get specific about AI, it becomes clear that the real danger is not a monolithic “AI” itself, but our unwillingness to deconstruct the technology into its constituent parts and recognize these systems as software products that require standards, testing, and liability, just like products in every other high-impact industry. Rather than holding the companies and their leadership responsible, we blame the technology itself. It is time to recognize the myth of the “unprecedented” and its playbook, and hold tech accountable.

Authors

Sarah Barrington
Sarah Barrington is a Ph.D. student in Information & Computer Science at the University of California, Berkeley, where she specializes in generative AI and deepfake analysis. She is part of Professor Hany Farid’s digital forensics lab and has engaged globally with policymakers and industry leaders.
