Copyright and AI Policy Needs Precision, Not Panic
Luke Hogg, Nick Garcia / Feb 20, 2026

Luke Hogg is a senior fellow at the Foundation for American Innovation. Nicholas P. Garcia is a senior policy counsel at Public Knowledge.

“New and unprecedented, is it?” by Ada Jušić & Eleonora Lima (KCL) / Better Images of AI / CC BY 4.0
Last month, nearly 1,000 artists affixed their names to an ad that ran in the New York Times, arguing that generative AI is built on “theft — plain and simple.” The campaign, which the Times itself explicitly supports, argues that tech companies are using creators’ work “without authorization” and encourages companies to negotiate licensing deals with rightsholders instead. Artists, creators, and others have legitimate concerns about AI technology and its potential to disrupt livelihoods. However, the campaign behind this ad is trying to win an urgent policy fight through sleight of hand and reductionist rhetoric.
The “Stealing Isn’t Innovation” ad was organized by the Human Artistry Campaign, a coalition that includes artist and creator groups, as well as copyright maximalists like the Recording Industry Association of America (RIAA) and the Authors Guild. The history of these industry groups demonstrates that their primary goal is not “defending artists.”
These groups have long sought to advance regulatory and enforcement frameworks that first and foremost benefit record labels, publishing companies, and other media giants. History has shown that the enforcement models they favor expand quickly and hit the least powerful first, forcing artists and creators into the fold of exploitative media and creative industries in order to survive. Now, many of the same industry voices are pursuing a similar approach as AI becomes more widely used.
We’ve already seen how this story goes.
The internet was a similar technological tipping point: it promised to radically democratize how people create and share art and information, upending entertainment and media business models. But the same industries behind the Human Artistry Campaign lobbied for new copyright laws that entrenched the role of intermediaries, and those laws still cause problems today.
Consider the RIAA’s staunch defense of the Digital Millennium Copyright Act (DMCA) despite its use to squash creativity, free expression, and independent creators. In 2007, RIAA member Universal Music sent YouTube a DMCA takedown notice over a 29-second home video of a toddler dancing while a Prince song played faintly in the background. Universal Music fought so fiercely to get this clip taken down that a federal appeals court ruled that rightsholders must consider fair use before firing off takedown notices.
Or take the Lawrence Lessig episode. A record label used the DMCA to remove a Harvard law professor’s lecture on remix culture that used brief clips of songs by the band Phoenix for educational purposes. Lessig, represented by the Electronic Frontier Foundation, sued, and the case ended in a settlement that required changes to the label’s takedown procedures. These examples should make clear that aggressive copyright enforcement is not synonymous with “protecting creators.”
The "Stealing is Innovation" campaign represents the latest iteration of this pattern. While the ad itself doesn't specify a policy or legislative agenda, its organizers have one.
The Human Artistry Campaign has mostly concerned itself with lobbying for the NO FAKES Act, ostensibly aimed at unauthorized “digital replicas” of a person’s voice and likeness. This is a real and immediate challenge, especially as deepfakes become easier to generate and harder to debunk. But the NO FAKES Act would be the DMCA all over again, this time applied to everything. It would create a staggeringly broad takedown system unconstrained even by copyright ownership. Such a system would undoubtedly be abused to attack authentic content, deepening the very epistemic crisis that deepfake regulation ought to mitigate.
And, counterintuitively, the NO FAKES Act is primarily a legal regime that streamlines the entertainment industry’s ability to acquire and exploit rights to a person’s voice and likeness. The bulk of the bill is dedicated to creating a new “digital replication right” and, most importantly for the content industries, to establishing a simple, nationwide set of rules for how they can acquire licenses to make digital replicas. This is not like a right to privacy or a right against impersonation; those are already covered by longstanding common law principles.
This is a new property right, and companies are eager to buy, sell, and trade in this new currency: our faces, voices, and identities. For some people, like actors or professional musicians, licensing their likeness is already part of their world. But why create a mechanism for 10-year licenses to each and every person’s identity? This bill would build legal machinery that funnels artists, creators, celebrities, ordinary people, and even kids into one pipeline designed by and for the incumbent industry players.
Copyright and AI are not a single issue, even when the debate is framed that way. It is a stack of technical, economic, and legal issues that courts are actively sorting through right now. And when we turn complex policy questions into simple slogans, we risk building a new enforcement machine that locks in power for the biggest players and punishes the very creators this campaign claims to defend. Congress is already working to write new rules of the road, but it must do so with clear definitions and the right policy tools, in a way that protects everyone, not just powerful corporations.
Consider what the ad refuses to specify: what, exactly, is the theft?
If the alleged wrong is scraping public data, then the relevant questions include consent, access controls, and data security norms. If the alleged wrong is training, the central issues are whether the use is “transformative” under fair use and whether it substitutes for the market for the original work. If the alleged wrong lies in outputs, then the debate shifts again: How often do models reproduce content in a way that infringes copyright? What remedies actually stop that without banning legitimate tools? And how do we preserve longstanding rules about creative influences, genre, style, summarization, and other building blocks of creativity and expression?
Courts are already issuing substantive rulings about AI and copyright, and those decisions are complex. For example, in litigation involving Anthropic, a federal judge found that training a model on copyrighted books qualified as fair use while still holding the company to account for obtaining millions of books from pirated sources. The law, in other words, is separating how the data was obtained from how it was used, much as “how you got a book” and “what you learned from it” are two separate questions. That is precisely the nuance missing from the Human Artistry Campaign’s accusations of “theft — plain and simple.”
Those distinctions matter because lawmakers are making rules for an entire ecosystem. If Congress writes legislation based on the idea that training AI systems is akin to theft, independent artists are unlikely to benefit. The likely winners will be the largest companies on both sides: major rightsholder conglomerates with extensive catalogs of content to license, and major tech firms with the resources to negotiate deals. Everyone else, including small publishers, indie record labels, tech startups, researchers, and working creators, will be pushed to the margins. If you are worried about AI strangling creativity, or about the technology being dominated by powerful interests, this is the future such legislation could bring about.
So, what would a better, and equally urgent, approach look like?
It would start by dropping the oversimplification. If the concern is output infringement, enforce against outputs that actually infringe and build fast, fair dispute processes that can’t be gamed like the DMCA often is. If the concern is consent and compensation, pursue scalable transparency and leave the door open to all forms of compensation, but avoid mandates that turn the entire internet into a toll road. And if the concern is deepfake identity abuse, write targeted solutions that protect everyone instead of building the next legal machine for monetizing human identity.
America can protect artists and still make room for new technology. But we cannot do it with simple slogans posing as policy, particularly when they risk recreating the same flawed systems we have now.