Derek Slater is a founding partner of Proteus Strategies, a boutique tech policy strategy and advocacy firm. He has worked actively in online media and copyright policy issues for over two decades.
Over the last year, generative artificial intelligence (AI) tools have become increasingly accessible, allowing people to create different types of media in new ways. This shift has also catalyzed new debates about the future of creativity and copyright. Some artists and content creators worry not only that these tools will generate works that threaten their livelihoods, but also that they were developed in ways that unfairly exploit their original works. The debate thus tends to take on a frame familiar from discussions of copyright and digital technology over the last several decades – it pits artists and content creators against technology developers.
That frame is far too limited. After all, copyright’s purpose is to promote creativity and knowledge sharing for the public’s benefit, which demands a consideration of technology’s users as well. More specifically, many artists and content creators are users and beneficiaries of these tools, and the way these tools are regulated will impact them, too.
The fact that these creator-users have interests at stake too doesn’t dictate a specific policy outcome in all cases, and, indeed, the umbrella term “generative AI” hides that there are a wide variety of tools and uses. But their interests can and should inform how we think about copyright policy in this context, and that’s what I elaborate on below.
Creator-Users of New Technology in Historical Perspective
No one knows exactly what the utility and impact of generative AI will be. Some project an incredible transformation of how all creativity occurs, for better or for ill. Some critics of the technology suggest in one breath that it will “replace human writers,” while noting in the next that “the threat of human replacement is perhaps overblown.”
This is far from the first time that a new technology has generated both opportunities and controversy among creators. Consider the following examples:
- Some technologies are initially seen as irrelevant to creative expression, and their benefits to creators are only later revealed. The portable video camera emerged in the late 1960s and was initially marketed simply as a way to film family events for posterity. But camcorders were eventually used by people to create new types of video programs, outside the system of the existing broadcast television networks and film industry. For instance, in the underground comedy community of New York City, this tool empowered John Belushi, Bill Murray, and other comedy greats to create their first short films.
- Some technologies unleash creativity that is initially seen as ‘low art.’ Hip-hop and rap, driven by tools to sample from existing recorded music, were derided as low quality, and some claimed the practice would lead to the decline of music. Of course, these genres only grew in popularity, revenue, and artistic esteem.
- More recently, consider the rise of “user generated content” (UGC), which, like the above examples, was initially dismissed as just “dogs on skateboards” and irrelevant ephemera – a “sideshow” to its use for piracy of existing copyrighted works. But over time new artists and content creators were discovered and thrived thanks to online platforms, and existing creators and industries have invested in and benefited from these tools as well.
In these and other examples, we see a familiar cycle – new technology democratizes creativity and enables a variety of new types of uses; initially, it’s seen at worst as a threat to art and artists, and, at best, marginal; and over time, it helps foster new forms of creativity and opportunities for creators to find audiences and make money.
Examples of Creator-Users Today – A Shallow-Dive Into Nascent Waters
To some, most content generated through AI today could seem to be just, to put it bluntly, crap – or, at least, not indicative of craft, artistic or otherwise: a flood of images, text, and other media generated by people typing brief inputs into a machine and then publishing the results to the world without much thought. But if you take even a shallow dive into the nascent waters of generative AI, the picture gets more complex, with generative AI acting as an assistant in a wide array of processes to create and share knowledge with more significant human effort.
- Professional art: If you look at artist Kris Kashtanova’s tutorials, it’s apparent that generative AI can involve far more craft than simply clicking a button. They are one of many artists incorporating generative AI into commercial works. While image generators are among the most widely used tools, writers have used tools like Jasper.ai in published books, helping with character development, providing useful summaries, or simply helping overcome writer’s block. Storytime is aiming to make it easier for any fiction writer to create fully animated visual novels based on their words. (Kris’ experience also echoes the historical examples above in that they have received consistent harassment simply for exploring this medium.)
- Education: Kinnu.xyz is building a platform of learning pathways, combining AI-generated content with human review and editing.
- Business productivity: Companies and organizations are using AI as a productivity tool, summarizing meetings and reports.
- Amateurs and communities: As with UGC, most of the content generated may be amateur works. To some, this may make it seem like ‘low art,’ but there’s no accounting for taste, and empowering people with new ways to express themselves can be a benefit in its own right. In particular, it’s worth looking at how communities are forming and bonding around use of these tools. Spend time in the Discord chat server for the image generator tool Midjourney, for instance, and you’ll quickly see people cheering each other on and supporting one another in honing their expression.
Copyright Policy and Generative AI
The core copyright concern with generative AI is that many tools are trained on massive datasets that contain copyrighted works, where this training has not been specifically licensed. By keeping the interests of creator-users in mind, we can better adjudicate what copyright should allow and what it should prohibit.
No creator develops their craft in a vacuum. Everyone learns by engaging with past works. You might walk around a museum and read painting manuals to learn how to create your own Surrealist art. Or you might watch classic horror movies in order to create your own take on the genre. Copyright has always permitted this sort of behavior, so long as the resulting creative output doesn’t copy directly from past expression or create something substantially similar to preexisting expressions.
As scholars Mark Lemley and Bryan Casey persuasively argue in their paper Fair Learning, we should generally permit generative AI tools that in effect learn from past works in ways that facilitate creation of new, distinct ones. While some claim that generative AI systems are simply engines for ‘collage’ or ‘plagiarism,’ copying previous expressions into new works, this isn’t an accurate description of how most tools work. Instead, generative AI extracts information that then is used to inform generation of new material; for instance, by looking at many pictures of dogs, it can extract information about what dogs look like, and can then help a user draw dogs, or by looking at many pieces of art labeled as Surrealist, it can help a user create new works in the style of Surrealism. In effect, these are tools that aid new creators in their learning and building on past works.
That doesn’t mean all generative AI tools should necessarily be permissible in every circumstance. Lemley and Casey, as well as legal scholar Mehtab Khan and AI researcher Alex Hanna in their more critical take on these tools, note that a tougher call would be a system trained on a particular singer’s work in order to specifically generate songs like hers. While style is not generally protected by copyright, the facts of each case will matter. The key question here is whether the tools are designed to substitute for particular creative expressions, rather than to enable new expressions that build on pre-existing ideas, genres, and concepts.
Regardless of how generative AI is trained, not all outputs of AI tools should be permissible under copyright either. One can use a general purpose tool like Midjourney to create a work that is substantially similar to an existing copyrighted work, for instance. However, that shouldn’t per se mean the tool itself is infringing, as opposed to the user of the tool; building on existing legal approaches, liability for the tool will depend on whether and how the tool developer or service provider knows about, contributes to, controls, and directly financially benefits from infringement.
Some distinguish generative AI’s training from the way people learn by pointing out that generative AI must make a copy of existing works, and thus is per se “theft” or “stealing,” even where the training data itself was lawfully accessed and publicly available. Such a rule would be far too sweeping. Consider, for instance, how the coronavirus was detected in its early stages via large-scale machine analysis of news articles, and how similar sorts of analysis are key to vaccine research. In fact, in a digital world, engaging with any work – including simply reading a webpage or watching a video – means making a copy, so even an individual’s process of learning will always involve some intermediate copying. Simply equating copying with “theft” is not a useful shortcut, and copyright law has never prohibited all copying.
Still others claim that a broader application of copyright law is necessary because, even if the outputs of generative AI are not substantially similar to existing works, they still will threaten the livelihoods of existing workers. For instance, if everyone can generate their own graphic designs, then graphic designers may no longer be able to charge the same rates.
The impact on labor markets is a real concern, but it’s also important to recognize that foreclosing generative AI also has an impact on creator-users of those tools. Limiting these tools on that basis is essentially favoring one existing form of creativity and creators over others, and we might instead look for policy tools that help support both.
Similarly, one of the key concerns around AI is that it will reinforce market concentration, supporting the power of “Big Tech.” For instance, researcher Jenna Burrell calls ChatGPT “the work of millions accruing to a few capitalist owners who pay nothing at all for that labor.”
Here, too, the worry about AI reinforcing existing tech market structure is legitimate; however, extending copyright to further limit training on copyrighted works is unlikely to help and may even hurt creators of all stripes. In a post examining AI art generation and its impact on markets, author Cory Doctorow and policy advocate Katherine Trendacosta imagine a world in which all AI training on copyrighted works must be licensed, and explain how this would be a “pyrrhic victory” for artists. Media markets are also highly concentrated (in part due to copyright itself), and the licensing fees would accrue to those corporations, not to artists. Moreover, only those tech companies with substantial resources would be able to afford such licenses, reinforcing concentration in that sector. The solution to monopoly concerns in tech is not, then, to beef up the government-granted monopoly of copyright, but rather to apply other policy solutions, such as competition and privacy laws.
More generally, people are right to call out the need to think about the impact of these tools on existing artists and content creators, and the political economy of the current tech sector. But a full accounting can and should factor in the creator-users of these tools as well, both the ones that are emerging today and those that may come in the future.
Derek Slater is a tech policy strategist focused on media, communications, and information policy, and is the co-founder of consulting firm Proteus Strategies. Previously, he helped build Google’s public policy team from 2007-2022, serving as the Global Director of Information Policy during the last three years. He led a global team of subject matter experts on access to information, content regulation, and online safety, and testified before legislators in the US, UK, and elsewhere around the globe. Before his time at Google, Derek was the Activism Coordinator for the Electronic Frontier Foundation and the first student fellow at Harvard’s Berkman Center for Internet and Society.