Beyond Fair Use: Better Paths Forward for Artists in the AI Era
Tristan Williams / Dec 19, 2024

If you ask many artist advocates what we should do to protect musicians from the potential harms of AI music generation, you're likely to run into a common theme in the responses: artists must consent to and be compensated for the use of their works in training AI models.
Suno and Udio, leaders in the generative AI music space, developed models that were allegedly trained on tens of millions of songs, many of which are copyrighted. Major record labels claim these firms are "trampling the rights of copyright owners" and violating copyright law. Suno and Udio respond that their use of such material is fair use. Who will prevail in the courtroom is still to be determined, but rightsholders appear to face headwinds. Understanding why requires a deeper exploration of copyright and fair use, the purpose of each, and how the courts have adjudicated past cases.
The Fair Use Debate
US copyright is designed to ensure artists are compensated fairly for their work when it's used elsewhere, but there are exceptions. Making copyright too strict limits people's ability to innovate, to take the existing corpus of works and create something genuinely new from it. When AI firms claim fair use, they're saying that what they've done goes far beyond rote reproduction of copyrighted works, transforming those works into something fundamentally new.
To start, the technical details of model training, as understood under the law, point toward AI music generation qualifying as fair use. Training models appears to be what is called a "non-expressive" use, where the initial copying is an intermediate step toward producing something that doesn't contain the original expression. Matthew Sag, an expert in AI and copyright, has noted that courts have consistently found such training to be fair use.
We also have some limited early indication of courts' views on generative AI models, and it portends an uphill battle for rightsholders. For example, in Raw Story Media, Inc. v. OpenAI, Inc., where rightsholders argued that the use of their copyrighted material constituted injury, the judge found their allegations of injury insufficient to establish standing and dismissed the case.
But the courts have further reason to be wary of finding against fair use. Such a decision could be harmful, stifling American innovation and exacerbating the trend toward monopolization in the AI music industry. Other generative AI models require massive amounts of training data to stay at the cutting edge, and AI music will likely be similar. Having to license works would raise the bar further, locking out all but the select few who could raise the necessary capital. The decision could also backfire by pushing innovation overseas rather than protecting artists, as a number of countries, including Japan and the UK, broadly allow the use of copyrighted works in training AI.
It's often hard to see the harm done in rejecting fair use, because those it would hurt are the future recipients of the amorphous benefits of an open and innovative market. But examples do exist. At one point, Amazon's Kindle introduced a text-to-speech feature for its books, giving blind people access to works they'd never had before. Rightsholders immediately sued, noting that Amazon hadn't licensed the works for such purposes, and the feature was shut down. In the case of AI, a finding against fair use could likewise jeopardize other potentially beneficial models, such as models trained on scientific works for drug discovery or other medical advancements.
Finally, when society benefits from innovative technologies, courts often find in their favor. For example, in Authors Guild, Inc. v. Google, Inc., the court ruled that copying millions of books to make them searchable was fair use, concluding that Google's service "augments public knowledge" without substituting for the original works in the market, the market-substitution question being the fourth factor of the fair use analysis. Other countries like South Korea and Singapore have taken note, recently incorporating fair use protections out of a recognition of the benefits that flow from such policies.
Some have responded that AI music might fail this fourth factor by substituting for the original work in the marketplace. Still, court decisions have indicated that what's meant by such language is something far more specific than merely creating a more competitive marketplace. Substitution here refers to an output that draws market share away from the original works by being expressively similar to them. Generating a song in Drake's style, with Drake's voice, which draws market share away from his work, might violate this principle. Generating a rap song from a model trained on Drake's work, among others, that is entirely different in style isn't likely to be considered a substitution.
All of this is not to say that artists and advocates should give up fighting to protect themselves from the risks of AI music generation. Instead, I wish to highlight that putting all their eggs in the basket of the fair use fight is incredibly risky and unlikely to work out, and that now is the time to begin pursuing other options. Labeling AI-generated music is one such solution.
Labeling AI Music
One concern with a fair use ruling is that platforms will be overrun with AI-generated music. If users cannot tell human-made music from AI-generated music, they won't be able to support musicians consistently. Whether people will prefer human-made music over AI remains to be seen, but giving those who do the opportunity to act on that preference seems like an important first step, which is where labeling AI-generated content comes in. Although labeling has the potential for widespread support, there are two logistical challenges to a successful implementation.
First, how can we ensure that AI-generated content gets labeled and stays labeled? Regulators must either rely on tools that detect AI-generated content or require developers to label their outputs themselves. French streaming platform Deezer intends to label AI-generated content via an internal detection tool it is developing. But given the unreliability of detectors in other domains, there's reason to be skeptical that AI-generated music detectors will fare much better. A better approach might be to take a page out of the EU AI Act and require developers to automatically label their content in its metadata (the underlying technical details of a piece of media) in a way that platforms can easily use to visibly label the content themselves. Such an approach is likely technically feasible: there are reports that companies may already have such technology for text generation but haven't released it due to profit motives.
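To make the mechanics concrete, here is a minimal sketch of what developer-side metadata labeling could look like for an MP3 file, using Python and the mutagen library. The tag names AI_GENERATED and AI_MODEL are hypothetical conventions chosen for illustration, not an existing industry standard:

```python
# A minimal sketch of developer-side labeling in file metadata.
# Assumes an MP3 file and the mutagen library (pip install mutagen).
# "AI_GENERATED" / "AI_MODEL" are hypothetical tag names, not a standard.
from mutagen.id3 import ID3, ID3NoHeaderError, TXXX

def label_as_ai_generated(path: str, model_name: str) -> None:
    """Embed an AI-generation flag in the file's ID3 metadata."""
    try:
        tags = ID3(path)
    except ID3NoHeaderError:
        tags = ID3()  # the file had no ID3 header yet; start fresh
    # TXXX is ID3's frame for user-defined text; a platform could read
    # these keys on upload and surface a visible "AI-generated" label.
    tags.add(TXXX(encoding=3, desc="AI_GENERATED", text="true"))
    tags.add(TXXX(encoding=3, desc="AI_MODEL", text=model_name))
    tags.save(path)

def is_labeled_ai_generated(path: str) -> bool:
    """Check whether a file carries the (hypothetical) AI-generation flag."""
    try:
        tags = ID3(path)
    except ID3NoHeaderError:
        return False
    frames = tags.getall("TXXX:AI_GENERATED")
    return bool(frames) and "true" in frames[0].text
```

A real regime would need more than this, of course: metadata can be stripped, so robust provenance schemes like C2PA pair embedded labels with cryptographic signing. But the basic write-and-read step is simple enough that requiring it of developers is not an unreasonable ask.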
Second, how should AI generation be disclosed to music listeners? Regulators could work with music streaming platforms to figure out what a realistic labeling regime looks like. Artists and advocates could help answer how the use of AI in generating a song might be captured when songs are submitted to a platform, and how such labels can be clear without impeding the listening experience.
The AI Labeling Act of 2023 does just that, requiring a piece of media's metadata to show that AI was used to create the content and requiring "clear and conspicuous" disclosure of AI-generated content to users. Those who wish to protect musicians should write to their representatives in support of this bill.
Beyond music labeling, there are larger structural reforms that may mitigate the harm from AI and resolve existing problems in the industry. The vast majority of new artists struggle to break out, stuck making less from music than they'd need to support a career in it, competing against algorithms that favor white noise over actual music and a royalty system that favors major artists. One solution is switching from an "artist-centric" payout system to a "user-centric" one, in which users are given more control over where their subscription fee goes. Such a system would empower users to ensure what they pay goes entirely to human-made music, and it would likely help more artists find a consistent revenue stream: a trial run on SoundCloud indicated disproportionate increases in revenue for smaller artists.
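The difference between the two models is easiest to see with numbers. Here is a toy sketch contrasting a pooled, pro-rata style payout (roughly how most platforms allocate royalties today) with a user-centric one; the subscribers, fees, and stream counts are invented purely for illustration, and real royalty accounting is far more complex:

```python
# Toy comparison of pooled (pro-rata) vs. user-centric payouts.
# All names and numbers are made up for illustration.
from collections import defaultdict

subscriptions = {"alice": 10.0, "bob": 10.0}  # monthly fee per user
streams = {
    "alice": {"indie_artist": 50},                      # alice plays one small artist
    "bob": {"major_artist": 900, "indie_artist": 100},  # bob streams heavily
}

def pro_rata(subscriptions, streams):
    """Pool all revenue, then split it by share of total platform streams."""
    pool = sum(subscriptions.values())
    totals = defaultdict(int)
    for plays in streams.values():
        for artist, n in plays.items():
            totals[artist] += n
    all_streams = sum(totals.values())
    return {artist: pool * n / all_streams for artist, n in totals.items()}

def user_centric(subscriptions, streams):
    """Split each user's own fee across only the artists they played."""
    payouts = defaultdict(float)
    for user, fee in subscriptions.items():
        user_total = sum(streams[user].values())
        for artist, n in streams[user].items():
            payouts[artist] += fee * n / user_total
    return dict(payouts)

print(pro_rata(subscriptions, streams))
# {'indie_artist': ~2.86, 'major_artist': ~17.14}
print(user_centric(subscriptions, streams))
# {'indie_artist': 11.0, 'major_artist': 9.0}
```

Under the pooled model, bob's heavy streaming pulls most of alice's fee toward the major artist she never listened to; under the user-centric model, her fee goes entirely to the artist she actually played. That dynamic is consistent with why SoundCloud's trial skewed revenue toward smaller artists, and why such a system would also keep subscribers' money from silently flowing to AI-generated catalogs they never stream.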
Neither of these is a complete solution, nor are they the only ones we might pursue. The point is that they seem both feasible and beneficial, and they could use more people helping to develop the ideas and advocating to make them a reality. A future where artists thrive alongside AI music will come from exploring a diversity of solutions, not from betting everything on the fair use fight.