June 2025 Tech Litigation Roundup
Melodi Dinçer / Jul 11, 2025
Melodi Dinçer is Policy Counsel for the Tech Justice Law Project.

Image: Ground Up and Spat Out / Janet Turra & Cambridge Diversity Fund / Better Images of AI
June’s Legal Landscape: Movement in Generative AI “Fair Use” Doctrine, ChatGPT Integration and the GDPR, and a Sea Change in Platform Liability from Brazil’s Highest Court
This roundup gathers and briefly analyzes tech-related cases across a variety of legal issues. The Tech Justice Law Project (TJLP) tracks these and other tech-related cases in US federal, state, and international courts in this regularly updated litigation tracker.
If you would like to learn more about new cases and developments directly from the people involved, TJLP hosts a regular tech litigation webinar series! Past sessions have explored the ongoing lawsuit against Clearview AI, a notorious facial recognition company, NetChoice’s constitutional challenges to California’s kids’ online safety laws, and various lawsuits and proposed legislation targeting surveillance pricing and wage systems. In July, we will host a webinar on reproductive data justice, focusing on Washington’s My Health My Data Act and potential cases that could be brought under it (and similar laws). If you are interested in learning more, please complete this RSVP form.
This month’s Roundup highlights the following cases:
Disney Enterprises, Inc. v. Midjourney, Inc. (C.D. Cal. Case No. 2:25-cv-05275) – On June 11, Disney and Universal Studios (including Marvel, Lucasfilm, Twentieth Century Fox, and DreamWorks) sued the creators of Midjourney, a text-to-image generative AI product, claiming the company infringed their copyrights by training its models on the companies’ protected content.
Meta v. Joy Timeline HK (filed in Hong Kong) – On June 12, Meta sued the parent company behind Crush AI, an “AI nudifier” app that allows users to upload photos of people and create nonconsensual intimate imagery of them, for repeatedly running thousands of ads for the app across Meta platforms despite Meta’s attempts to remove them.
Media Matters v. Federal Trade Commission (D.D.C. Case No. 1:25-cv-01959) – On June 23, nonprofit research center Media Matters sued the FTC, claiming the Agency was facilitating an “ongoing campaign of retribution” seeking to further punish Media Matters for research connecting ads on Elon Musk’s X to neo-Nazi and white supremacist content.
Bartz et al. v. Anthropic PBC (N.D. Cal. Case No. 3:24-cv-05417-WHA) – On June 23, a federal judge ruled that Anthropic’s buying and digitizing certain books to train its large language models (LLMs) constitutes fair use, an exception to copyright infringement liability, but that fair use does not protect the pirated books Anthropic used to train its models.
Kadrey et al. v. Meta Platforms, Inc. (N.D. Cal. Case No. 3:23-cv-03417) – On June 25, a federal judge ruled that Meta’s similar use of the plaintiffs’ creative works, including pirated copies of their books, to train its LLMs constitutes fair use, as plaintiff-artists failed to show that Meta’s conduct harmed the market for books and related content generally.
Getty Images et al. v. Stability AI (High Court of Justice Business and Property Courts of England and Wales Case No. IL-2023-000007) – On June 25, Getty Images—which maintains one of the largest and most comprehensive photo archives—dropped its primary copyright infringement claims against Stability for using Getty’s images to train Stable Diffusion and produce similar outputs, since Stability did the model training outside of the UK and Getty lacked strong proof of how Stability acquired this data.
NOYB – European Center for Digital Rights v. Bumble Holding Limited et al. (Austrian Data Protection Authority Case No. C-099) – On June 26, NOYB (“None of Your Business”) filed a complaint claiming that dating platform Bumble violated the GDPR, the EU’s data protection law, when it integrated OpenAI’s ChatGPT into its “Bumble for Friends” app to power the so-called AI Icebreakers feature without first receiving adequate consent from users and without having a legitimate interest in sharing their data with OpenAI.
Marco Civil da Internet – On June 26, Brazil’s Supreme Court issued a significant ruling (in Portuguese) declaring Article 19 of the Marco Civil da Internet (MCI) unconstitutional. The decision allows individuals to sue social media companies that continue hosting illegal content after a victim seeks its removal, effectively ordering companies like Google, Meta, and TikTok to actively monitor content involving hate speech, racism, and incitements to violence, among other categories of unlawful content.
Free Speech Coalition, Inc. v. Paxton – On June 27, the US Supreme Court held that a Texas law requiring commercial websites to verify visitors’ ages before showing them obscene content did not violate the First Amendment. The Court applied intermediate scrutiny, rejecting the more stringent strict scrutiny review because it would “call into question all age-verification requirements,” including in-person examples like checking a person’s ID before allowing them to purchase alcohol, firearms, or pornography.
Utah v. Snap, Inc. (Third Judicial District Court, Salt Lake County) – On June 30, the Utah Division of Consumer Protection and Attorney General sued Snap over Snapchat’s use of deceptive design features that allegedly addict children to the platform, including the rollout of Snap’s “My AI” chatbot product, which integrated with OpenAI’s ChatGPT, to users of all ages.
Federal judges apply fair use doctrine to copyright claims concerning generative AI model training data
It often takes several years for a lawsuit to work its way through the courts. This month, two cases filed back in 2023 concerning the legality of training generative AI products on copyright-protected artworks finally reached decisions at the summary judgment stage. In both cases, the respective federal judges ruled in favor of generative AI developers and against copyright holders, including writers whose works were used to train proprietary large language models without permission or compensation.
While the outcomes of these decisions may seem similar at first glance, the judges diverged dramatically in their legal analyses, as well as in their understanding of the harms involved. And while various news sources rushed to declare these two decisions resounding victories for AI companies, both judges left ample room for future challenges to the use of artistic works without artists’ consent.
Both decisions turned on the concept of fair use, which has its roots in common law (common law refers to laws that emerge from judges considering case after case on a particular topic, as opposed to laws passed by legislatures). Although fair use was also written into the Copyright Act of 1976, the judge-created fair use doctrine was a key aspect of the decisions. Briefly, copyright grants an artist or owner exclusive rights to use or reproduce an artistic work. If someone else wants to use the work, they must first seek permission or compensate the copyright holder. The purpose of copyright is thus to give artists an incentive to keep creating because they can control and profit from later uses of their works.
Fair use is an exception to these exclusive rights; it allows limited use of the work without the copyright holder’s permission or compensation. It typically applies to nonprofit educational uses, as well as transformative uses. What counts as sufficiently “transformative” differs from courthouse to courthouse, however, with federal courts throughout the country applying slightly different tests to gauge when a use that would otherwise violate the copyright can be excused as transformative enough to convey some distinct, separate meaning from the original work.
Wading through this murky legal landscape, two federal judges issued decisions on whether the use of large amounts of copyrighted materials to train generative AI models is a transformative use that qualifies as fair use. First, on June 23, Senior Judge William Alsup in San Francisco granted Anthropic partial summary judgment over a group of authors’ claims of copyright infringement in training the models behind Claude AI. The details of Anthropic’s alleged copying and training were crucial to the judge’s fair use analysis. The judge found that, even assuming the LLMs memorized the training copies of the copyright-protected books, Anthropic’s training on those copies and retaining them in a library was transformative enough to count as fair use.
Relying on a metaphor that compared LLM training to how human beings read and memorize books, Judge Alsup claimed that authors cannot use copyright to make people pay each time they read a book, recall it from memory, or draw upon it later when writing new things. He diminished any potential harms to artists and copyright holders resulting from AI-derived outputs, equating them with an “explosion of competing works” that would stem from “training schoolchildren to write well.” He went on to anthropomorphize the end product and ascribe it benign motivations, stating that “[l]ike any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them – but to turn a hard corner and create something different.” Because Anthropic had bought some of the books, cut them up, digitized them, and then trained on them, the judge found that use sufficiently transformative.
Although the media was quick to call this a win for AI companies, Judge Alsup’s decision also crucially rejected fair use for pirated works used to train such models. Anthropic allegedly trained its models on a massive, open-source dataset called “The Pile”—which includes Books3—as well as LibGen and Pirate Library Mirror, massive “shadow” libraries of pirated e-books; the lawsuit claims that Anthropic knowingly downloaded and used these piracy-riddled datasets throughout its model training. Judge Alsup held that Anthropic’s use of pirated books from these shadow libraries to create its own general, permanent library was copyright infringement. For Anthropic, this means the case goes forward on those claims, potentially resulting in a trial on damages and a ruling on whether its infringement was willful. That knowingly using pirated works does not constitute fair use is significant for the simple reason that many commercial LLMs were trained on similar datasets and sources, many of which include these same shadow libraries. That includes Meta’s LLM, Llama, which is at issue in the second case.
On June 25, San Francisco-based federal judge Vince Chhabria granted Meta partial summary judgment against a celebrity-laden group of 13 authors’ copyright infringement claims. More so than Judge Alsup, Judge Chhabria focused on the underlying principles of copyright protections and fair use. His decision explains that most of the time, companies that feed copyright-protected materials into their models without permission or compensation to rights holders do violate copyright law. This is because it strikes at the core point of copyright protections: “preserving the incentive for human beings to create artistic and scientific works.” For Judge Chhabria, fair use should not apply when copying works “will significantly diminish the ability of copyright holders to make money from their works,” and generative AI has the “potential to flood the market with endless amounts of images, songs, articles, books, and more,” which will ultimately undermine the incentive for people to make things “the old-fashioned way.”
While copying something to train a model may be literally transformative in that it transforms data into something else, the judge rejected such a clean analogy to the law of fair use, where “transformative” has a specific legal meaning and is fact-dependent. He thus rejected Judge Alsup’s “inapt analogy,” writing that “using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would take otherwise.” What mattered more were the impacts of companies’ conduct on the markets for artistic works: “harm to the market for the copyrighted work is more important than the purpose for which the copies are made.” The existence of products that can spin off entire paintings and novels in a few minutes, with little to no creative effort, strips artists of the incentive to create, undermining the whole point of copyright protection, which is to protect artists’ economic interests. Ultimately, however, the judge ruled for Meta because the plaintiffs failed to make the market-dilution argument and show how this use of their works harmed their respective artistic markets.
In addition to these big decisions, there were two more litigation updates at the intersection of AI and copyright.
First, Disney and NBCUniversal sued Midjourney for copyright infringement, claiming Midjourney is a “quintessential copyright free-rider and a bottomless pit of plagiarism” that ripped off the companies’ valuable IP when training its text-to-image product and will soon allow users to reproduce their copyrighted characters in an upcoming text-to-video product. The media companies’ complaint alleges direct copyright infringement—Midjourney is directly responsible for infringing their copyrights by training models on their IP and producing derivative outputs—and secondary infringement, on the theory that even if Midjourney argues that its subscribers are the ones infringing Disney and Universal’s copyrights, Midjourney should be vicariously liable because it is presumably in a position to control and prevent such unlawful uses. The companies seek statutory damages under the Copyright Act for each Disney or Universal work that was unlawfully infringed, which could result in astronomical amounts of money that Midjourney would have to pay if the companies are successful.
Second, Getty Images dropped its direct copyright infringement claims in its UK-based lawsuit against Stability AI, which developed and runs Stable Diffusion. The main reasons Getty dropped those claims are practical. Stability trained its models outside of the UK on Amazon Web Services-hosted servers, so the London-based court likely cannot resolve those claims. Further, Getty did not have adequate evidence about how Stability acquired Getty’s copyright-protected photographs, nor how those images were being reproduced or adapted in Stable Diffusion’s outputs. The case will now focus on Getty’s other claims, including trademark infringement and secondary copyright infringement. For example, Getty alleges that Stable Diffusion sometimes reproduces Getty’s watermarks on output images, potentially infringing Getty’s trademark protections. Getty also has a companion case pending in federal court in Delaware, which it has not yet amended despite the recent rulings in the fair use cases discussed above.
Meta sues nudify app 'Crush AI' for bypassing its ad review processes
On June 12, Meta sued the parent company behind Crush AI, an “AI nudifier” app that allows users to upload photos of people—most often women—and engage in image-based sexual abuse by removing their clothing through algorithmic means. Meta alleges that the company repeatedly ran tens of thousands of advertisements for the app across Meta platforms, despite Meta’s repeated attempts to block and remove such ads. Although Meta generally blocks the search terms “nudify,” “undress,” and “delete clothing,” the deluge of AI-generated ads has made it difficult, if not impossible, for platforms to stay on top of prohibited content. Crush AI’s parent company, for example, set up dozens of advertiser accounts and frequently changed domain names to evade or override Meta’s blocks; at one point, the company was also able to run a Facebook page promoting its services.
Meta’s legal action comes at a time of heightened concern over the proliferation of image-based sexual abuse, as well as traditional misinformation, through generative models, many of which are trained on exploitative and illegal images of abuse. Meta itself has been criticized for not doing enough to curb nudify apps from advertising, including allowing ads featuring explicit deepfake images of celebrities. Since the Take It Down Act was enacted in May, companies like Meta may be more likely to police their platforms to more effectively remove and thwart nudify apps and other emergent methods of facilitating abuse through ‘artificial intimacy’ systems.
Brazil’s Supreme Court declares influential law inspired by Section 230 unconstitutional
Social media companies like Meta are generally immune from lawsuits concerning their user-content moderation actions (or lack thereof), thanks to Section 230 in the US and similar laws globally. On June 26, however, Brazil’s Supreme Court declared Article 19 of the Marco Civil da Internet (MCI) unconstitutional, allowing individuals to sue social media companies for hosting unlawful content if they refuse to remove it after a victim requests removal directly from the platform. Originally, the Article required a court order before companies could be held civilly liable for user content. However, the ruling found that this framework did not protect key constitutional interests, noting a “partial legislative omission” by the lawmakers that passed the MCI and urging the Brazilian National Congress to consider new laws that address those gaps.
There are several important consequences of this ruling. Companies are now subject to liability for damages caused by user content in cases of crimes or other illegal activity, and this includes harms caused by fake accounts that are reported but not removed. Companies are also presumed liable when unlawful content involves paid ads, algorithmically-boosted content, or coordinated inauthentic behavior (such as bots) to promote such content.
The ruling also introduces a duty of care, where companies can be held liable for failing to immediately remove criminal content, including anti-democratic acts, terrorism, sexual crimes, and human trafficking. It requires companies operating in Brazil to establish a legal representative there, develop self-regulatory frameworks, and provide transparency reports on content moderation decisions. It also carves out an exception for certain crimes against honor, however, including defamation. This significant ruling has the potential to shape how powerful social media platforms approach content moderation in the future, as countries increasingly may shift away from granting tech companies such broad legal immunity when algorithmically amplified content causes real-world harm.
US Supreme Court upholds constitutionality of Texas age verification law
On June 27, the US Supreme Court held that a Texas law requiring commercial websites to verify visitors’ ages before showing them obscene content did not violate the First Amendment. Age verification laws primarily target pornographic websites and adult content, attempting to limit minors’ ability to access it online. As of early this year, six states have enacted age verification laws, while an additional 21 bills were introduced across the country. In this case, Free Speech Coalition v. Paxton, a coalition mostly made up of adult websites, challenged a Texas age verification law for burdening adults’ ability to access pornography by requiring an age verification process. They argued that such a requirement is unconstitutional because it would make adults less willing to exercise their First Amendment rights to access pornography online. Defending the law, Texas argued that states have the constitutional authority to restrict minors’ access to obscene content, regardless of the potential effect on other users.
In its decision, the majority found that pornographic content receives First Amendment protection for adults but not necessarily for children, and that states have historically been able to prevent minors, but not adults, from accessing obscene content. Next, it applied intermediate scrutiny, rejecting the more stringent strict scrutiny review on the ground that it would “call into question all age-verification requirements,” including in-person corollaries like checking a person’s ID before allowing them to purchase alcohol. “Obscenity,” they wrote, “is no exception” to this well-established practice. Finally, although age verification might make it harder for adults to view such content, “adults have no First Amendment right to avoid age verification,” and the law was specific to restricting only minors’ access to it.
The outcome signals that age-assurance laws are likely constitutional, so long as they do not overly burden adults’ ability to access legal adult content. For example, laws that require affidavits of age from biological parents, laws that require people to register with the government, and similarly burdensome requirements likely would not pass intermediate scrutiny review. The Court remained vague on what exactly constitutes an “excessive burden,” however. Additionally, it did not address potentially significant data privacy concerns for websites and third parties conducting age verification processes, assuming that they have “every incentive to assure users of their privacy”—despite growing proof that these companies have little incentive to do so unless forced by government regulation and oversight, as well as private litigation.
Bumble may be violating the GDPR with its 'AI Icebreakers' feature
On June 26, European digital rights group noyb (which stands for “none of your business”) filed a complaint with Austria’s data protection authority against Bumble over its “AI Icebreakers” feature. Noyb alleges that in December 2023, Bumble’s “Bumble for Friends” app introduced a new “Icebreakers” feature that gives users pre-drafted messages generated through OpenAI, which involves sharing user profile data with OpenAI to create messages personalized both for the user sending the message and the user receiving it.
Noyb alleges this feature violates the GDPR. Specifically, Bumble did not comply with the user consent requirements under Article 6(1)(a): the company only allows users to click “Okay” on a pop-up explaining the feature, which persistently reappears even when users click out of it—an example of deceptive or “dark pattern” design that erodes user choice. Further, noyb alleges Bumble does not have a legitimate interest under Article 6(1)(f) to process users’ personal data for the feature. In a statement, a noyb lawyer argues that Bumble is only sending such data to OpenAI because it is “so desperate to get in on the AI hype that it is trampling on users’ fundamental rights in the process.”
After FTC starts investigation, Media Matters sues to defend researchers’ rights
An earlier roundup summarized digital research center Media Matters for America’s back-and-forth legal struggle with Elon Musk’s X after its research report inspired an advertiser boycott in which X lost up to $75 million in ad revenue. In May, the Federal Trade Commission opened an investigation into Media Matters over whether the group illegally colluded with advertisers, seeking a wide range of internal documents, including communications with other tech accountability groups. Media Matters’ president described the FTC investigation as a form of intimidation, echoing a federal appeals court’s recent finding that “Media Matters is the target of a government campaign of retaliation” for its effective research on X. Meanwhile, FTC Commissioner Andrew Ferguson has previously claimed that “the risk of an advertiser boycott is a pretty serious risk to the free exchange of ideas,” justifying attempts to silence—or at least intimidate into silence—tech accountability researchers and similar groups.
On June 23, Media Matters filed a complaint in DC federal court arguing the FTC’s investigation is an attack on its First Amendment rights. The group asks the Court to help “stop” the “campaign of retribution” against them for researching and reporting on “matters of substantial public concern—including how X.com has enabled and profited from extremist content that proliferated after Elon Musk took over the platform formerly known as Twitter.” This is a well-studied phenomenon for which Media Matters seems to be bearing the brunt of the backlash, though many disinformation researchers were also targeted by Republicans in Congress in recent years. Still, Media Matters may be the canary in the coal mine, signaling that other advocacy and watchdog organizations that have been critical of President Trump or his allies could face similar retaliation.
Utah sues Snap for addictive features, including 'My AI' chatbot
On the final day of the month, Utah’s Attorney General and Division of Consumer Protection filed a complaint in state court against Snap, Inc. over Snapchat’s use of deceptive design features that allegedly addict children and teens to the platform. These features include ephemeral or disappearing messages, social comparison metrics like streaks that reward continual use, beauty filters, personalization algorithms that induce compulsive scrolling, and Snap Map, where users share their locations with peers and strangers.
The complaint also targets Snap’s “My AI” chatbot feature, which has prompted alarm from users and parents for a variety of harmful conduct, including collecting private user information like geolocation and providing harmful outputs to children, including advice on illicit sexual interactions with adults. This case merges earlier approaches to tech accountability, including the ongoing social media teen addiction and products liability multi-district litigation against Meta, as well as the ongoing products liability lawsuit against the chatbot platform CharacterAI and Google after the tragic death of an emotionally dependent teenager (note: TJLP is co-counsel in the CharacterAI lawsuit).