May 2025 Tech Litigation Roundup
Melodi Dinçer / Jun 10, 2025
Melodi Dinçer is Policy Counsel for the Tech Justice Law Project.

The Character.ai app is seen in the App Store on an iPhone screen. Shutterstock
May’s Legal Landscape: Defective Chatbots, Defamatory ChatGPT, Discriminatory Algorithms – And Data Privacy Cases
This roundup gathers and briefly analyzes tech-related cases across a variety of legal issues, focusing this month on cases involving artificial intelligence. The Tech Justice Law Project (TJLP) tracks these and other tech-related cases in US federal, state, and international courts in this regularly updated litigation tracker.
If you would like to learn more about new cases and developments directly from the people involved, join TJLP for our tech litigation webinar series! In June, we will explore surveillance pricing, from algorithmic pricing and rent-fixing to algorithmic wage systems. The conversation will include firsthand insights from advocates drafting regulations that are winding their way through state legislatures this term, as well as litigators challenging these opaque business practices in court. Please RSVP here to receive further information about the event.
Defective Chatbots Case Against Google & Character.AI Survives Dismissal
On May 21, a federal judge in Florida denied Google, Character.AI, and Character.AI’s founders’ bid to dismiss a lawsuit alleging that the chatbot platform was a defective product, among other novel legal claims. Tech Justice Law Project and the Social Media Victims Law Center filed the lawsuit in October 2024 on behalf of the mother of a teenager, Sewell Setzer III, who took his own life after developing an emotional dependency on Character.AI and allegedly being prompted to do so by a chatbot modeled after a Game of Thrones character. Besides raising product liability claims, the complaint also alleged negligent design that breached the company’s duty not to harm minor users like Sewell, unfair and deceptive trade practices based on chatbots insisting they were licensed mental health professionals, and unjust enrichment from retaining Sewell’s data.
In response, all four defendants filed motions to dismiss the lawsuit. Their main argument revolved around the First Amendment: that Character.AI’s users have “listeners’ rights” to receive and engage with information from their interactions with chatbots, rights that a finding of liability would jeopardize if the court ordered remedial changes to the platform.
In her decision, the judge rejected the defendants’ First Amendment argument, allowing all but one of the claims to proceed. She found that even though users may have some First Amendment interests at play, their right to receive speech is conditioned on the speaker intending to speak. Here, she decided that the complex math powering Character.AI chatbots does not add up to an intent to express a particular message to an audience, placing the outputs outside the First Amendment’s protections. Going further, she questioned whether algorithmic outputs could ever be protected under the First Amendment without some human intervention, quoting in full Supreme Court Justice Amy Coney Barrett’s skeptical view from her Moody concurrence.
The judge also ruled that the LLM-powered chatbot platform is a product subject to product liability law. This finding is notable, as courts are increasingly rejecting the traditional division between “products” and “services” that doomed similar arguments in the past. Previously, courts treated tech products as closer to “services,” like serving someone at a restaurant, than to tangible products, like a microwave oven, which made it nearly impossible to claim that a tech product was defectively designed. Here, the product liability claims can proceed because Character.AI was considered a product despite not being a tangible, material object. The judge also refused to release Google and Character.AI’s two co-founders from the lawsuit at this stage, signaling that when a tech product harms people in the real world, its founders may be held personally liable.
This case, the first of its kind, can now move on to the discovery phase, where the plaintiff’s lawyers can request information and documents from the defendants about the design, development, and deployment of Character.AI, as well as uncover the extent to which Google was involved in and benefited from its “acquihire” strategy. As more large tech companies buy up generative AI startups, this case suggests they may not be able to avoid legal accountability for harmful tech products developed by others rather than in-house.
In related news, this case has spurred regulatory scrutiny of Google’s relationship with Character.AI’s founders, who were lead generative AI developers at Google before leaving to start the chatbot company. On May 22, the Department of Justice opened an antitrust investigation into Google over whether it structured its Character.AI deal as an acquihire to avoid formal merger scrutiny. In August 2024, Character.AI entered a non-exclusive licensing deal with Google for its large language models (LLMs), and Google rehired both co-founders in a deal reportedly worth $2.7 billion.
Clearview AI Facial Recognition Lawsuit Survives Anti-SLAPP Appeal
The day after the Character.AI decision, on May 22, a California state appeals court panel rejected facial recognition company Clearview AI’s bid to dodge a lawsuit filed in 2021 by Just Futures Law on behalf of a group of California activists, Mijente, and NorCal Resist, who claim the company’s non-consensual use of their facial images to build its surveillance product violates their constitutional privacy rights, chills their First Amendment rights, and mass-appropriates their identities without compensation.
Back in November 2022, a California state trial court denied Clearview’s early attempt to get the case thrown out. One tactic Clearview used was to invoke California’s anti-SLAPP law, which was designed to prevent individuals and companies from weaponizing the courts to intimidate and silence others for exercising their First Amendment rights. Most often, this involves corporations suing journalists on groundless defamation claims just to tie them up in expensive litigation. But here, Clearview claimed that the activists were suing to silence the company for selling its facial recognition product to law enforcement (the only market it may sell to under an earlier settlement). The trial court judge did not buy Clearview’s contorted argument, analogizing selling facial recognition access to selling cops “cars, uniforms, and computer systems.” Clearview appealed the decision.
On appeal, the panel affirmed the lower court’s decision. In doing so, the panel stressed Clearview’s for-profit nature and its commercial intent in selling its product to law enforcement, highlighting that, unlike a group of protesters or a journalist, Clearview was not motivated by serving the public’s interest. The panel also noted that Clearview “delivers its search results in confidence, expressly prohibiting any public disclosure of them,” which also cut against finding its product was connected to an issue of public concern, as required by the anti-SLAPP law. While the app may have some social utility insofar as it is used to solve crimes, the panel was wary of stretching anti-SLAPP protections as far as shielding “every informational input or datum that facilitates routine policework” from legal challenge.
Now the case returns to the trial court, where the parties will move on to the next stage of the lawsuit: gathering further information about Clearview’s facial recognition app and law enforcement’s use of it through discovery.
Mixed Decisions on AI Liability – Generative or Not
On May 23, a federal judge in Delaware granted a request to appeal his own earlier decision in a case of first impression concerning alleged AI copyright violations and fair use. In the case, the legal research platform provider Thomson Reuters sued a competitor, Ross Intelligence, alleging that Ross infringed its copyrights when it used a third party to copy Thomson Reuters’ Westlaw headnotes to train its new AI legal-research search engine. Headnotes are a Westlaw feature that summarizes the key conclusions of a judge’s decision and organizes those holdings using a “key number system,” both of which Thomson Reuters had copyrighted. In 2023, the judge denied Thomson Reuters’ motions for summary judgment, finding the company had not shown it was entitled to win on the copyright infringement and fair use claims without a trial.
But on a second pass, after receiving more information about each of the thousands of headnotes Ross allegedly infringed, the same judge revisited his earlier conclusions. In this more recent decision, he granted partial summary judgment for Thomson Reuters, denied Ross’s fair use defense as to 2,243 Westlaw headnotes, and upheld the validity of Westlaw’s copyright. On fair use, the judge found Ross failed on key factors, specifically that Ross’s use harmed the market for Thomson Reuters’ headnotes. The case will next go to an appeals court panel for further analysis. Although it does not concern generative AI products, this is an early and influential case to watch for those interested in the thorny issue of training machine learning systems on copyrighted materials.
On May 19, a Georgia trial court judge granted summary judgment for OpenAI in an early defamation case against generative AI, in which a prominent radio show host, Mark Walters, alleged that ChatGPT outputs defamed him by falsely identifying him as an accused embezzler while incorrectly summarizing a different lawsuit. The judge cited the numerous disclaimers presented throughout the process of using ChatGPT, including in its terms of service, warning that outputs are probabilistic and may reference real people inaccurately, and reasoned that the responsibility falls on users to assess how accurate those outputs are, not on OpenAI to prevent inaccurate outputs in the first place. The judge found that the law thus supports OpenAI and the case must end there, since Walters failed to show that ChatGPT’s outputs about him were negligent or made with “actual malice,” a legal standard meant to protect criticism of public figures that is difficult to overcome.
International Developments in Data Privacy Cases
On May 5, a Kenyan High Court ordered the Worldcoin Foundation to permanently delete, within one week of the order and under the supervision of the Data Protection Commissioner, all biometric data it had collected from Kenyans, including iris scans and facial data. The court found that Worldcoin had failed to undertake the Data Protection Impact Assessments required under Kenya’s 2019 Data Protection Act.
Sam Altman-backed Worldcoin is a cryptocurrency-meets-digital-identity service that uses orbs to scan people’s irises in exchange for a digital ID and some cryptocurrency. In 2023, thousands of Kenyans submitted to iris and facial scanning in exchange for 7,000 Kenyan Shillings (around US$50). So far, the startup claims its orbs have verified 12 million people from over 100 countries on its Ethereum-based World Chain. But at least for the thousands of Kenyans potentially tricked into giving up their sensitive biometric data, Worldcoin failed to comply with the country’s data privacy laws before collecting the data and must now permanently destroy it after the fact.
Moving over to Europe, on May 14, a Belgian Court of Appeal ruled that the consent framework used by Google, Microsoft, Amazon, X, and the broader tracking-based advertising industry violates data protection law. The landmark decision follows from an enforcement action by the Belgian Data Protection Authority, which responded to complaints brought by the director of Enforce at the Irish Council for Civil Liberties, among other experts. The complaints and decisions go to the core of today’s digital advertising architecture: the real-time bidding system that places ads on sites automatically lacks the security controls needed to track each datapoint from collection through ad bidding, so no one can know what happens to user data, and it is therefore impossible to provide the information that must accompany a consent request under the GDPR. The decision applies immediately across all of Europe.
In other data protection news, on May 20, a German administrative court ruled in a German data protection authority’s favor, finding that website operators must offer a clearly visible “reject all” button on the first level of a cookie consent banner, where most often only an “accept all” button appears. The court found that by burying the “reject all” option, operators do not obtain valid user consent under the GDPR and the Telecommunications Digital Services Data Protection Act.
Other Developments
- Another School District Data Breach Lawsuit: On May 6, 2025, Tennessee’s largest school district, Memphis-Shelby County Schools, filed suit against PowerSchool just one day before the company publicly confirmed that multiple school districts were facing extortion threats over stolen student and teacher information. The case follows a broader trend of students and their families suing edtech companies over potentially unlawful, extensive data collection practices. PowerSchool is the largest cloud-based education software provider in the US, serving over 75% of students across North America. Following a massive data breach in 2024, it paid a hacker a ransom in exchange for a promise to delete the stolen data. As schools continue to face extortion, however, they have joined a multi-district litigation effort to hold PowerSchool accountable. Despite suing PowerSchool, the Memphis-Shelby County School Board unanimously voted to renew PowerSchool’s contract to continue providing information systems to the county. (Memphis-Shelby Cty. Schools v. PowerSchool Holdings, Inc., US District Court for the Southern District of California, No. 3:25-cv-01153-BEN-MSB).
- Apple’s Siri Snooping Settlement Payouts Begin: In January 2025, Apple entered a $95 million settlement concluding a 2019 class action lawsuit alleging that the company violated its iPhone, iPad, and Mac users’ privacy by recording conversations through Siri without consent and passing the recordings to third-party contractors, including advertisers who used the data to target users with ads. Because Apple settled the lawsuit, it has not admitted wrongdoing. The company must now begin paying out the settlement to affected users, however, and that could be a large group: anyone who owned a Siri-enabled device between September 17, 2014 and December 31, 2024 can opt in if they accidentally activated Siri on each device they seek payment for and those activations occurred during a conversation that was meant to be private. (Lopez et al. v. Apple, Inc., U.S. District Court for the Northern District of California, No. 19-cv-04577-JSW).
- Algorithmic Discrimination Class Action: On May 16, a federal judge allowed a proposed collective action to proceed against Workday, a dominant HR platform. The case, filed in 2023, alleges that Workday’s AI systems algorithmically discriminated against the lead plaintiff and four others, all over the age of forty, who applied for hundreds of jobs through the platform and were rejected almost every time without an interview. The case can now proceed as a collective action, similar to a class action, which allows the lead plaintiff to notify others over the age of forty and give them an opportunity to formally join the suit if they were similarly harmed. At an earlier stage, the same judge had ruled that Workday may face direct liability for employment discrimination despite not being a direct employer, under an “agency” theory of liability in which a third party can be held responsible for another’s decisions. This means that AI vendors such as Workday and others cannot evade lawsuits when their products cause unlawful discrimination. (Mobley v. Workday, Inc., US District Court for the Northern District of California, No. 3:23-cv-00770).
- DC Circuit Rejects Texas AG’s Retaliation Against Media Matters: The Media Matters / X legal fight was covered in a previous roundup, but on May 30, a DC Circuit panel upheld a preliminary injunction blocking Texas AG Ken Paxton from subpoenaing the media researchers after they wrote an article criticizing X. The panel found that Media Matters had a viable free speech claim against Paxton, who opened a retaliatory investigation over their critical reporting. (Media Matters v. Paxton, US Court of Appeals for the D.C. Circuit, No. 24-7059).