October 2025 Tech Litigation Roundup
Madeline Batt, Melodi Dinçer / Nov 14, 2025
Madeline Batt is the Legal Fellow for the Tech Justice Law Project. Melodi Dinçer is Policy Counsel for the Tech Justice Law Project.
October brought a range of tech-and-law challenges, from constitutional claims to copyright disputes and beyond.
The Tech Litigation Roundup gathers and briefly analyzes notable lawsuits and court decisions across a variety of tech-and-law issues. This month’s roundup covers the following cases:
- U.A.W. v. Dep’t of State (S.D.N.Y. Case No. 25-cv-8566) - Unions sued the Trump Administration for its use of social media surveillance to further repressive immigration enforcement.
- Reddit v. SerpApi (S.D.N.Y. Case No. 25-cv-8736) - Reddit sued data scrapers and Perplexity AI for surreptitiously copying content from its site for LLM training.
- Florida v. Roku (Florida Circuit Court of the 20th Judicial Circuit, Collier County Case No. 233525993) - Florida sued Roku for violating the Florida Digital Bill of Rights, including allegations that Roku unlawfully harvests children’s data.
- Students Engaged in Advancing Texas v. Paxton (W.D. Tex. Case No. 25-cv-1662) - Students challenged Texas’s new age verification law, claiming it violates the First Amendment.
- Computer & Communications Industry Association v. Paxton (W.D. Tex. Case No. 25-cv-1660) - An industry group also challenged Texas’s new age verification law, claiming it violates the First Amendment and the Commerce Clause.
- Starbuck v. Google LLC (Delaware Superior Court Case ID N25C-10-211) - Right-wing activist Robby Starbuck sued Google for allegedly defamatory statements generated by its chatbots.
- Dr. Rachael Kent v. Apple Inc. (United Kingdom Competition Appeal Tribunal Case No. 1403/7/7/21) - A UK tribunal found that Apple violated competition law in a pathbreaking mass action.
Related litigation is linked throughout the roundup. We also spotlight reports on OpenAI’s discovery tactics.
TJLP would love to hear from you on how this roundup could be most helpful in your work – please contact us with your thoughts.
Unions challenge federal dragnet social media surveillance
Three major unions challenged the US government’s mass social media surveillance practices, alleging that immigration agencies are using AI-powered monitoring to politicize immigration enforcement. The complaint describes how federal officials identify non-citizens who are, for example, critical of the Trump Administration, “American values,” or the recently assassinated right-wing activist Charlie Kirk, and then repress their speech by revoking visas, detaining and deporting non-citizens, and imposing other retaliatory immigration consequences.
The unions argue that the dragnet surveillance violates the First Amendment and the Administrative Procedure Act, harming both the unions themselves and the thousands of workers they represent. In the complaint, the plaintiff unions include surveys of their members showing that the Trump Administration’s surveillance practices have chilled union members’ expression, including their willingness to participate in union activity. The unions are suing to remedy the alleged harm to their members’ free speech rights, as well as their own loss of engagement due to widespread fear that public union participation could trigger retribution.
The case lands amid a broader alignment between major tech firms and President Trump, whose administration has leveraged emerging tech to fulfill campaign promises of mass deportation. This lawsuit represents a prominent challenge to the marriage of tech and Trumpism, as a ruling for the unions could limit how extensively the Trump administration can use AI in its crackdown on speech. It also underscores the risk AI poses when deployed by an increasingly authoritarian-leaning state.
Reddit targets scrapers and Perplexity in new LLM copyright challenge
The development and training of large language models (LLMs) like ChatGPT require extraordinary amounts of input data. From the beginning of the LLM boom, AI developers have faced legal challenges from copyright holders alleging that their works were illegally used for training. Despite the failure of some prior challenges (covered in earlier roundups), the steady stream of copyright suits continued this month, with authors filing new class actions against Apple and Salesforce.
Notably, a lawsuit filed by Reddit introduces a new twist on the now-familiar story of copyright-protected data being scraped and used to train an LLM. The complaint names not only Perplexity AI, the AI company that Reddit alleges uses its content, but also several data-scraping firms from which Perplexity allegedly purchases the Reddit-derived content.
As the complaint explains, data scrapers use automated tools to copy content from websites, enabling scraped data to flow into LLM training sets. Reddit’s massive archive, comprising millions of posts on a wide range of topics accumulated over more than 20 years, makes it an attractive target. However, Reddit does not permit its site to be scraped without specially negotiated agreements and otherwise deploys technological defenses to prevent scraping.
Three of the defendants in the suit are data-scraping companies that allegedly bypass these defenses. The final defendant, Perplexity AI, is accused of buying unlawfully scraped data, which Reddit identified using the digital analogue of a marked bill—content created by Reddit that it made visible only to bots scraping without permission.
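Reddit’s complaint does not reveal how its “marked bill” works, but the general technique, sometimes called canary or honeytoken content, can be sketched in principle: serve a unique, trackable token only to requests that appear to be scraping without permission, then watch for that token to surface downstream. The Python sketch below is purely illustrative; the names (`make_canary`, `render_post`) and the HMAC-based token scheme are invented for this example and are not Reddit’s actual implementation.

```python
# Illustrative sketch of "marked bill" canary content for detecting
# unauthorized scraping. All names and design choices here are assumptions,
# not a description of Reddit's real system.
import hashlib
import hmac
import secrets

# Server-side secret; a real deployment would store this persistently.
SECRET_KEY = secrets.token_bytes(32)

def make_canary(request_fingerprint: str) -> str:
    """Derive a unique, innocuous-looking token tied to one suspected scraper."""
    digest = hmac.new(SECRET_KEY, request_fingerprint.encode(), hashlib.sha256)
    return "ref-" + digest.hexdigest()[:12]

def render_post(body: str, is_licensed_partner: bool, request_fingerprint: str) -> str:
    """Serve clean content to licensed partners; salt everyone else's copy."""
    if is_licensed_partner:
        return body
    canary = make_canary(request_fingerprint)
    # The canary is embedded where an automated scraper will copy it
    # but a human reader is unlikely to notice it.
    return f"{body}\n<!-- {canary} -->"

def appears_in_output(model_output: str, request_fingerprint: str) -> bool:
    """If a known canary surfaces downstream, the content was scraped."""
    return make_canary(request_fingerprint) in model_output
```

If the token later turns up in a vendor’s dataset or a model’s output, it ties that content back to the unauthorized copy, much as a marked bill ties cash back to a specific robbery.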
Reddit's focus on AI middlemen highlights the broader supply chain behind LLM development. If Reddit prevails, the case could have significant implications for AI developers that rely on third-party scrapers to feed their models. In addition to its copyright claims, Reddit’s complaint includes tort claims of unfair competition, unjust enrichment, and civil conspiracy.
Child online safety legislation in the courts
An increasing number of states are enacting legislation to protect children online. October put two of those state laws at issue: an enforcement action in Florida and two legal challenges to a new age verification law in Texas.
In Florida, the Attorney General sued video streaming platform Roku for harvesting and selling children’s personal data, alleging that Roku’s data practices violate the Florida Digital Bill of Rights. The case will turn in part on what counts as “willful disregard” of a child’s age, potentially clarifying important details about what obligations platforms operating in Florida have to monitor for indications that users are minors. The Attorney General also challenges Roku’s failure to obtain consent before collecting sensitive personal data and its partnerships with data brokers as an alleged workaround to avoid complying with Florida’s requirements.
Meanwhile, in Texas, a sweeping new age verification law is on the defensive. The law requires anyone in Texas to verify their age before downloading any mobile application, and requires minors to obtain verified parental consent for each individual download. The law is facing First Amendment challenges from a coalition of student advocates and an industry association, both arguing that it unconstitutionally burdens minors’ and adults’ access to protected speech. The industry association also contends that the law violates the First Amendment by compelling app developers to state age ratings for their content and violates the Commerce Clause by excessively burdening interstate commerce.
The legality (and desirability) of age verification is an active question. The Supreme Court previously upheld a narrower Texas age verification requirement for pornographic websites, but the new law applies far more broadly. With child online safety laws proliferating across states, these cases may shape how similar laws develop elsewhere.
AI defamation action among suits targeting harassment and emotional harm caused by tech
Right-wing activist Robby Starbuck filed another defamation action, this time against Google, based on allegations that Google chatbots Bard, Gemini, and Gemma produced false statements about him over multiple years. Starbuck alleges that he repeatedly notified Google of the defamatory outputs, but Google took no action. He previously settled a similar suit against Meta and took on a role with the company as an “AI bias” advisor.
To prevail, Starbuck will need to show that Google was negligent or acted with actual malice in its chatbots’ generation of the false statements, a stumbling block for a prior AI defamation lawsuit against OpenAI. His repeated notifications to Google may bolster this case, as a court may be more willing to find that an AI developer had the required mental state for defamation when it knows a chatbot is producing false statements and fails to take any action to correct them. The case could have implications beyond chatbots, as AI also enables the creation of potentially defamatory deepfakes and even “replica” bots of other people.
Starbuck’s complaint quotes extensively from text generated by Google’s AI chatbots. While the core of his argument is that the AI produces false information, even inventing entire articles when asked for proof of its claims, Starbuck also cites statements by the bot as though they are admissions by Google itself. For example, he claims that “Gemini admitted” that the false statements resulted from a “deliberate, engineered bias designed to damage the reputation of individuals with whom Google executives disagree politically.” If a court were to accept Starbuck’s arguments, it could also significantly raise the stakes for companies attempting to respond to chatbot “hallucinations.”
This was one of several lawsuits this month targeting harassment and emotional harm from AI products. Also notable were a lawsuit against an AI app that “removes” clothes from photos, arising from the first reported case of AI-based nonconsensual intimate imagery in schools, and a lawsuit filed by the City of New York targeting social media addiction.
Apple loses first big tech case at UK’s Competition Appeal Tribunal and other global tech news
The UK Competition Appeal Tribunal (CAT) ruled that Apple abused its market dominance by charging excessive commissions to app developers on the App Store, with 50% of those overcharges passed on to consumers. The decision could expose Apple to hundreds of millions of pounds in damages.
This is the first mass lawsuit against a tech giant to reach trial under the UK’s 10-year-old class action-style regime created by the Consumer Rights Act 2015. According to Reuters, the new legal regime has seen limited success for consumers thus far. But the Apple decision suggests that the system does have teeth. Several other tech giants have cases pending before the CAT and will undoubtedly take note of this outcome. Google, Amazon (facing actions by consumers and third-party sellers), and Microsoft are all currently litigating before the tribunal.
Globally, it was an active month for tech litigation. Antitrust suits proceeded against Apple in China and Google in Sweden. Australia’s Competition and Consumer Commission brought deceptive marketing-style claims against Microsoft for alleged dark patterns pushing consumers into paying for AI tools. In India, the Karnataka high court upheld government authority to issue takedown orders to digital platforms (prompting free expression concerns). And in Europe, the European Commission issued preliminary findings that TikTok and Meta violated transparency obligations under the Digital Services Act by failing to grant researchers adequate access to public data. TikTok and Meta still have the chance to respond and rectify the alleged violation, but if the preliminary findings are ultimately confirmed, the companies could face fines of up to 6% of their total worldwide annual turnover.
Spotlight on OpenAI as a litigant
OpenAI appeared in multiple litigation stories this month.
First, a federal judge terminated a preservation order that required OpenAI to save all ChatGPT outputs that would otherwise be deleted. The order arose from a suit against OpenAI by the New York Times alleging that OpenAI violated copyright law by training its models on articles from the Times. OpenAI had argued that the preservation order was an “overreach” that endangered users’ privacy.
The new order only requires OpenAI to keep the ChatGPT outputs that have already been preserved and to preserve future outputs associated with ChatGPT accounts specifically identified by the New York Times. Previously preserved outputs originating in the EU, UK, and Switzerland were excluded from the order to avoid conflicts with foreign privacy laws. OpenAI has indicated that it continues to object to the preservation obligations and will seek to avoid handing over the information it is preserving. As more chatbot lawsuits reach discovery, the scope of required data preservation will continue to be a significant issue.
OpenAI has also made headlines this month for its own far-reaching discovery requests in a suit from parents alleging that ChatGPT contributed to their 16-year-old’s suicide. The company requested personal information about the boy’s funeral, including “all documents relating to memorial services or events in the honor of the decedent, including but not limited to any videos or photographs taken, or eulogies given.” (TJLP co-filed the complaint in this case, Raine v. OpenAI, but is not further involved in the matter; the filing was covered in last month’s roundup.)
Additionally, NBC News reported that seven nonprofit organizations that had signed open letters or petitions critical of OpenAI received expansive subpoenas from the company. The subpoenas were nominally associated with litigation between Elon Musk and OpenAI, but the nonprofits say they are unconnected to Musk. They were asked for broad information about their funding and donors, as well as “all documents and communications concerning the governance or organizational structure of OpenAI.” NBC News quoted Robert Weissman, co-president of consumer advocacy group Public Citizen, as saying that OpenAI’s tactics were “100% intended to intimidate.”