
December 2025 Tech Litigation Roundup

Madeline Batt, Melodi Dinçer / Jan 14, 2026

Madeline Batt is the Legal Fellow for the Tech Justice Law Project. Melodi Dinçer is Senior Staff Attorney for the Tech Justice Law Project.

Year Ends with a Bang, Not a Whimper: EU Issues First DSA Enforcement Fine, Prompting US Retaliation; OpenAI Sued by Victims of Murder-Suicide; Free Speech Challenge to ICEBlock Ban

The Tech Litigation Roundup gathers and briefly analyzes notable lawsuits and court decisions across a variety of tech-and-law issues. This month’s roundup features the following cases:

- The European Commission’s first-ever DSA fine against X, and the US retaliation that followed
- A Court of Justice of the European Union ruling that online marketplaces are responsible for misuse of personal data in third-party ads
- A new theory of chatbot liability in a lawsuit against OpenAI by the family of a murder-suicide victim
- ICEBlock’s First Amendment challenge to its removal from Apple’s app store
- Privacy lawsuits targeting surveillance in the home and online
- An injunction against Texas’s age verification law, alongside new state actions against Meta and TikTok

Related litigation is linked throughout the Roundup.

TJLP would love to hear from you on how this roundup could be most helpful in your work – please contact us with your thoughts.

Transatlantic tensions flare in court as X faces first-ever DSA fine

The European Commission imposed a long-awaited fine of €120M on X for violating the Digital Services Act (DSA). The fine is the first ever imposed under the DSA, Europe's comprehensive 2022 tech regulation.

The violations relate primarily to the DSA's transparency requirements. The Commission found that X failed to maintain an adequate ad transparency repository, that it restricted researcher access to data, and that its “verified” blue checkmark––which users can pay for without any verification process––unlawfully deceives users. X also remains under investigation for more politically sensitive violations of the DSA’s risk management obligations related to content moderation and hate speech. The company now has 60 working days to explain how it will address the deceptive use of blue checkmarks and 90 working days to submit an action plan resolving the noncompliance related to its ad repository and research data access. If the company does not comply, it may be subject to periodic penalties; the DSA authorizes penalty payments of up to 5% of average daily worldwide turnover per day of noncompliance.

Several US government officials responded with criticism and retaliatory measures. Figures such as Vice President J.D. Vance and Secretary of State Marco Rubio decried the enforcement action as an assault on Americans' digital expression, even as commentators argued that the transparency rules X violated are unrelated to speech –– or, alternatively, enhance expressive freedoms. Despite professing a commitment to free speech, US officials then announced that the federal government had barred several foreign experts who have expressed support for tech regulation from entering the country.

One of those experts, Imran Ahmed, the British head of the Center for Countering Digital Hate (CCDH), is a lawful permanent resident who lives in the US with his wife and child, both US citizens. The announcement put Ahmed at immediate risk of arrest, imprisonment, and deportation. At midnight on Christmas Day, a judge issued a temporary restraining order preventing the US government from detaining or deporting Ahmed; the preliminary decision protects him for 14 days. Litigation will continue as the Trump Administration leverages immigration enforcement to chill viewpoints it opposes––a practice that advocates argue is blatantly unconstitutional.

EU rules online marketplaces are responsible for misuse of personal data when hosting third-party ads

Despite the fraught foreign policy environment, the EU’s top court has continued to develop the region’s regulatory regime. In a case against the Romanian company Russmedia, the Court of Justice of the European Union ruled that online marketplaces are responsible for the misuse of personal data in advertisements posted on their sites. Reasoning that online marketplaces are “data controllers” under the EU’s General Data Protection Regulation (GDPR), the Court held that marketplaces must take measures to preemptively identify advertisements containing sensitive personal data, verify the advertisers’ identities to confirm the personal data is their own, and refuse to publish advertisements containing third parties’ sensitive data unless advertisers can prove they obtained explicit consent.

Reactions to the decision have varied. Some commentators celebrated the expanded user protections, while others said the EU “upend[ed] the Internet” to the detriment of free expression. Speaking on Tech Policy Press’s podcast, Joris van Hoboken of the DSA Observatory suggested that the decision has some flaws but is unlikely to be interpreted as broadly as its staunchest critics fear. Nonetheless, the case––binding on all EU member states––further highlights the divide between the US and Europe on tech regulation. In the US, Congress has declined opportunities to enact stronger consumer data protections, and Section 230 of the Communications Decency Act, which prevents platforms from being held liable as publishers for content they host, could prevent a plaintiff from suing a marketplace over an advertisement that used her personal information without consent.

Lawsuit from family of murder-suicide victim raises new theory of chatbot liability

In our monthly roundups, we have covered the growing tide of lawsuits against AI chatbot companies for harm to users, from delusional disorders to death by suicide. This month, OpenAI faces its first lawsuit based on a homicide––filed not by a ChatGPT user, but by the estate of the user's victim.

The lawsuit alleges that Stein-Erik Soelberg experienced paranoid delusions that ChatGPT exacerbated and directed at his mother and other targets. According to the complaint, the product convinced Soelberg that his 83-year-old mother, Suzanne Adams, was surveilling him and was likely involved in a plot to assassinate him. Soelberg then beat and strangled his mother to death and fatally stabbed himself. Adams’s estate now argues that ChatGPT 4o was defectively and negligently designed in a manner that predictably pushed a vulnerable man into crisis, with tragic results.

The claims in the lawsuit––strict product liability, negligence, and unfair business practices––are familiar from past chatbot actions. What sets this case apart is that the victim did not herself use ChatGPT. Her estate must therefore show it was foreseeable that ChatGPT's alleged defects would cause harm not only to its users but also to bystanders like Adams as users' delusions moved offline. With reports of chatbot-induced offline violence repeatedly making headlines, lawsuits filed by the victims of ChatGPT users may prove to be the next frontier of chatbot tort litigation. In addition to the case by Adams’s estate, OpenAI faces a lawsuit by Soelberg’s own estate, based on his death by suicide after the killing. That suit advances the novel theory that someone who commits an offline crime may also be a victim of ChatGPT-based manipulation.

Social media and dating app companies also faced lawsuits involving offline harm this month. Meta was sued by two families who argue that Instagram's design facilitated the sextortion schemes that drove their minor sons to suicide. Their wrongful death action alleges that Instagram recommended the accounts of sexual predators to children and failed to implement safeguards against abuse, such as defaulting minors into private accounts. Meanwhile, the dating app company Match Group is defending strict product liability and negligence claims by women who allege that the company allowed known serial rapists to remain on its platforms. Beyond the company's alleged failure to respond to reports of sexual violence, the complaint points to platform features that facilitate repeated sexual violence: the apps provide no way to report a rapist who unmatches their victim after an assault, for example, and product testing suggests that banned accounts can easily rejoin the apps.

ICEBlock sues over its removal from app store

In October, US Attorney General Pam Bondi told Fox News she had demanded the removal of ICEBlock, an app that allows users to report the location of immigration enforcement agents, from Apple's app store. Apple complied, prompting criticism that the company was facilitating state repression. ICEBlock’s founder, Joshua Aaron, filed a suit against Trump Administration officials in December, arguing that the takedown violated his First Amendment rights.

Prior to its removal, ICEBlock provided crowdsourced alerts that enabled users to document immigration enforcement or protest the Trump Administration's immigration practices, including prolonged detention, inhumane conditions, the arrest and imprisonment of lawful permanent residents, racial profiling, deportation to war zones, and torture. In response, Attorney General Bondi and others in the administration argued that disclosing the location of agents put their safety at risk. Officials used the same argument to defend ICE officers’ practice of wearing masks and refusing to identify themselves during arrests.

Apple has previously removed apps at the request of governments in China and Russia. Prior to Trump's inauguration, it had not done so at the behest of the US government. The case could clarify whether and when Americans' digital expression is protected from suppression by tech companies acting at the government's direction. This question has heightened urgency in light of the cozy relationship between the Trump Administration and the US tech industry.

To prevail, ICEBlock’s Aaron will need to distinguish his case from Murthy v. Missouri. In Murthy, the Supreme Court held that users who alleged that Biden Administration officials coerced social media platforms into taking down their posts containing COVID-19 disinformation lacked standing to sue the government. Bondi’s boast to Fox News may provide the distinction Aaron needs. In Murthy, the plaintiffs could not show that any specific social media post was blocked due to government interference; here, the Trump Administration has openly claimed credit for ICEBlock’s takedown.

Privacy lawsuits target surveillance in the home and online

Several lawsuits this December targeted technologies that allegedly collect and exploit data about people in their homes. San Francisco Tenants Union and three individual tenants sued the major landlord Equity Residential (EQR) and several subsidiaries, along with the company SmartRent, which they allege creates digital surveillance technology installed by landlords to monitor tenants. According to the complaint, so-called "Smart Home" features such as keyless entry systems, Wi-Fi-connected thermostats, and location-tracking smartphone applications provide SmartRent with a detailed picture of tenants' movements. EQR allegedly exploits this data in combination with other tenant-tracking, including Internet browsing data collection, to analyze tenants' behavior for the company's profit.

The lawsuit argues that EQR and SmartRent are violating the right to privacy enshrined in the California state constitution. Unlike federal privacy rights, this California-based right applies to private companies as well as the government. As the complaint details, both California and federal courts have recognized that individuals' privacy interests in their homes are especially strong. In addition to the state constitutional claim, the causes of action include the privacy tort of intrusion upon seclusion and breach of the covenant of quiet enjoyment, an implied duty owed by landlords to tenants under modern landlord-tenant law.

State attorneys general in Texas and Nebraska also sued technology companies that they allege are engaged in in-home surveillance of state residents. In Texas, Attorney General Ken Paxton sued several smart TV manufacturers for allegedly using Automatic Content Recognition technology to surveil everything consumers watch and exploiting the data for targeted advertising without informed consent. In Nebraska, Attorney General Mike Hilgers sued Resideo, a company that sells home security systems, for marketing video surveillance cameras as "secure" despite alleged security vulnerabilities in Chinese-manufactured equipment. Both lawsuits rely on state consumer protection laws against deceptive and unfair business practices.

A lawsuit in Illinois challenges a different kind of surveillance: the collection of biometric data by AI assistants on video calls. According to the complaint, Fireflies.ai––one of several companies that make AI tools for note-taking on Zoom calls and other virtual meeting software––extracts and retains identifiable voice data for all meeting participants. The company allegedly collects biometric data even from meeting participants who provide no consent of any kind to the AI tool. Plaintiffs argue that this violates the Illinois Biometric Information Privacy Act, a state law that protects consumers against misuse of their biometric data.

Texas age verification law enjoined as social media litigation continues

Back in October, Texas was sued after imposing age verification and parental consent requirements on access to digital applications. This month, a court enjoined the legislation, concluding that it violated the First Amendment. Because the law regulates speech based on its content, the court subjected it to strict scrutiny––a high bar the law could not meet. Requiring age verification and parental consent to download nearly all applications is “so exceedingly overbroad,” the court held, that Texas could show neither a compelling state interest nor narrow tailoring, the two showings required to survive strict scrutiny. The court noted that the sweeping law would fail the less stringent constitutional test of intermediate scrutiny as well.

As Texas’s regulatory push for child online safety hit a stumbling block, two attorneys general filed new lawsuits to address social media harm. The US Virgin Islands sued Meta for deceptive and unconscionable trade practices, alleging that Meta intentionally profited by fostering youth addiction to its platforms and facilitating scams. According to the complaint, Meta makes billions of dollars by connecting fraudsters to the users most likely to fall for scams; allegedly, the company even earns extra money on scam ads by charging a “premium” for advertising it identifies as likely fraudulent.

Meanwhile, Hawaii sued TikTok for deceptive and unfair practices associated with allegedly addictive design features and harm to children. The lawsuit adds to the plethora of state AG actions against platforms over social media addiction and minors’ mental health; a similar action by Massachusetts against Meta was argued before the state’s highest court this month. Reports from the hearing suggest that the justices appeared receptive to the state’s contention that addictive design features are not protected by the First Amendment or Section 230 (TJLP co-filed an amicus brief supporting Massachusetts’s position).

