July 2025 Tech Litigation Roundup
Melodi Dinçer / Aug 8, 2025
Melodi Dinçer is Policy Counsel for the Tech Justice Law Project.

Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0
July’s Litigation Landscape: An important ruling on generative AI voice cloning, a mixed bag of Section 230 decisions, and some “closure” on the Cambridge Analytica data scandal
This Roundup gathers and briefly analyzes notable lawsuits and court decisions across a variety of tech-and-law issues. The Tech Justice Law Project (TJLP) tracks these and other tech-related cases in US federal, state, and international courts in this regularly updated litigation tracker.
If you would like to learn more about new cases and developments directly from the people involved, TJLP hosts a regular tech litigation webinar series! Keep an eye out for future webinar announcements at the beginning of these roundup articles. Please fill out this short survey if you would like to stay updated on the webinars, have ideas for topics we should cover, or have any other suggestions for this resource. You can keep up with TJLP’s work also by signing up for our mailing list here (we will not spam you—we promise!).
This month’s Roundup covers updates in the following cases:
- Lehrman et al. v. Lovo, Inc. (S.D.N.Y. Case No. 1:24-cv-03770-JPO) – A federal court allowed a lawsuit brought by voice actors to proceed against a company’s generative AI voice-over product, dismissing their trademark and copyright claims while upholding their consumer protection and novel “digital replica” claims.
- New Hampshire v. TikTok, Inc. (New Hampshire Superior Court, Merrimack County Case No. 217-2024-CV-0039) – A New Hampshire trial court allowed a lawsuit brought by the state AG to proceed against TikTok on defective design claims, finding Section 230 did not bar them, dismissing only one claim alleging TikTok deceived users about the geographic origin of its products (i.e., China).
- Patterson v. Meta Platforms, Inc. (N.Y. App. Div. 4th Dept. 2025 Slip. Op. 04447) – A New York appeals court, in a 3-2 decision, held that Section 230 blocks products liability claims concerning social media platforms’ content-recommendation algorithms, which allegedly amplified white supremacist content to the 2022 Buffalo shooter, who shot 13 people, killing 10.
- Doe v. Roblox Corp. and Discord Inc. (N.D. Cal. Case No. 3:25-cv-05753-LB) – A new suit was filed against Roblox and Discord, raising several products liability claims after the platforms allegedly facilitated a teenager’s sexual assault by a man posing as a teenager on Roblox.
- Frank et al. v. Google LLC (Uganda Personal Data Protection Office [PDPO] Complaint No: 08/11/24/6683) – The Uganda PDPO issued a decision finding Google violated Uganda’s Data Protection and Privacy Act by failing to register as a data controller or processor and transferring Ugandan personal data abroad without meeting the law’s required safeguards.
- Kentucky v. PDD Holdings, Inc. F/K/A Pinduoduo Inc et al. (Commonwealth of Kentucky, Woodford Circuit Court Case No. 25-CI-00232) – The Kentucky AG filed a lawsuit against Chinese online retailer Temu for various alleged data theft and consumer fraud violations.
- Custom Communications, Inc. d/b/a Custom Alarm v. FTC (8th Cir. No. 24-3137) – The US Court of Appeals for the Eighth Circuit vacated the FTC’s Click to Cancel Rule after finding the rulemaking process was procedurally deficient.
- In re: Facebook, Inc. Securities Litigation (N.D. Cal. Case No. 5:18-cv-01725-EJD) – Meta shareholders settled their lawsuit against Meta, Mark Zuckerberg, Sheryl Sandberg, and Dave Wehner for $8 billion, one day after the trial started (and before several high-profile figures were set to testify, including Peter Thiel and Marc Andreessen).
As copyright violation claims miss, a federal court allows novel “digital replica” claim to proceed against generative AI company
Last month, two federal courts in San Francisco rejected artists’ claims that copying and using their artworks to train generative AI models, without permission or compensation, violated their copyrights. This month, another federal judge reached a similar conclusion on copyright law but allowed a comparable lawsuit to continue on a novel claim: that cloning someone’s voice and selling it through a generative AI product without permission or compensation may violate their civil rights.
In Lehrman v. Lovo, a group of voice actors sued AI startup Lovo after the company hired them to record their voices for what they were told would be solely internal and academic research. The plaintiffs allege that the company then used the recordings to train its generative AI voice generator product, creating exact replicas of their voices that Lovo then sold to its customers for commercial use, including in podcasts and advertising. The voice actors’ suit raises several claims, including violations of federal copyright and trademark law, violations of New York consumer protection law, breach of contract, and misappropriation of identity. They contend that the voice replicas were identical to their real voices, which are central to their profession as voice actors, and that they neither consented to such use nor received compensation.
Lovo filed a motion to dismiss the case. On July 10, the federal judge granted Lovo’s motion concerning the actors’ federal trademark and copyright law claims. However, the judge denied the motion for their other claims, including a novel claim under the New York Civil Rights Law that Lovo misappropriated their voices without consent or compensation.
Several states protect individuals’ names, voices, images, and other aspects of identity from being taken by others and used for commercial purposes. These laws, often referred to as a right of publicity or misappropriation of likeness, originated in the early 20th century, when the spread of photography made the unauthorized use of people’s images more commonplace. As digital technologies, including generative AI products, make it easier to replicate someone’s image or identity, plaintiffs have dusted off these century-old legal protections, including to combat nonconsensual digital “replicas.” The New York law at issue here was updated in 2021 to explicitly cover digital replicas of deceased people.
In this case, Lovo argued that because the 2021 amendment applies to deceased people, the law does not also protect living people. The Court rejected this argument for a few notable reasons. First, in passing the amendment, the New York Legislature indicated that the change was needed to cover deceased people precisely because the original law already protected living people from digital replicas. Next, the Court reasoned that a digital replica is not limited to a visual depiction of someone’s image but also includes their voice. Finally, and most importantly, the Court found that the law must be flexible enough to apply to new technologies. The judge noted that generative AI-facilitated exploitation could be “even more pernicious, because, allegedly, a functioning voice clone capable of saying anything, forever, can be created using a small snippet of original audio.” And since Lovo allegedly used the actors’ voice recordings for profit, the Court found that the company could not rely on First Amendment protections for newsworthy uses or uses in the public interest.
This decision also allowed the actors’ consumer protection claims to proceed, making this case one to watch for those following AI-facilitated exploitation of identity and the potential legal remedies available.
State courts grapple with Section 230 in tech harm cases in New Hampshire and New York, as a new case is filed in California
This month, state courts in New Hampshire and New York reached opposite conclusions when deciding whether Section 230 of the Communications Decency Act barred claims seeking liability for distinct tech harms, including alleged addictive design features in the New Hampshire case and algorithms amplifying white supremacist content in New York. The differing outcomes point to general confusion surrounding Section 230’s applicability to cases where harms arise from tech products, but this uncertainty hasn’t stopped advocates from filing new suits when they believe such products cause offline harm, with a new case filed early this month in California federal court.
On July 8, a New Hampshire trial court held that Section 230 did not protect TikTok from the state attorney general’s (AG) products liability lawsuit. The AG alleged that TikTok designed its popular app in a defective manner, inducing teenagers to develop dependencies on the app to their social, mental, and physical detriment. The complaint identified several addictive features as harmful to teens, including recommender systems based on personalized algorithms, infinite scroll, push notifications, visual filters and effects, and the currency system used in TikTok LIVE, among others. Several of these design features have been challenged by other state AGs in similar lawsuits where courts are grappling with Section 230, most notably the ongoing multistate case against Meta currently on appeal before the Ninth Circuit.
The trial court affirmed that TikTok is a product that can be subject to products liability claims, despite the service’s digital nature. This is a legally significant distinction because products liability traditionally does not apply to services like servicing a car or waiting a table, but only to physical products like cigarettes and airbags. Although tech companies have long argued that their products are merely software services, governments and courts are increasingly rejecting this view and recognizing that digital products are, in fact, products.
The court also rejected TikTok’s Section 230 and First Amendment arguments, which are common defenses that, if successful, can immunize tech companies from liability. It found that TikTok had a duty to design a “reasonably safe product,” which the State alleges it failed to meet. That duty “is independent” from TikTok’s “role as a publisher of third-party content,” so Section 230 did not extend to cover the design of TikTok’s product features. The court also found that the First Amendment did not shield TikTok from potential liability, noting that the “thrust” of the AG’s claims was the alleged harm caused by the addictive features themselves “not the third-party content disseminated” through them.
Several weeks later, on July 25, an intermediate appeals court in New York issued a significant ruling on the other side of the Section 230 debate. This case was brought by survivors and family members of victims of the racially motivated 2022 mass shooting at a grocery store in a Black neighborhood in Buffalo by a white supremacist. The defendants include the shooter’s parents and others, but the appeal focused on claims brought against the “social media defendants”: Meta, Instagram, Snap, Alphabet, Google, YouTube, Discord, Reddit, Twitch, Amazon, and 4chan.
The lawsuit raised several legal claims, including products liability claims based on defective design, specifically alleging that these platforms’ content-recommendation algorithms “fed a steady stream of racist and violent content” to the shooter and addicted him to their platforms, leading to his isolation and further radicalization. Although a few of the platforms like 4chan, Discord, Twitch, and Snap do not use such algorithms, the plaintiffs still claimed their business models revolve around addicting users, including the shooter in this case. The companies argued that Section 230 shielded them from liability for third-party users’ racist content, which was what radicalized the shooter. The trial court disagreed, finding the plaintiffs’ claims concerned the companies’ own conduct in designing their platforms’ features without regard to third-party content.
In a 3-2 ruling, the appeals court overturned the trial court ruling and held that Section 230 required dismissal of the case. The majority concluded that the central thrust of these claims concerned the companies’ role as publishers of third-party racist content, and the use of content-recommendation algorithms by some defendants did not alter this status as mere publishers, since such algorithms are “simply tools” to facilitate publishing functions. The court further stated that, even if Section 230 did not shield the companies for this content in this instance, the conduct was likely protected by the First Amendment under the Supreme Court’s nonbinding dicta in Moody that determining which third-party content to post and how to arrange it for users’ viewing “is expressive activity on its own.” Ultimately, the majority observed that the “interplay” between Section 230 and the First Amendment in such cases “gives rise to a ‘Heads I Win, Tails You Lose’ proposition in favor” of the tech company defendants.
The dissenting opinion from the appeals court begins with a direct quote from the shooter: “[W]hy do I always have trouble putting my phone down at night? … It’s 2 in the morning…I should be sleeping…I’m a literal addict to my phone[.] I can’t stop cons[u]ming.” The dissenters emphasized that the plaintiffs’ claims go beyond the extremist third-party content the shooter consumed on these platforms to the “inherent nature” of the platforms’ design, which they argue can “prey” on the developmental insecurities of teens like the shooter and drive them to constantly consume ever more “engaging” content, including hateful content. They also expressed concern that the majority’s “vast expansion of First Amendment jurisprudence” could effectively preclude state tort liability in contexts such as failing to warn about potential product risks or failing to obtain informed consent in medical malpractice, as defendants in those situations could claim an absolute First Amendment defense. The dissenters cautioned that such a broad interpretation could create significant risks to public safety, on top of the dangers of radicalizing online content contributing to offline violence.
These conflicting decisions involve similar claims concerning tech products alleged to be designed in harmful ways, making it difficult for litigants to predict if and how to bring similar claims. This doctrinal uncertainty has not stopped litigants from trying anew, however. On July 9, a lawsuit was filed in California federal court raising several products liability claims concerning Roblox and Discord. The case involves a teenage girl who alleges she initially met a man posing as a teenage boy on Roblox, developed an online relationship with him, moved their communications to Discord, and eventually met him in person, where he attempted to rape her. The complaint describes both platforms as providing a “hunting ground for child-sex predators,” and joins a wave of recent lawsuits against Roblox alleging similar harms. While the complaint raises familiar defective design claims, it remains to be seen whether Section 230 will shield these companies, as it has in other cases where victims of offline sexual assault sought to hold the platforms that facilitated the abuse accountable.
Google found to have violated foreign data privacy laws, while Temu is sued for violating state data privacy laws
On July 18, two notable developments occurred in data privacy litigation: an overseas ruling against Google and a Kentucky-based lawsuit against Temu.
In Uganda, the country’s Personal Data Protection Office (PDPO) ruled that Google violated Uganda’s Data Protection and Privacy Act, Cap 97, and its Regulations. The PDPO found that Google qualified as both a data controller and data collector under the law by collecting personal data from users in Uganda and by determining the purposes and means of such data processing. The PDPO further found that the law requires data controllers to register with the PDPO, and Google failed to do so.
The decision emphasized Google’s investments in and derivation of value from Uganda’s user market, establishing a commercial presence that requires compliance with Uganda’s laws. Because Google did not demonstrate compliance with the cross-border data transfer requirements under Ugandan law, it was found in violation of those safeguards. The PDPO ordered Google to register within 30 days of the decision and to submit its compliance framework for international data transfers of Ugandans’ personal data.
Jumping from Kampala to Kentucky: on the same day, Kentucky’s AG filed a new lawsuit against Temu, a popular Chinese online shopping platform, alleging various violations of the Kentucky Consumer Protection Act (KCPA) and Kentucky common law. The complaint follows an investigation by the AG’s office into the app’s design and data practices. It alleges that code-level “behaviors” in the Temu app collect Kentuckians’ sensitive information without their knowledge or consent, which alone violates the KCPA. Compounding this, according to the complaint, is the fact that the company’s operations are partially located in China, where “cybersecurity laws allow the [Chinese] government unfettered access to data owned by Chinese businesses.”
In addition, the complaint asserts several “traditional consumer deception” claims, alleging that Temu sells products to Kentuckians in ways that violate the KCPA. It specifically claims that the Chinese government is using Kentuckians’ data to infringe on the intellectual property of American companies, including by listing several unlicensed products marketed as being from Kentucky brands. Other alleged practices include false advertising, fake customer reviews, and using consumer payment information to order items without consent. The lawsuit, while framed as a traditional consumer and data protection action, also reads as an attempt at geopolitical signaling, reminiscent of the Supreme Court’s TikTok decision this January.
Through procedural deficiencies and voluntary settlements, tech companies avoid accountability
July proved to be a favorable month for tech companies seeking to avoid certain legal consequences, including those involving obligations to their own shareholders.
On July 15, the Eighth Circuit Court of Appeals vacated the Federal Trade Commission’s (FTC) 2024 amendments to a decades-old rule; the amendments barred sellers from misrepresenting material facts and required them to provide consumers a simple cancellation mechanism for subscriptions. Known as the “Click to Cancel” rule, it was meant to combat the growing problem of recurring online subscription plans, which can keep consumers paying for unwanted products and services when they neglect, forget, or cannot figure out how to cancel. Before the rule was amended, sellers could interpret a consumer’s inaction as a sign of willingness to continue a subscription. The Click to Cancel rule required sellers to end subscription plans unless the consumer proactively “clicked” to indicate acceptance of the recurring charge.
The Eighth Circuit reviewed both the rule change itself and, more importantly, the process the FTC undertook to draft, adopt, and publicize the change, as required by the Administrative Procedure Act (APA). Various industry groups and businesses had challenged the rule in four federal circuit courts, arguing that the FTC exceeded its authority under the FTC Act during the rulemaking process and acted arbitrarily under the APA. After examining the rulemaking process in detail, the court concluded that the FTC’s “procedural deficiencies…are fatal here” and required vacating the rule in its entirety. Of some consolation to consumers, Commissioner Rebecca Slaughter was reinstated on July 17 after prevailing in her lawsuit challenging her termination by the President. In announcing her return, she stated that the first thing on her to-do list was to “call[] a vote on restoring the Click to Cancel Rule.”
On July 17, a group of shareholders settled their years-long case against Meta, in which they alleged that the company made misleading statements and failed to disclose the extent of then-Facebook’s data sharing after a whistleblower revealed the Cambridge Analytica scandal in March 2018. The shareholders’ suit followed the FTC’s $5 billion settlement with Facebook in 2019 over violations of the agency’s 2012 consent order, violations that the scandal brought to light. That same year, Facebook also reached a $100 million settlement with the SEC for misleading investors.
In their suit, the shareholders relied on the FTC’s findings that Facebook violated the 2012 consent order, alleging that Facebook and key executives, including Mark Zuckerberg and Sheryl Sandberg, failed to disclose the extent of Facebook’s noncompliance and its permissive user data access policies. Those policies allowed a single individual to collect tens of millions of users’ data through a Facebook quiz app and sell it to a political consulting firm, Cambridge Analytica, which then used it to influence political campaigns and elections of several high-profile politicians globally. The shareholders sought reimbursement for the FTC fine and other legal costs, estimating the total at more than $8 billion. Meta, in its defense, argued that it had also been deceived by Cambridge Analytica’s data collection and had been making its best effort to comply with the FTC order.
Notably, the shareholders settled their lawsuit the day after the trial began, just before several key figures were set to testify, including Zuckerberg, Sandberg, Marc Andreessen, and Peter Thiel. Although the complaint was filed in October 2018, the trial did not begin until July 16, 2025, in part due to an appeal that the Supreme Court briefly considered, hearing oral arguments in November 2024 before sending the case back to the trial court (TJLP filed an amicus brief supporting Facebook investors). After years of amending complaints and waiting, the shareholders got their $8 billion, but at the cost of greater public transparency around what Facebook did and did not know, and did and did not do, regarding Cambridge Analytica’s access to users’ data, information that only became public thanks to whistleblower Chris Wylie.