February 2025 Tech Litigation Roundup
Melodi Dinçer / Mar 6, 2025
Melodi Dinçer is Policy Counsel for the Tech Justice Law Project.
February’s Legal Landscape: AI Copyright Infringements, Social Media Settlements, and DOGE Data Grabs
Despite being the shortest month of the year—and Black History Month in the US—February brought several new developments in tech litigation. This roundup gathers and briefly analyzes key legal issues from the past month, including AI training data and copyright violations, social media companies settling over transparency laws, the first lawsuit brought under Washington’s My Health My Data Act, about a dozen new lawsuits filed against DOGE for accessing federal government data systems, an important Ninth Circuit decision on Section 230, and tech antitrust actions against Google/Alphabet and Amazon, among others.
The Tech Justice Law Project (TJLP) tracks these and other tech-related cases in US federal, state, and international courts in this regularly updated litigation tracker. Beyond gathering these updates, the roundups will also feature educational opportunities for readers to deepen their understanding of new cases and legal developments in the tech policy space.
To that end, we invite you to join TJLP for a webinar in March that will explore the ongoing Renderos v. Clearview AI lawsuit brought by activists alleging various privacy violations by the notorious facial recognition company, featuring Plaintiffs’ Counsel Sejal Zota, Co-Founder and Legal Director of Just Futures Law.
If you are interested in attending the webinar, please share your contact information here to receive further information about the event.
Read on to learn more about February developments in US tech litigation.
Recent AI Copyright Decisions May Impact Ongoing GenAI Copyright Lawsuits
This month brought important decisions in the growing number of cases alleging copyright infringement in the development of AI systems, including whether scraping information for training models constitutes fair use or violates the Digital Millennium Copyright Act (DMCA). These rulings will likely influence the bevy of copyright claims brought against generative AI (GenAI) companies over the past couple of years.
First, a judge for the US District Court for the District of Delaware partially granted and partially denied summary judgment in Thomson Reuters v. Ross Intelligence, a case filed back in March 2020, predating the GenAI hype cycle, that strikes at the core of scraping proprietary data to train AI models. Thomson Reuters, a global media corporation that runs the Westlaw legal research platform, accused ROSS Intelligence of infringing on its copyrighted materials when it used Westlaw's proprietary headnotes and Key Number System to develop a competing, AI-driven legal research tool.
After Thomson Reuters had denied ROSS a license to use Westlaw's content to train its machine-learning system, ROSS engaged a third-party company to create "Bulk Memos," which were derived from Westlaw's materials and subsequently used to train ROSS’s models. The court found that ROSS's actions constituted direct copyright infringement and rejected its fair use defense, emphasizing that ROSS's use was commercial and non-transformative, as it aimed to develop a competing product in the legal research market. This decision suggests that AI developers may have difficulty convincing judges that using copyrighted materials for training purposes without authorization constitutes fair use—an issue that the US Copyright Office may also weigh in on soon.
A similar debate over AI and copyright is unfolding in Intercept Media Inc. v. OpenAI Inc., where The Intercept alleges that OpenAI intentionally removed copyright management information (like article titles and author names) from its news content to scrape the underlying text and train OpenAI’s models, in violation of the DMCA. The judge had partially denied OpenAI’s motion to dismiss the lawsuit in November, promising to release a full opinion explaining his reasoning later.
That opinion came this month. The judge found multiple reasons to allow The Intercept’s DMCA claims to move forward, including the news outlet’s identification of specific training sets OpenAI allegedly used to train ChatGPT and further evidence demonstrating how the algorithms OpenAI used in training extracted text from The Intercept’s articles while stripping out author names and other attribution information. OpenAI had argued that The Intercept did not have enough information about OpenAI’s internal data practices to support its DMCA claims, but the judge was unpersuaded. He criticized “OpenAI’s secrecy over the contents of the [data]sets used to train the latest versions of ChatGPT” and found that, despite this secrecy, The Intercept had presented enough circumstantial evidence to proceed with its claims at this stage. This ruling signals that at least one court is willing to allow copyright claims to proceed, even when AI companies refuse to disclose their training datasets.
X Corp. Settles with California AG Over Social Media Transparency Measures
As previewed in last month’s roundup, X Corp. and California’s Attorney General (AG) Rob Bonta have reached a settlement agreement in X Corp.’s lawsuit challenging California’s AB 587 on First Amendment grounds. Among other things, the law required large social media companies to disclose their content moderation and hate speech policies in a biannual report submitted to the AG. Under the proposed settlement agreement, these specific requirements violate the First Amendment, and the AG will not enforce those parts of the law.
As a result, social media platforms will no longer need to inform the AG about how they define hateful, extremist, or misleading speech in their internal policies. The court previously found that because each platform defines these categories subjectively, the law’s reporting obligation amounted to compelled speech, effectively requiring platforms to align with the government’s definitions of hate speech. Further, the AG cannot require platforms to disclose data about how often they flag or remove posts that violate their own content moderation rules.
However, this settlement does not completely undo the law. Platforms must still publicly post their terms of service and notify the AG of any changes every six months. In these reports, companies must also describe how they enforce their terms of service and remove content found to violate the platform’s rules—though they do not need to disclose related enforcement data. Nevertheless, this settlement may have broad implications for current and future laws requiring tech companies to be more transparent.
First Lawsuit Filed Under Washington’s My Health My Data Act Against Amazon
This month, plaintiffs filed the first lawsuit alleging violations of Washington state’s expansive My Health My Data Act. The law covers many types of data that may be reasonably linked with a person seeking healthcare services, requiring companies to get prior consent before collecting and using such data. It was passed in response to the Supreme Court’s overturning of Roe v. Wade and aims to protect Washingtonians and anyone visiting the state from being targeted through their data for seeking reproductive healthcare, among other services.
In Maxwell v. Amazon.com, Inc., the class plaintiffs accuse Amazon of violating the law by collecting, using, and selling the precise location information of hundreds of millions of people who have downloaded mobile apps. The complaint details how Amazon licenses its software development kit (SDK) to a variety of mobile apps that then collect individuals’ location information and relay it back to Amazon. Plaintiffs allege that Amazon then retains that information for its own business purposes, including targeted advertising, and sells that data to others.
The lawsuit raises several other claims, including violations of federal wiretapping and state consumer protection laws, as well as privacy torts. However, the most significant claim centers on the My Health My Data Act. Specifically, the plaintiffs argue that Amazon did not obtain consent before collecting their consumer health data, including biometric and precise location data or other data that could imply a person was trying to access or receive health services. Amazon also did not disclose the categories of health data the SDKs collect, the purpose for collecting and sharing that data, with whom that data is shared, and how affected consumers can withdraw consent from future data collection. These are all explicit requirements under the law, offering a Washington federal court the first opportunity to interpret the act’s broad provisions in a real-world example of data collection linked to healthcare.
DOGE Faces a Dozen New Lawsuits After Accessing Data from Various Agencies
This month, about a dozen new lawsuits were filed on behalf of federal workers, students, state AGs, national labor unions, and others seeking to revoke the Department of Government Efficiency (DOGE)’s access to several federal government databases containing sensitive data and personal information of millions of federal workers and US residents. DOGE, a rebranding of the US Digital Service formed under President Obama a decade ago, has seen multiple resignations from staff engineers, data scientists, designers, and product managers since the change.
The lawsuits rely on decades-old laws like the Privacy Act of 1974, which generally prohibits the federal government from disclosing data without consent, and the Computer Fraud and Abuse Act of 1986, which prohibits unauthorized access to certain computer systems, among other legal claims.
These cases are moving fast, as many of them seek immediate legal relief in the form of temporary restraining orders that would force DOGE to stop accessing data immediately.
Some judges have declined to grant these orders, citing a lack of evidence that the Trump administration is misusing sensitive data. However, other judges have ruled in favor of plaintiffs and are blocking DOGE’s access to data for now. At least one judge has ordered Trump administration officials involved in DOGE to testify under oath about their data access practices. The testimony could help shed light on DOGE’s operations and whether they present the risks plaintiffs are alleging in these suits. This testimony may help clarify the role of Elon Musk, who is presumed to be directing the initiative through his designation as a special government employee. As these cases move past the initial injunction phase, more details may emerge about DOGE’s intentions, the scope of its data access, and the security implications of its activities.
In Recent Section 230 Decision, Ninth Circuit Seems to Depart from Its Own Precedents
In a recent decision, the Ninth Circuit seemingly sidestepped its own Section 230 precedents to uphold a 2023 lower court decision dismissing the case of a teenager who sued Grindr after his underage use of the app led to sexual assault. In Doe v. Grindr, the Ninth Circuit held that Section 230 of the Communications Decency Act barred the plaintiff’s product liability claims, which alleged Grindr designed the app in a way that enabled minors to interact with adults to their detriment, among other claims.
The panel found that Section 230 immunized Grindr because these claims necessarily implicated Grindr’s role as a publisher of third-party (or users’) content. It reasoned that if Grindr had a legal duty to suppress matches and messages between adults and children on the app, as the plaintiff claimed, then Grindr would have to monitor all users’ content to prevent such interactions. Since Section 230 grants legal immunity to publishers of others’ content on their platforms, that immunity extends to Grindr publishing users’ content on the app, even if it results in children matching and communicating with adults unlawfully. (Grindr’s terms of service restrict access to people 18 and older, but the company does not verify users’ ages, leading to a significant number of underage users.)
This decision departs from the Ninth Circuit’s own recent rulings on Section 230 by applying an earlier circuit test from Dyroff v. Ultimate Software Group, Inc. to conclude that the claims were barred under Section 230. The Ninth Circuit had found in Lemmon v. Snap, Inc. (2021) that Snap could face the plaintiffs’ product liability claims based on its negligent design of a Snapchat filter that encouraged teens to speed while driving. In Bride v. Yolo Technologies, Inc. (2024), the panel similarly found that while Section 230 immunized certain product liability claims, a platform could still be liable for misrepresenting that its feature to unmask bullies would do so when it did not. And although it did not involve product liability claims, Vargas et al. v. Facebook, Inc. (2023) further narrowed Section 230’s reach, with the panel finding that Facebook was not immune from claims alleging that its Ad Platform facilitated discriminatory housing ad targeting.
Beyond contradicting these more recent precedents, this decision also seems to muddle the Circuit’s established three-part test for determining Section 230 immunity by conflating the last two parts: whether the product liability claims (2) treat Grindr as a publisher (3) of third-party (or users’) content. The panel equated the two, finding that the allegedly defective “features and functions” on the app, which enable adults to solicit minors for sex, broadly facilitated “the communication and content of others.” The court concluded that Grindr is effectively a mere publisher of that third-party content—and cannot be liable for the app features the company provides to users.
This Section 230 confusion makes the recent Grindr decision a strong candidate for en banc review, in which the Ninth Circuit (the largest of the 13 US Courts of Appeals, with 29 authorized judgeships) could reconsider the decision. The Ninth Circuit has played a key role in shaping Section 230 jurisprudence, so an en banc review could set an important precedent for courts nationwide. Alternatively, the case could end up in the Supreme Court, although the top court has been reluctant to weigh in on Section 230 issues.
Recent Developments in Tech Antitrust Lawsuits
Earlier this month, RealPage filed a motion to dismiss the US Department of Justice’s lawsuit against the company, which was covered in last month’s roundup. RealPage argues the DOJ failed to factually substantiate its claims that RealPage violated the Sherman Act through monopolistic practices. On February 25, the DOJ filed its opposition, providing numerous arguments to support its claims and focusing on RealPage’s collection and use of landlords’ private rent pricing data to inform its pricing software, its teaching of landlord customers how to use the software and data to set uniform rents, and its leveraging of this data to capture the market for commercial rent management software.
In another competition-focused lawsuit, edtech company Chegg filed an antitrust lawsuit against Google/Alphabet, calling out the company's practice of requiring online publishers to supply their websites’ content for use in “AI Overviews”—LLM-generated summaries that often appear at the top of search results. These summaries aggregate information from various websites, including Chegg’s, and push users away from the underlying sites, reducing the click-through traffic that many websites depend on for analytics and revenue. Chegg alleges that Google’s monopoly over online search enables it to exploit publishers' content beyond indexing for search results, both by including it in AI summaries and by using it to train the LLMs that Google uses to essentially “cannibalize or preempt search referrals.” Because Google does not have viable competitors in the search market, Chegg’s complaint alleges that publishers have no choice but to acquiesce to these additional uses, which end up hurting their businesses by reducing online traffic directly to their sites.
Finally, on February 26, a federal court dismissed a lawsuit alleging that Amazon engaged in illegal anticompetitive conduct through its “Buy Box” algorithm. Plaintiffs in the case alleged that the algorithm prominently featured Amazon’s own pricing offers over identical products sold by non-Amazon retailers on the site, a practice that authorities at the Italian Competition Agency had previously flagged for its anticompetitive effects. The federal judge in the case was not persuaded that Amazon’s algorithmic pricing practice violated Washington state’s Consumer Protection Act because the plaintiffs did not allege a concrete legal harm: as customers navigating these pricing regimes, they were unable to show that they had overpaid for certain items because of the Buy Box algorithm. However, the court did grant the plaintiffs an opportunity to amend and refile their complaint by the end of March.
Other Developments
- Kids Online Safety Regulation. On February 3, NetChoice filed a new lawsuit against Maryland’s AG, alleging the state’s Age-Appropriate Design Code Act (AADC) violates the First Amendment. Last month’s roundup covered NetChoice’s related challenge to California’s AADC, which, together with numerous other lawsuits filed in various states, reflects the group’s strategy to preemptively challenge laws governing kids’ online safety before they are enforced against tech companies. The case is before the US District Court for the District of Maryland (NetChoice v. Brown, case number: 1:25-cv-00322-RDB).
- Amazon Drivers and Tips. On February 7, the DC Attorney General announced a $3.95 million settlement with Amazon that resolves the District’s lawsuit alleging Amazon misled consumers by assuring them that 100% of tips would go to Amazon Flex delivery drivers when, in fact, the company allegedly diverted a large share of those tips to reduce Amazon’s labor costs and increase profits. The settlement also requires Amazon to make clear disclosures on both its website and its app about how tips are used whenever it uses tips for any purpose other than directly increasing driver compensation. The case is before the Superior Court of DC (District of Columbia v. Amazon.com, Inc., et al., case number: 2022-CAB-005698).
- Meta and Discriminatory Algorithms. On February 11, the non-profit organization Equal Rights Center filed a lawsuit against Meta, alleging that the company’s ad delivery algorithms disproportionately target ads for for-profit colleges to Black users while sending ads for nonprofit and public colleges and universities to white users. The complaint argues that this alleged disproportionate treatment stems from Meta’s extensive user data collection practices, which gather information about people’s ethnicities from various sources (including the ACT college entrance exam’s website through Meta Pixel). It claims that Meta engages in digital redlining, ultimately providing different educational opportunities to people based on race in violation of the DC Human Rights Act and the DC Consumer Protection Procedures Act. The case is before a DC trial court (Equal Rights Center v. Meta Platforms, Inc., case number: 2025-CAV-000814).
- Reuters and Privacy. On February 13, a federal court dismissed a class action lawsuit accusing Reuters of collecting the IP addresses of all devices visiting the Reuters.com news website and disclosing this information to third-party business partners (including advertisers) in violation of the California Invasion of Privacy Act (CIPA). Relying on the Supreme Court’s influential decision in TransUnion LLC v. Ramirez, the judge found that the plaintiffs did not experience a concrete harm from having their IP addresses shared in this way, finding support in cases holding that IP addresses alone are not inherently sensitive or private information for purposes of a CIPA claim. The case is before the US District Court for the Southern District of New York (Zhizhi Xu v. Reuters News & Media, Inc., case number: 24-CIV-2466-PAE).
- Sixth Circuit’s Net Neutrality Ruling. On February 18, a group of nonprofit organizations asked the US Court of Appeals for the Sixth Circuit to review a three-judge panel decision en banc. The decision invalidated the Federal Communications Commission's (FCC) recent net neutrality rules, finding that the FCC must treat broadband internet access services as a lightly regulated “information service” instead of a highly regulated “telecommunications service” under the Communications Act of 1934. The decision is notable as it is the first time that a court decided how this service should be treated under the Act without deferring to the FCC’s reasonable interpretations—a result of the Supreme Court’s 2024 decision that shifted interpretive authority away from federal agencies and to federal courts.
- Brazil and Rumble. On February 22, Brazilian Supreme Court Justice Alexandre de Moraes ordered Rumble, a video-sharing platform, to be taken offline in Brazil within 24 hours after the company failed to comply with previous court orders (including removing the US-based accounts of a former Bolsonaro supporter and appointing a legal representative in Brazil, as he did with X in a different case). Rumble, which is headquartered in Canada and the US, along with the Trump Media & Technology Group, responded that the Court had censored Rumble and sued Justice de Moraes in federal court, claiming that his orders infringed on their First Amendment rights. (Trump Media runs the Truth Social platform, which Rumble hosts on its cloud services.) On February 25, the district judge ruled that Rumble and Trump Media did not have to comply with Justice de Moraes’ order for now, finding the order was not served on the companies correctly according to international treaties.
- South Africa and Google. On February 24, South Africa’s Competition Commission recommended that Google compensate South African news media by up to 500 million rand ($27.29 million) annually for a three- to five-year period, while also changing the way the company’s search product functions to increase referral traffic to South African media outlets and share ad revenues more equitably. The recommendation follows the Commission’s 16-month investigation into tech companies’ negative impacts on local news media outlets.
The Tech Justice Law Project (TJLP) maintains a regularly updated litigation tracker gathering tech-related cases in US federal, state, and international courts. To help ensure TJLP’s tracker is as complete and up-to-date as possible, readers can use this form to suggest new cases or propose edits. TJLP also welcomes readers to provide additional information and suggestions for future roundups here. Send additional thoughts and suggestions to info@techjusticelaw.org.