January 2025 Tech Litigation Roundup
Melodi Dinçer / Feb 6, 2025
Melodi Dinçer is Policy Counsel for the Tech Justice Law Project.

The US Supreme Court. Shutterstock
Welcome to the first installment of the Tech Justice Law Project’s (TJLP) Tech Litigation Roundup!
Each month, the roundup will highlight key developments in tech litigation, focusing on US state and federal courts, along with occasional coverage of relevant international cases. The resource is meant to help readers stay informed about ongoing cases involving AI, data privacy, tech antitrust, consumer protection, and more. The roundup may also spotlight emerging developments and explore broader trends in tech litigation. Future editions will offer opportunities for readers to delve deeper into specific cases or tech legal issues through educational webinars, podcasts, and related programming.
TJLP also maintains a regularly updated litigation tracker gathering tech-related cases in US federal, state, and international courts. To help ensure TJLP’s tracker is as complete and up-to-date as possible, readers can use this form to suggest new cases or propose edits. TJLP also welcomes readers to provide additional information and suggestions for future roundups here. Send additional thoughts and suggestions to info@techjusticelaw.org.
January’s Legal Landscape: Tech, Courts, and Policy Changes
The start of the year ushered in not only a new administration but also a White House swimming with tech industry influence. The President’s cozy relationships with Big Tech leaders could complicate federal regulatory efforts just as many agencies and departments had ramped up enforcement and policy changes to address tech’s harmful practices and market power. The political transition may also embolden tech industry groups as they continue to fight against state-level regulations concerning data collection, content moderation, and product design choices, especially through the courts. This month also brought new developments in cases challenging generative AI-powered chatbots, an emerging area of both legal and public concern.
Notable DOJ and FTC Actions Pre-Inauguration
In the months and weeks leading up to the Inauguration, the US Department of Justice (DOJ) expanded its lawsuit against RealPage, a company selling software that landlords use to algorithmically fix rental prices. The DOJ sued six of the largest corporate landlords in the US for their use of RealPage to coordinate high rents, affecting over 1.3 million units in 43 states and DC.
Meanwhile, the Consumer Financial Protection Bureau (CFPB) placed Google Payment under supervision pursuant to its recent rule treating the largest, unregulated fintech products similarly to regulated bank products. In response, Google sued the CFPB, claiming that CFPB supervision of its payment division would be a “burdensome form of regulation.” Then, tech industry groups NetChoice and TechNet sued to block the rule, arguing the CFPB had failed to show sufficient consumer risks posed by covered payment apps like Apple Pay, Google Wallet, PayPal, Venmo, and CashApp, which process over 13 billion transactions yearly. The CFPB also released a report in January entitled “Strengthening State-Level Consumer Protections,” perhaps anticipating existential changes to the agency’s work reminiscent of Trump's first term and CFPB Director Rohit Chopra’s impending removal.
The Federal Trade Commission (FTC) also took action against a range of harmful tech practices, from banning fake reviews, including those generated by AI, to suing John Deere over illegal restrictions on farmers’ right to repair their equipment and prohibiting data brokers from selling sensitive location data—marking the first time the agency found that selling such data without consumers’ awareness or consent was an unfair practice under the law. The FTC also took action against AI-powered chatbots (described more below). With a new Chairman, however, it is unclear if the FTC will maintain its enforcement activities or change course. Such a shift would be significant, as the FTC has become a de facto data privacy enforcer in the US, which has no comprehensive federal data privacy law and no federal data privacy agency.
The Supreme Court Upholds the “TikTok Ban”
Speaking of data privacy, just days before the Inauguration, the Supreme Court issued a controversial decision upholding the TikTok Ban (or the Protecting Americans from Foreign Adversary Controlled Applications Act). The law requires that Chinese company ByteDance—which owns TikTok—cease operations in the US unless it sells the platform to a new owner before the law’s deadline, which Trump extended on his first day in office.
In its reasoning, the Court fretted over TikTok’s data practices and its potential utility to the Chinese government as a surveillance apparatus. In a rare moment of insight into today’s mass data-fueled tech ecosystem, the Court highlighted the “vast swaths of sensitive data” TikTok collects on US-based users to justify upholding the Act, all while sidestepping the tricky First Amendment issues posed by the law. In his concurrence, Justice Gorsuch noted further how the app mines data not only from TikTok users but also from “millions of others who do not consent to share their information” by accessing “any data” stored in users’ contact lists. While TikTok’s data practices are by no means unique, the Court explained that this decision was “narrowly focused” on TikTok alone and did not extend to other technologies. The Court’s reluctance to take up the First Amendment analysis also jeopardizes the rights of hundreds of millions of people in the US to access ideas and information, as highlighted by the Knight First Amendment Institute’s amicus brief.
Relatedly, the Electronic Privacy Information Center (EPIC) and Irish Council for Civil Liberties (ICCL) Enforce filed a complaint before the FTC this month claiming Google’s real-time bidding system for placing online ads collects troves of sensitive data about US individuals and makes this data accessible to foreign adversary countries, including China. The complaint is the first legal action brought under the Protecting Americans’ Data from Foreign Adversaries Act. In the complaint, EPIC and ICCL Enforce’s claims largely echo the Supreme Court’s data-mining concerns in its TikTok decision.
Generative AI-Powered Chatbots Challenged in State and Federal Venues
Last fall, the Social Media Victims Law Center (SMVLC) and Tech Justice Law Project (TJLP) brought their first lawsuit against Character.AI in a Florida federal court, alleging that the company’s generative AI chatbot product caused serious harms to children through its deceptive and addictive product design. Then, in December, Texas Attorney General Ken Paxton launched an investigation into Character.AI, Reddit, Instagram, and other tech firms over a bevy of alleged violations of two Texas laws concerning online children's privacy and safety.
Also in December, the SMVLC and TJLP brought another lawsuit in a Texas federal court raising similar allegations. Then, in January, defendants in the Florida case, including Google, Character.AI, and its co-founders and former Google employees Noam Shazeer and Daniel de Freitas, filed motions to dismiss the case. The Character.AI motion raises several arguments, chief among them that the lawsuit infringes on the First Amendment rights of its users and that the chatbot product is not a “product” under the defective design claims. Notably missing from the motion is an argument under Section 230, which grants internet platforms legal immunity from harms arising from content developed by third-party users but should not apply to AI chatbots that create outputs based on tech companies’ AI models.
On the heels of all this action, the FTC referred a complaint to the DOJ against Snap, Inc. concerning Snapchat’s generative AI chatbot, My AI. Although the complaint has not been made public per the agency’s policy, the FTC announced its existence “in the public interest.” The complaint followed an FTC investigation reviewing Snap’s compliance with a 2014 settlement over violations of the FTC Act and revealed additional potential violations of the law. By specifying that the complaint related to the app’s chatbot feature, the FTC implied that the chatbot may be considered an unfair or deceptive practice under the Act. This view aligns with the Character.AI litigation, which claims that, among other things, the company’s chatbots constitute unfair and deceptive business practices.
To cap off the month, the Young People’s Alliance, Encode, and TJLP also filed a complaint before the FTC, arguing that another popular chatbot company, Replika, violated the FTC Act by using deceptive ads and design choices to bring young users to the app and increase their time (and money) spent there. As the Replika complaint highlights, chatbots pose significant emotional harms to users, especially young people. However, the FTC’s stance on the complaint remains uncertain under its new leadership. The new Chairman has previously argued that emotional harm should not be a consideration under the agency’s regulations, raising questions about how the agency will approach consumer harms from chatbots and harms resulting from other tech platforms.
Industry Groups Seek to Stop California Tech Regulation in its Tracks
Despite some recent setbacks, California continues to shape tech regulation across the country, setting standards that other states tend to follow. However, as California seeks to expand its tech accountability laws, the industry fights back, often in the form of protracted litigation, challenging the entirety of these laws as violating tech companies’ First Amendment rights (frequently relying on the argument that corporations are people, too).
Just last year, though, the Supreme Court expressed concern about legal efforts to completely overturn laws on constitutional grounds, or so-called facial challenges, because they “short circuit the democratic process” by preventing enacted laws from going into effect. This is especially so when the challenge is brought before a law is enforced, meaning the challenger must convince the court, through significant hypothetical examples, that the law would violate its First Amendment rights if put into effect. According to the Supreme Court, these examples must be specific so a court can weigh potentially unconstitutional applications against potentially constitutional ones and see which way the scales tip. Whether or not tech industry challengers ultimately win, this tactic often spares tech companies from complying with these laws until the courts resolve their cases, sometimes several years later.
At least one such facial challenge is nearing the finish line, however. This month, X Corp. and California Attorney General (AG) Rob Bonta filed a joint report asking a district court judge for more time to continue their settlement negotiations. The judge has given them until February 24th to come to an agreement. In this case, X had sued to stop the AG from enforcing California’s AB 587, arguing that the law violated the First Amendment (among other things) by requiring large social media companies—those that generate $100 million or more in gross revenue per year—to disclose their content moderation and hate speech policies in a report submitted to the state twice a year.
X initially sought an injunction barring enforcement of the law. The trial court denied X’s request, and X appealed to the Ninth Circuit. Last September, the Ninth Circuit granted X an injunction on First Amendment grounds, ruling that the reporting requirement forced X and other regulated companies to produce constitutionally-protected speech, and the government could not justify doing so under the most stringent legal standard (called “strict scrutiny”). The next month could reveal the extent to which the courts will shield social media’s content moderation policies from public scrutiny.
Meanwhile, two other lawsuits facially challenging California laws also continue.
First, in NetChoice v. Bonta, tech industry group NetChoice challenged California’s SB 976, the “Protecting Our Kids from Social Media Addiction Act,” which bars websites from using certain addictive design features on minors without parental consent, aiming to reduce the likelihood that minors develop online dependencies. On New Year's Eve, a federal judge ruled that while some provisions of the law may violate the First Amendment (and enjoined those provisions), NetChoice did not show that the law facially violated the First Amendment.
In the decision, the district court judge rejected NetChoice’s argument that requiring age assurance processes to run in the background of a regulated site burdened adult users’ access to speech. The judge also rejected NetChoice’s argument that personalized social media feeds are always expressive, like a newspaper editor’s decisions of what to publish, highlighting the industry’s reliance on algorithms to automatically push content on feeds, among other things. NetChoice has since appealed the decision, and the law remains enjoined while the Ninth Circuit considers an expedited appeal.
In another case involving NetChoice, the industry group challenged California’s Age Appropriate Design Code (AB 2273) (CAADC), which includes several requirements to promote children’s online privacy. NetChoice facially challenged the CAADC, and a trial judge agreed, enjoining the law. On appeal last fall, the Ninth Circuit largely overturned the injunction, finding that NetChoice had not provided enough examples of the law’s unconstitutional effects to justify invalidating the entire law, particularly features of the law that regulated data collection and design features.
With the case back before the trial court, the Electronic Privacy Information Center (EPIC) filed an amicus brief covering the law’s data provisions, while the Tech Justice Law Project (TJLP) filed one covering the law’s treatment of deceptive design choices. The judge heard oral arguments on January 23; she also requested briefings on a recent opinion from the Ninth Circuit in Arizona Attorneys for Criminal Justice v. Mayes, in which the Court ruled that under a facial challenge, the challenger must show that a substantial number of the law’s hypothetical applications to its business are unconstitutional and that those unconstitutional examples are vital parts of the law, meaning they are more than tangential to the law’s primary purpose.
These two cases matter not only to California; they will have ripple effects far beyond the Golden State, including in New York, which has enacted similar laws.
Other Developments
Continuing the theme of facial First Amendment challenges, the Supreme Court heard oral arguments in Free Speech Coalition v. Paxton, a case challenging a Texas law requiring businesses publishing sexual material online to use age verification methods to prevent minors under 18 from accessing it. The main issue on appeal is whether the Fifth Circuit should have applied strict scrutiny, a more stringent test of the law’s constitutionality under the First Amendment than the test the appeals court applied. If the Supreme Court agrees with the Fifth Circuit’s approach, Texas’s law will remain in effect. As often happens in online speech cases, the courts relied on decades-old precedents and analogized age restrictions on brick-and-mortar establishments distributing sexual materials to websites doing the same, without attending to the many ways the virtual world functions differently.
Looking Ahead This Year
In the coming months, keep an eye out for several key developments in tech litigation. A potential settlement proposal in X Corp. v. Bonta could determine the future of California’s transparency law for social media platforms. We may also see a ruling in the Google ad tech antitrust case this month or next. In Google’s other antitrust case, concerning Google Search, the judge will determine remedies; Apple recently filed a motion to halt those proceedings while it tries to represent its own interests in the outcome. Google is also seeking to overturn an order requiring the company to restore competition for rival apps in its Play store, among other reforms, through an appeal to the Ninth Circuit.
If you have made it this far, thank you. Please consider filling out this quick survey so future roundups can be as relevant to your work and interests as possible. You can also reach out to info@techjusticelaw.org.