As Companies Monetize AI, Courts Will Weigh In
Madeline Batt / May 6, 2026
Madeline Batt is the Legal Fellow for Tech Justice Law.
The Tech Litigation Roundup spotlights notable lawsuits and court decisions across a variety of tech-and-law issues.
A new lawsuit brings the question of AI monetization into the courthouse. The class action Doe v. Perplexity targets generative AI company Perplexity, along with Meta and Google, alleging they disclosed transcripts of users’ conversations with chatbots for targeted advertising. The case highlights a burgeoning strategy for solving generative AI’s profitability problem with something the technology has proven especially adept at: collecting intimate information about users. Coming a few months after announcements from Meta and OpenAI that they would use data from AI products to target ads, this class action provides an important window into how this form of monetization will be challenged in the courts.
How will AI turn a profit?
Even as generative AI has attracted massive investment from venture capitalists and resulted in multi-billion-dollar valuations for AI start-ups, AI companies continue to lack a profitable business model. Generative AI models are extremely expensive to build and operate, and most users do not pay to use AI products. Even those who pay for AI subscriptions are not paying enough to cover the immense costs. In combination with circular financing agreements and other warning signs, these dynamics are leading some experts to voice fears of an AI bubble. (Others think the industry is turning a corner.)
Against this backdrop, the internet’s leading business model, advertising, may present an attractive option. Online targeted advertising functions by tracking user behavior across websites and applications, using algorithms to infer interests and purchasing habits so advertisements can be tailored to the consumers most likely to act on them. Personal data is the engine of this system. With many consumers treating chatbots as confidantes (often with tragic results), AI companies are gaining access to exceptionally intimate data about consumers. Meta has already indicated that it is using this new data source to target ads, and the approach is predicted to spread.
The allegations against Perplexity, Google, and Meta
The named plaintiff in the lawsuit, identified as John Doe, alleges that he used Perplexity’s AI chatbot (described in the lawsuit as the company’s “AI Machine”) for legal and financial advice, not realizing that the personal financial information he shared with Perplexity was being disclosed to Google and Meta via tracking technologies such as Meta Pixel and Google DoubleClick. According to the complaint, the information that Perplexity shares with Meta and Google via these tracking technologies includes identifying information such as email and IP addresses, as well as complete transcripts of users’ messages to the chatbot. Allegedly, transcripts and identifying information are disclosed for guest users, users signed into accounts, and––most notably––users in Perplexity’s “Incognito” mode, which the complaint calls a “sham.”
John Doe alleges that he and other users received no warning that Perplexity was selling their chatbot messages via third-party tracking technologies. The complaint also states that there is no way to navigate to any purported terms or privacy policy from Perplexity’s landing page. In Perplexity’s Incognito mode, the platform allegedly informs users, “You’re incognito … Threads you create won’t save to your history and expire after 24 hours,” but does not disclose that users’ prompts and identifiers are still shared in real time with Google and Meta.
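The mechanism the complaint describes, a third-party tracking script observing what users type and relaying it to advertisers, can be made concrete with a brief sketch. The code below is purely illustrative: it is not Perplexity’s actual code, nor the real Meta Pixel or DoubleClick API, and every name, field, and URL in it is invented. It simply shows how a tracker embedded in a chat page could, in principle, forward a prompt and identifiers to an ad server the moment a user hits send.

```typescript
// Hypothetical sketch only: not Perplexity's code and not the real Meta
// Pixel or DoubleClick API. All names and URLs here are invented.

// Information a script running on the page could plausibly observe.
interface ChatEvent {
  userEmail: string; // account identifier, if the user is signed in
  prompt: string;    // the full text the user typed to the chatbot
}

// A third-party tracker bundled into the page can listen for the same
// submit events that the chat interface itself handles.
function onUserSubmitsPrompt(event: ChatEvent): void {
  // Pack the message and identifier into a tracking request. Note that
  // the payload carries the complete prompt text, not just metadata.
  const payload = new URLSearchParams({
    em: event.userEmail,
    msg: event.prompt,
  });

  // "Pixel" trackers often transmit data by fetching a tiny image,
  // which fires immediately, in real time, as the user interacts.
  const beacon = new Image();
  beacon.src = `https://ads.example-tracker.invalid/collect?${payload}`;
}
```

Whether disclosures like this occurred as alleged, and whether users meaningfully consented to them, is precisely what the litigation will test.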
The complaint emphasizes that users tend to share highly personal information with chatbots, including information about their mental and physical health, sexual and romantic lives, and personal finances. It argues that this tendency may make chatbot interactions especially valuable to advertisers, supplying ad-targeting systems with even more intimate and detailed personal information. Under this monetization model, a user who messages Perplexity’s chatbot about a condition such as an eating disorder could later see related advertisements on Instagram or other websites, including ads for treatment programs or weight-loss products.
A separate lawsuit filed this month by Tech Justice Law and the Consumer Federation of America (CFA) highlights another potential concern. That suit, CFA v. Meta, alleges that Meta permits and profits from scam advertising on its platforms by targeting the scams to likely victims in the same way that legitimate ads are targeted to consumers. According to the complaint, Meta’s algorithm directs scams to the users most vulnerable to them (such as users who have clicked on scam advertisements before).
Taken together with documented cases of users turning to chatbots during active mental health crises, the allegations in these lawsuits raise concerns about how sensitive data could be used. For example, the complaint in the Perplexity case alleges the company shares data in real time with Meta––raising the risk that scammers could identify consumers at moments when they are most vulnerable to victimization.
Litigating Perplexity’s monetization strategy
The class action lawsuit against Perplexity, Google, and Meta raises a number of claims against the Defendants’ alleged practices, providing a window into the legal hurdles that this form of AI monetization will face. Many of the claims focus on whether Perplexity obtained adequate user consent to share chatbot messages.
The first category of claims in Doe v. Perplexity involves users’ privacy rights. The lawsuit, filed in California, draws on the California Invasion of Privacy Act (CIPA), the right to privacy enshrined in the California state constitution, and federal statutes prohibiting wiretapping. These claims focus on consumers’ right to ensure that their communications are not intercepted without consent, analogizing adtech trackers like Meta Pixel and Google DoubleClick to traditional recording devices. Past CIPA litigation relying on this adtech analogy has achieved mixed results, but CIPA suits based on AI customer service chatbots recording customer conversations have seen some initial success.
If sharing transcripts of chatbot interactions with ad trackers is covered by CIPA, the potential liability for Perplexity, Meta, and Google could be significant. The law provides for statutory damages of $5,000 per violation. It could also create a heightened obligation for AI companies seeking to use chatbot data for targeted advertising: CIPA requires affirmative, informed agreement, a standard that mere disclosures, and potentially even clicking “I agree” on a generic privacy policy, would not satisfy.
The suit also invokes consumer protection law. Doe alleges that the Defendants violated California’s Unfair Competition Law, which prohibits unlawful and unfair business practices. Doe cites examples of unlawful business activity within Perplexity’s monetization model, such as “representing that…services have characteristics, uses, or benefits that they do not have in violation of [California] Civil Code § 1770,” and argues that the Defendants acted unfairly by profiting from personal information that consumers did not know was being collected from them.
Recent cases suggest that consumer protection law can be used to target the disparities between tech companies’ representations about their products and how those products actually work. For example, the recent $375 million judgment against Meta in New Mexico relied not on showing that the company violated the law by endangering child users, but on demonstrating that Meta publicly misrepresented whether its platforms were safe for children. Doe v. Perplexity is not the only suit applying this approach to monetization practices. In CFA v. Meta, discussed above, plaintiffs rely on the District of Columbia’s consumer protection law to challenge alleged discrepancies between Meta’s representations that it does not permit scams on its platforms and the practices described in the complaint, including claims that the company charges likely scammers extra to advertise to Meta users rather than banning them.
Finally, Doe brings claims of deceit, negligence, and unjust enrichment. Such general statutory and common law claims are often included alongside privacy and consumer protection causes of action. In this case, the allegation that Perplexity shares data even when users opt for Incognito mode––which was highlighted by media after the lawsuit was filed––provides additional basis for the deceit claim.
Because many of these claims center on consent and potential misrepresentation, most would pose no barrier to AI monetization if companies were transparent about how data, including chatbot transcripts, is used and shared. All of the claims could be surmounted by affirmatively seeking informed consent.
Even so, these legal theories may still shape how companies approach AI monetization. Concerns about discouraging users from sharing valuable, highly intimate personal data may incentivize AI companies to test the limits of how much consent is legally required. And as a further wrinkle, the probabilistic nature of AI chatbots creates a risk of unintended outputs that falsely assure consumers that everything they write will be kept confidential. The legal implications for companies that act against confidentiality assurances provided by a chatbot remain uncertain, though the question relates to broader debates over how courts will treat “admissions” by chatbot products that litigants seek to attribute to companies.
As AI companies search for ways to make their products profitable, Doe v. Perplexity appears to vindicate predictions that chatbots would adopt an ad-based monetization playbook––but not without legal pushback. Where courts ultimately draw the line will have significant implications for both the industry and consumers.
Other tech litigation developments:
- Musk v. Altman jury trial: A trial is underway in a high-profile lawsuit in which Elon Musk alleges that his former co-founders and now competitors at OpenAI deviated from the organization’s purported founding mission to ensure artificial general intelligence benefits all of humanity. Among other remedies, Musk is seeking to remove Sam Altman from OpenAI’s leadership and board and to force the company to restructure as a non-profit.
- Tech companies allegedly enabling human rights abuses: Oral argument was heard at the US Supreme Court in a case against Cisco for allegedly designing specialized surveillance technology to help the Chinese government find and persecute a religious minority. The Court is expected to reverse the Ninth Circuit’s decision that allowed a lawsuit under the Alien Tort Statute and Torture Victim Protection Act to move forward against the company. Meanwhile, the telecoms company Telenor was sued in Norway for allegedly sharing sensitive data about dissidents with the military junta in Myanmar, contributing to their persecution and torture. In the US, a judge held that the removal of ICE-watch apps and groups by Apple and Facebook was likely a government-coerced action violating the First Amendment.
- Section 230 decisions: A lawsuit brought by the Massachusetts Attorney General alleging youth social media addiction will proceed against Meta, after the Massachusetts Supreme Judicial Court concluded that Section 230 of the Communications Decency Act did not immunize the company against the state’s design-based claims. Meanwhile, a Ninth Circuit suit against Meta over Facebook’s alleged role in fueling anti-Rohingya violence in Myanmar was dismissed on Section 230 grounds, but all three panel judges wrote that Circuit precedent had unduly expanded the law’s protections. Two judges called for reconsideration en banc to bring Section 230 interpretation back within the scope of the law’s intended meaning.
- NAACP sues Musk over data center: The NAACP has sued Elon Musk’s company xAI, alleging that the company is illegally operating unpermitted methane gas turbines to power its data center in violation of the Clean Air Act, spewing toxic pollutants into historically Black residential neighborhoods.
- Chinese workers’ rights against automation: The Intermediate People’s Court of Hangzhou in China ruled that firing a worker because AI can do their job more cheaply is illegal.
- DOJ intervenes in challenge to Colorado AI safety law: The Department of Justice has intervened in Elon Musk’s legal challenge to Colorado’s AI Act, arguing that the law’s anti-discrimination provisions violate the Equal Protection Clause. The filing signals that the Trump Administration is prepared to litigate its opposition to state-level AI regulation, and that lawmakers should be ready to defend AI bias protections under a “color-blind” legal regime that treats addressing existing racial inequality as a constitutional minefield.
- Mass casualty and stalking chatbot cases: Families of victims of the Tumbler Ridge shooting sued OpenAI after the company flagged and deactivated the shooter’s ChatGPT account for violent content without informing authorities. The company is also facing a criminal probe and an expected civil lawsuit associated with the mass shooting at Florida State University. Separately, a woman who alleges her stalker was fueled by ChatGPT has sued OpenAI.
- Mercor AI sued: Mercor AI, a start-up that provides data and training for leading AI models, has been hit with multiple lawsuits after a third-party data breach. The suits have also exposed Mercor to allegations that it uses proprietary information from its contractors’ work with other companies to provide training data for its clients’ AI models.