Beware of OpenAI's 'Grantwashing' on AI Harms
J. Nathan Matias, Avriel Epps / Dec 18, 2025
Sam Altman, cofounder and CEO of OpenAI, is pictured on September 25, 2025 in Berlin, Germany. (Photo by Florian Gaertner/Photothek via Getty Images)
This month, OpenAI announced "up to $2 million" in funding for research studies on AI safety and well-being. On its surface, this may seem generous, but following in the footsteps of other tech giants facing scrutiny over their products' mental health impacts, it's nothing more than grantwashing.
This industry practice commits a pittance to research that is doomed to be ineffective because companies hold back the information and resources needed to answer the questions. When grantwashing works, it compromises the search for answers. And that's an insult to anyone whose loved one's death involved chatbots.
OpenAI's pledge came a week after the company's lawyers argued that it isn't to blame for the death of a California teenager whom ChatGPT encouraged to commit suicide. In its attempt to disclaim responsibility in court, the company even requested a list of invitees to the teen's memorial and video footage of the service and the people there. In the last year, OpenAI and other generative AI companies have been accused of causing numerous deaths and psychotic breaks by encouraging suicide, feeding delusions, and giving people risky instructions.
As scientists who study developmental psychology and AI, we agree that society urgently needs better science on AI and mental health. Like so many other companies accused of causing harm, OpenAI has recruited a group of genuinely credible scientists to give it closed-door advice on the issue. But the company's funding announcement reveals how small a fig leaf it thinks will persuade a credulous public.
Look at the size of the grants. High quality public health research on mental health harms requires a sequence of studies, large sample sizes, access to clinical patients, and an ethics safety net that supports people at risk. The median research project grant from the National Institute of Mental Health (NIMH) in 2024 was $642,918. In contrast, OpenAI is offering a measly $5,000 to $100,000 to researchers studying AI and mental health, at best one sixth of a typical NIMH grant.
Despite the good ideas OpenAI suggests, the company is holding back the resource that would contribute most to science on those questions: records about its systems and how people use its products. OpenAI's researchers have purportedly developed ways to identify users who may be facing mental health distress. A well-designed data access program would accelerate the search for answers while preserving privacy and protecting vulnerable users. European regulators are still deciding whether OpenAI will face data access requirements under the Digital Services Act, but OpenAI doesn't have to wait for Europe.
We have seen this playbook before from other companies. In 2019, Meta announced a series of $50,000 grants to six scientists studying Instagram, safety, and well-being. Even as the company touted its commitment to science on user well-being, Meta's leaders were pressuring internal researchers to "amend their research to limit Meta's potential liability," according to a recent ruling in the D.C. Superior Court.
Whether or not OpenAI's leaders intend to muddy the waters of science, grantwashing hinders technology safety, as one of us recently argued in Science. It adds uncertainty and debate in areas where companies want to avoid liability, and that uncertainty takes on the appearance of science. These underfunded studies inevitably produce inconclusive results, forcing other researchers to do more work to clean up the resulting misconceptions.
Grantwashing also benefits companies by undermining the credibility of scientists over the long term. In our own research projects, we have found that grieving families who blame tech firms for their loved ones' deaths understandably refuse to talk to any scientist who takes money from those same companies. If those scientists are ever called on by policymakers or courts to offer expert testimony, their integrity will inevitably be questioned if their work on mental health was funded by industry. At as little as $5,000 a scientist, that's a pretty good deal for tech firms.
Two decades of Big Tech funding for safety science has taught us that the grantwashing playbook works every time. Internally, corporate leaders pacify passionate employees with token actions that seem consequential. External scientists take the money, get inconclusive results, and lose public trust. Policymakers see what looks like responsible self-regulation from a powerful industry and walk back calls for change. And journalists quote the corporate lobbyist and move on until the next round of deaths creates another news cycle.
The problem is that we do desperately need better, faster science on technology safety. Companies are pushing AI products with limited safety guardrails to hundreds of millions of people, faster than safety science can keep up. One idea, proposed by Dr. Alondra Nelson, borrows from the Human Genome Project. In 1990, the project's leadership allocated 3-5% of its annual research budget to independent "ethical, legal, and social inquiry" about genomics. The result was a scientific endeavor that kept on top of emerging risks from genetics, at least at moments when projects had the freedom to challenge the genomics establishment.
If OpenAI and other AI firms committed 3-5% of their annual research budgets to genuinely independent safety science, scientists could pursue real answers to questions about those products' greatest possible risks. In the case of mental health, hospitals and clinicians could update patient intakes to include AI use and enroll people in long-term studies. Scientists could design ethical, privacy-preserving programs to study public concerns. And the public could have independently verifiable research on how AI helps or harms mental health, alongside evidence that companies are adopting best practices.
We can't say whether specific deaths were caused by ChatGPT or whether generative AI will cause a new wave of mental health crises. The science isn’t there yet. The legal cases are ongoing. But we can say that OpenAI's grantwashing is the perfect corporate action to make sure we don't find the answers for years.