Questioning OpenAI's Nonprofit Status

Justin Hendrix / Jan 14, 2024

When Sam Altman, the CEO of OpenAI, introduced himself before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law last May, this is how he described the company’s governance:

OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks we have to work together to manage. We're here because people love this technology. We think it can be a printing press moment. We have to work together to make it so. OpenAI is an unusual company, and we set it up that way because AI is an unusual technology. We are governed by a nonprofit, and our activities are driven by our mission and our charter, which commit us to working to ensure the broad distribution of the benefits of AI and to maximizing the safety of AI systems.

But the recent high-profile corporate governance issues within OpenAI, including the dismissal and subsequent reinstatement of Sam Altman as CEO, have spurred many to question whether the company’s unique structure is appropriate, and whether it can remain true to its nonprofit mission given its massive for-profit incentives. For instance, just this week, The Intercept reported that OpenAI “quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy,” and the European Commission indicated that it may investigate OpenAI's relationship with Microsoft.

Today’s guest is Robert Weissman, the president of the nonprofit consumer advocacy organization Public Citizen. He is the author of a letter addressed to the California Attorney General that raises significant concerns about OpenAI’s 501(c)(3) nonprofit status. The letter questions whether OpenAI has deviated from its nonprofit purposes, alleging that it may be acting under the control of its for-profit subsidiary. It also raises broader issues about the future of AI and how it will be governed.

Below is a lightly edited transcript of the discussion.

Robert Weissman:

I am Robert Weissman, President of Public Citizen.

Justin Hendrix:

What does Public Citizen get up to, for any of my listeners who aren't aware of it?

Robert Weissman:

We do a lot of stuff. We focus on consumer protection and advancing democracy, among many other issues. We do stuff ranging from working on global trade deals to working for Medicare for All to trying to protect democracy, which we do with you from time to time, and we're doing a lot on big tech.

Justin Hendrix:

Tell me a little bit more about your interests around tech and tech policy.

Robert Weissman:

The thing that connects the many different things we work on is a concern about too much corporate power and how concentrated corporate power adversely affects society in many different dimensions. Big tech companies now are the biggest companies we've got. They're exerting huge amounts of influence over us in many dimensions, including even what we think. And so we've been very focused on them for the last eight or 10 years, and with the increasing focus of these companies on artificial intelligence, we've turned our focus to that issue as well.

Justin Hendrix:

And that's what led you, on January 9th, to send a letter to the California Attorney General regarding a certain firm, OpenAI, that we've discussed on this podcast many times, of course. And I suppose this was prompted by what you call sort of recent events, the drama over Sam Altman and whether he would remain CEO of the company.

Robert Weissman:

Yeah, there's a lot we don't know about that still, despite all the coverage. There are a lot of ways to look at it, but I think one way to look at it that is true, along with others, is that there was a fight inside the enterprise of OpenAI, which is a nonprofit that supposedly exerts controlling influence over a for-profit. There was a fight over this collective enterprise of OpenAI, and for-profit forces defeated the nonprofit forces. When the board of OpenAI said they were firing Sam Altman, that was the nonprofit board acting, but the for-profit forces, including the major investor, Microsoft, the employees who were trying to cash in on stock options, and others effectively overturned that decision and forced off the nonprofit board the people who had previously voted to fire Sam Altman.

That turned out to be a problem because nonprofits aren't allowed to be for-profit enterprises. They have to serve a nonprofit purpose, and if they're going to primarily pursue profit, then they need to forfeit the nonprofit designation and act as for-profit businesses. And we've asked the California Attorney General to look at what happened, look at the structure of this collective enterprise, and see if in fact OpenAI should be stripped of its nonprofit status and treated as a for-profit corporation.

Justin Hendrix:

So ultimately this is about whether the charitable purpose of the nonprofit can exist alongside essentially the commercial interests that OpenAI is now pursuing in conjunction with Microsoft. And I'm struck by the fact that this week as well, we know that the European Union has announced that it's going to look into competition-related issues with regard to OpenAI's relationship to Microsoft. I think I saw the inklings of something happening in the UK as well. To what extent is this stitch-up with Microsoft a part of your thinking here? How does that sort of play into this?

Robert Weissman:

Well, the OpenAI structure is very unusual. This idea that you'd have a nonprofit that has a controlling interest over a for-profit. And then what makes that more complicated is that the for-profit has outside investors, and then this major investor slash partnership with one of the biggest corporations in the world, Microsoft, it's just very unusual. Theoretically, the nonprofit could pursue its nonprofit purpose with that status. Nonprofits are allowed to have for-profit businesses. For example, a museum might have a profit-seeking gift shop, and that's fine. But the for-profit purpose can't overwhelm the nonprofit purpose. And that's what seems to be happening here.

The investigations in Europe about whether there's basically been a merger between Microsoft and OpenAI treat OpenAI as a for-profit business, because obviously Microsoft can't have an acquisition of a nonprofit. So that suggests that the regulators in Europe are coming to the view that the nonprofit is just a pretend patina on top of the real operation of OpenAI, which is the for-profit business.

Justin Hendrix:

How does the valuation of the company, which I think is now approaching a hundred billion dollars, play into any calculation about the governance and the way that it operates?

Robert Weissman:

It's hard to say if the valuation itself affects that, although it becomes hard to imagine how a tiny little nonprofit is controlling a hundred-billion-dollar company. But where this does come into play very directly is if OpenAI is stripped of its nonprofit status, or if it chooses voluntarily to convert to for-profit status. The nonprofit, as it retires or is dissolved, is required to pay out to other charitable purposes the value of its assets, which raises the question of what's the valuation of OpenAI, the nonprofit? That's not calculable from the outside. We don't know what share OpenAI, the nonprofit, owns of OpenAI, the for-profit. They've got a controlling interest, but it's almost for sure a very small percentage share. So it's an interesting exercise to figure out how you'd even think about this. We offer some views on that in our letter, but at the end of the day, I think OpenAI, the nonprofit, is worth billions of dollars at minimum. If it were dissolved or if it chooses to convert to for-profit, it would have to put billions of dollars into some new charitable entity, probably a foundation, to advance its purposes.

A historical precedent for this is when Blue Cross's then-nonprofit health insurers converted into for-profit enterprises. California Blue Cross converted into what's now Anthem. They tried to put a small amount of money into a nonprofit purpose. The California Attorney General intervened, and they ultimately paid out about $3 billion into ongoing, significant health charitable foundations in California. That's a good model for what might happen here.

Justin Hendrix:

Tell me a little bit about what you hope the Attorney General will do next. They've received this letter; let's just say Attorney General Bonta says, "Great, Public Citizen. Love this thinking. We're going to move ahead right now." Is there an event that would have to occur, this event of dissolution, in order to kick off some activity by the Attorney General, or is there something that you think he can do now?

Robert Weissman:

No, he just needs to make the decision to investigate. There's information in the public domain. The information that we have, including the reporting about what happened during the Altman firing and rehiring, tells a strong story that the for-profit interests have taken control. But there are things that we don't know. We don't really know what the new structure is as it evolved or was adjusted in the course of the town hall. We don't know exactly what the authority of the nonprofit board is. We don't know if new nonprofit board members are coming on as was promised. A lot of things have remained murky. So the Attorney General needs to investigate that and reach a conclusion based on the investigation. If his office decides, yeah, this appears no longer to be serving a nonprofit purpose, then he would act to dissolve the company.

Justin Hendrix:

How does this effort around OpenAI fit into your overall efforts to hold tech companies to account? Are there particular concerns you have here specifically about OpenAI, given this very peculiar structure, that generalize to your concerns about AI more broadly?

Robert Weissman:

In a way, no, because this is such an unusual structure, an unusual history for how OpenAI developed. It started really with a genuine nonprofit orientation and evolved into what it is now. That's very specific. But I do think what this issue highlights is: who's going to control the way this technology unfolds? Who's going to establish the rules by which AI can be released into the world? If you've got a nonprofit, which was the original vision of OpenAI, driving those decisions, it's not enough to say we don't need the government in charge, but it does suggest a particular orientation. And it was an orientation with a strong belief in the affirmative value of what AI could offer to the world, but also simultaneous real fear about the risks and a real sense about needing to be cautious and reflective and investing in safety and ethics.

When you switch to a for-profit orientation, you may still care in your heart about safety and ethics, but you're really saying that, at the end of the day, what you care about most is making money, and that's going to lead you to make different kinds of decisions. So there's a way in which, although we're dealing with a very specific and weird corporate structure, and that's sort of the predicate for our letter to the California Attorney General, the issue at play is really who's going to control this technology and how are decisions going to be made? And what we fear is, if these decisions rest primarily, and at the end of the day, with gigantic for-profit corporations, we're in a whole world of trouble. If there's some public control over how the technology evolves, how it can be deployed, establishing protections and guardrails, maybe it can deliver on some of what its advocates promise it will.

Justin Hendrix:

There's been a lot of discussion on Capitol Hill about how to regulate artificial intelligence. Of course, Senator Schumer just led a series of AI insight forums over the course of the fall. I noted that when he was asked about Sam Altman's firing and then return, he was essentially quite pleased to see that Sam had been returned. He sort of seemed to say that everyone in the world of AI or AI policy was breathing a sigh of relief that Altman was back in place. I also note how generally chummy most lawmakers appear to be with Brad Smith of Microsoft, who's testified recently around artificial intelligence. These companies are on Capitol Hill, they're engaging with lawmakers every day. We know they're spending enormous sums on lobbying, and when they're in front of lawmakers, they're generally greeted as sort of heroes, or perhaps holders of the key to American competitiveness. I don't know, how do you sort of compete against all of that effort and the moneyed interests that are behind it?

Robert Weissman:

That's what we do every day. So we're used to that, whether it's the tech companies or the drug companies or big oil or whoever. We're used to that dynamic. Each issue is specific and has its own peculiarities, and this certainly does. I do think in the case of regulating AI or developing AI policy, it's pretty difficult to do it without being in conversation with industry, because there's not enough independent information. There's just not enough external understanding of what's going on. And it's also the case, and this is unusual, I think, that there are a lot of people who have sort of joint appointments between universities and these companies, and so some of the critics and really thoughtful people actually are affiliated with the companies. That doesn't happen, that I can think of, in any other industry. So it's a little bit messier. I don't begrudge anybody on Capitol Hill or in policymaking circles talking to OpenAI, Microsoft, Google, or whoever. But I don't want them relying on them, and I don't want them to be the only ones in the room.

I think there's a spectrum. Some of these companies are being more cautious than others. Some are more open to external regulation than others. But the for-profit world is the for-profit world. And there's a way in which this is very clear at this point: Google particularly, Facebook to some extent, and Microsoft to some extent were holding back on the release of generative AI and high-level chat programs, because they didn't feel they were developed enough. They didn't feel they were safe enough. They were worried about the hallucination slash lying problem. But once OpenAI launched ChatGPT, they felt competitive pressure such that they couldn't wait. So much so that the New York Times just did an end-of-year review of what happened in the world of AI in 2023 and points out that as Microsoft was about to make a major announcement about how it was partnering with OpenAI and integrating ChatGPT into some of its tools, Google rushed to get its own announcement out the day before. Literally the day before. Sort of childish, but that's what business does and business needs.

And the years of caution and reflection, of holding back, gave way immediately once ChatGPT had entered the market. That's an important story in its own right, and it's also an important reminder for everything going forward. No matter how well intentioned any of them are, the corporate systems are structured, designed, and incentivized in ways that do not reward caution and consideration of the public interest. It's become trite by now, the old Facebook motto, “move fast and break things.” In a way, that was a specific Facebook idea, but one could also just say that's what corporations do. And when you have this technology, with all of its uncertainties and potential severe risks, that's not okay, because the thing you're breaking might be humanity.

Justin Hendrix:

I noticed that you talked to Gary Marcus, who's also appeared on this podcast before, of course, the NYU professor, cognitive scientist, and entrepreneur who's done a lot around AI. He seemed excited about the idea that maybe this dissolution might occur and there might be some nonprofit that is essentially the beneficiary of a distribution worth billions of dollars. Any ideas as to what that nonprofit should get up to, if in fact all of this were to come to pass?

Robert Weissman:

I do think that what happened with Blue Cross is a good model. If you really had billions of dollars, that's a lot of money. So put it into a foundation that's in the business not of pursuing a single program, but of supporting a wide range of activities in artificial intelligence safety and ethics. Maybe you would want to support some AI development in its own right, maybe pursuing a different kind of path. That's a little less where I would be inclined to go, but it would be reasonable. You'd basically be trying to carry forward the mission, the stated nonprofit mission of OpenAI, in this subsequent entity. I think the idea of having a foundation which could support a lot of different activities is a pretty good model.

Justin Hendrix:

So perhaps returning OpenAI to its original purpose, to in fact be open and to do research on artificial intelligence in the public interest. We'll see what comes of it. Any response yet from the Attorney General?

Robert Weissman:

No, and we'll see what happens. I think a lot of times when you ask enforcement agencies to take action, you ask and then you hope that it happens, and you don't expect to hear back, for good reasons. Enforcement agencies generally don't talk publicly about what they're doing until they've done it. So we'll see.

Justin Hendrix:

Well, for listeners who might like to read the specifics of Public Citizen's argument, the letter of course will be linked in the show notes. And Robert Weissman, I thank you so much for speaking to me today.

Robert Weissman:

Great to be with you. Thanks.
