What Today’s AI Companies Can Learn from the Fall of Enron

Amy Winecoff / Mar 29, 2024

The demise of the energy and commodities firm Enron in 2001 is often framed as a cautionary tale about corporate greed and fundamentally unscrupulous actors. However, Enron’s metamorphosis from innovative Wall Street darling to disgraced corporate villain wasn’t entirely the work of a few bad apples committing fraud. Enron’s unraveling also resulted from a pathological organizational culture, which prized competition and risk-seeking at the expense of risk management.

Like Enron, today’s AI companies are pushing the frontiers of innovation while investing substantially less in the organizational culture and infrastructure needed to manage risk. Unlike Enron, AI companies may not face legal consequences for the harmful impacts of their systems, given the current uneven AI regulatory landscape. Even so, poor risk management and an unhealthy organizational culture can set AI companies on a path that damages both their businesses and society. AI incidents can lead to public relations disasters, negative media attention, and user complaints, all of which can trigger greater scrutiny from policymakers, slow user adoption, and deter outside investment.

To avoid this fate, AI companies must steer clear of organizational pitfalls akin to those that brought down Enron. Specifically, they must translate vague value statements into actionable plans, safeguard internal critics, bolster accountability measures, and prevent ideological monocultures from taking root.

Translate principles into practices.

Enron’s stated core values in its 1998 Annual Report included “respect,” “integrity,” “communication,” and “excellence,” values that were resoundingly absent from the company’s actual practices. Likewise, over the past few years, many major tech companies have published responsible AI principles; however, if companies do not put these principles into practice, they will do little to prevent AI harm.

If companies say they value “fairness” or “alignment,” a necessary next step is to operationalize what it means for internal teams to develop fair or aligned AI. The National Institute of Standards and Technology (NIST) provides a high-level framework, the AI Risk Management Framework, for how companies can begin identifying and mitigating AI risks. However, companies must still do substantial work translating NIST’s framework into their specific context and practices. The risks AI poses in healthcare (e.g., misdiagnosis) are very different from the risks posed by AI in social media (e.g., misinformation). Thus, the operational practices derived from companies’ AI principles must be responsive to the specific harms their products could cause. There is no one-size-fits-all solution.

Establishing a comprehensive risk management system can be overwhelming for companies thinking about responsible AI for the first time. But companies do not have to eat the elephant in a single sitting. Instead, they can embrace maturity models, a tool from business strategy that helps organizations assess their current capabilities and define a roadmap for progress.

Within responsible AI, researchers and industry practitioners have suggested that the first maturity stage is ethical awareness. From awareness, companies can develop guidelines and initiate practices within individual teams, eventually scaffolding a robust risk management culture organization-wide. On the other hand, if companies articulate principles but never define progress milestones, they may find themselves in a position like Enron’s, where statements of ethical principles are merely empty rhetoric.

Protect critics and incentivize action.

Enron’s culture was unforgiving to naysayers, especially those who criticized Enron’s dubious financial practices. For example, employees who raised concerns about Enron’s fraudulent accounting were either met with indifference or faced blowback from the company’s leadership. Recent high-profile conflicts between AI ethics researchers and company leadership raise questions about whether AI companies are committed to safeguards for internal critics. However, AI companies have the opportunity to avert both public relations disasters and harmful AI incidents by actively protecting dissenting voices and internal critics.

In addition, companies must incentivize AI risk management work. If workers are evaluated and promoted based only on their contributions to core products, those who make progress on AI risk management are at a professional disadvantage. For example, although AI documentation is considered a cornerstone of sound, responsible AI practices, practitioners view the time required to create good documentation as an unrewarded effort. As a result, practitioners cut corners and produce incorrect or incomplete documentation that could increase the likelihood that their AI products cause harm. If companies want their employees to take AI risk management seriously, they must establish processes that consider these efforts in bonus, raise, and promotion decisions. Otherwise, the people closest to the problems are likely to flag issues only once they have reached critical levels, when they are most challenging to solve.

Hold yourself accountable.

Enron consistently evaded accountability for its shoddy business practices. Enron’s accounting issues should have been uncovered by its auditing firm, Arthur Andersen, to which it paid a hefty $25 million in 2000 alone. Yet Enron manipulated its relationship with Andersen to ensure the auditors told the company what it wanted to hear, rewarding Andersen with increasingly exorbitant fees for largely ceremonial financial audits. Enron and Arthur Andersen were so professionally and interpersonally enmeshed that workers celebrated in-office birthdays and vacationed together. These conditions did not allow Andersen to maintain the distance necessary to take a cold, critical look at Enron’s financial practices.

As comprehensive legislation for AI looms ever larger, the number of AI governance consulting and auditing firms has ballooned. AI companies may benefit from hiring third-party evaluators to vet their AI systems for risks; however, because there are not yet field-wide professional standards for AI audits, companies could also end up hiring firms that do not conduct critical evaluations but instead engage in “audit washing” exercises that fail to identify serious issues.

If AI companies do engage auditing and assessment firms, they should ensure that auditors maintain professional and personal distance from the workers whose technology they are assessing. AI companies should also provide evaluators with sufficient access to meaningfully evaluate their organizational risk management practices, their AI applications, and the underlying algorithms and datasets. Such policies can incentivize external consultants to find problems rather than ignore them. As with Enron, superficial engagements with external oversight partners leave AI companies and the public vulnerable to failures of AI systems.

Internal oversight also could have helped Enron prevent catastrophe. Since Enron's board was willing to approve obvious conflicts of interest, it is unsurprising that Enron had few internal guardrails. However, internal risk management teams are only effective if they can actually mitigate risks. Today, AI risk management teams often lack the authority to enact crucial changes, leaving responsible AI professionals to seek cooperation from receptive product teams rather than from those most in need of their expertise. Instead, AI company leaders should grant internal AI risk management teams authority commensurate with the risks at hand.

If AI companies don’t shore up their oversight practices, like Enron, they may find themselves on the wrong side of enforcement agencies. In the words of the Federal Trade Commission (FTC), “Hold yourself accountable–or be ready for the FTC to do it for you.”

Avoid insular groupthink.

Enron executives were united in their embrace of free-market ideology. This ideology permeated every aspect of the company culture, from hiring and promotion practices to company retreats that involved high-risk sports. By design, Enron was a cultural echo chamber in which a survival-of-the-fittest mentality prevailed. This orthodoxy prevented Enron from recognizing the extreme vulnerabilities born of its dogged pursuit of high-risk, high-reward opportunities. If AI companies likewise employ an ideologically homogeneous workforce, they may also fail to perceive problems.

Recently, the media has made more than a few meals out of dividing the AI community into cartoonish factions: the “accelerationists,” the “doomsayers,” and the “reformers,” rabidly infighting over the priorities the AI community should embrace. The cultural communities that have arisen around the societal concerns of AI can be insular. Still, there is no reason why companies should hire workers only from within their own social and professional networks. By employing workers whose perspectives do not conform to a single orthodoxy, AI companies can put themselves in a position to find solutions that cross “faction” boundaries and respect human rights. In fact, companies that engage heterogeneous viewpoints may arrive at better solutions.

Conclusion

There is no denying that AI has the potential to do great and terrible things, because it already has. AI companies have publicly committed to protecting society from the harmful effects of their systems and to putting these systems to use in solving the world's most pressing challenges. However, public commitments and value statements alone will not ensure that the risks and benefits of AI are well balanced. Enron's collapse demonstrates that companies can undermine their own values when they foster unhealthy organizational cultures that prioritize innovation at any cost. How AI companies establish robust organizational and technical risk management will determine whether they are remembered as visionaries or, like Enron, as symbols of corporate recklessness.

Authors

Amy Winecoff
Amy Winecoff brings a diverse background to her work on AI governance, incorporating knowledge from technical and social science disciplines. She is a fellow in the AI Governance Lab at the Center for Democracy & Technology. Her work focuses on governance issues in AI, such as how documentation and ...
