Democratic AI Demands Good Policy and Ethical Development

Jeff Kleck / Nov 20, 2024

Dr. Jeff Kleck is a Silicon Valley entrepreneur, an Adjunct Professor at Stanford University, and the Dean of Academics at Catholic Institute of Technology.

Many, including OpenAI co-founder and CEO Sam Altman, have made the case for an ethical, democratic vision for artificial intelligence. But to make democratic AI a reality, the world needs more than promises from tech leaders. It needs appropriate regulations and a proper approach to ethics policy to ensure AI is developed and deployed by ethical practitioners.

On the policy front, governments around the world are pursuing ethical AI development through markedly different approaches.

The American approach is somewhat haphazard when considered as a whole. The Biden administration, for its part, has offered recommendations and policy guidance to promote ethical AI, including by releasing a blueprint for an “AI Bill of Rights” in October 2022, followed by further policy guidance for responsible AI development in May 2023 and a landmark Executive Order in October 2023. However, the executive guidance remains very high-level, and much of it lacks the force of law. Developers and users can follow or ignore many aspects of the guidance at will.

Meanwhile, the US Congress has not passed substantive AI legislation. The AI bills it is considering are piecemeal and do not offer a holistic ethical-regulatory framework; instead, they deal with discrete issues like how AI influences election integrity or public health. There appears to be little chance that any comprehensive AI regulation will advance in both chambers in the near term.

The de facto result of the US approach is that ethical questions will be answered much more by private developers and users than by regulators or lawmakers. By choosing not to regulate AI, the US is embracing greater ethical uncertainty for the possibility of greater innovation.

The European Union, on the other hand, has enacted the AI Act, which regulates AI according to an ethics-based sliding scale of risk. AI innovations deemed less risky will face less regulatory scrutiny. Riskier systems will face more limitations, such as being required to register with the EU and to undergo an assessment before being brought to market. AI systems deemed to pose “unacceptable risk”—such as those designed to manipulate people or those that impose a social scoring system based on socioeconomic, racial, or other factors—will be banned.

With this approach, European policymakers are implicitly betting that there are certain uses of AI that all people—or at least the vast majority of people—will find unethical and that thus should not even be considered or attempted.

Despite Europe’s attempt at moral clarity, months after the law’s passage, stakeholders continue to haggle over the language of its final codes of practice, especially as tech giants like Amazon, Google, and Meta lobby for a lighter-touch approach so as not to unduly hamper innovation. Ultimately, reasonable people will disagree on what counts as “high risk” and “unacceptable risk,” no matter how well-intentioned the laws are.

Despite their very different approaches, the US and Europe reveal the same underlying truth about the pursuit of ethical AI: policy is necessary and policy can help, but policy alone is insufficient.

Enter Ethics

To achieve democratic AI, we must also more consciously shape how it is developed, not only how it is governed. But to do that, we need ethical developers. To understand why, it’s necessary to know that AI is unique as a technology in that it reflects the ethical posture of those who develop it. Like people, AI systems build upon the ethical assumptions of those who raise them to eventually make their own reasoned judgments.

Right now, AI is in its infancy. As every parent knows, kids often learn habits and behavioral principles from their parents in their earliest years. Good parents more often produce successful kids; bad parents more often lead to the opposite. The same principle is at work in artificial intelligence.

Who shapes AI now will determine what AI becomes, whether it’s a scourge of humanity, our defender, or some yet-to-be-determined mixture of both.

Let’s take an example. Many have expressed outrage when AI shows racial bias, from facial recognition systems struggling to identify certain races to hiring algorithms elevating applicants from one background over another. How does one correct this issue? There are a variety of methods, from altering the algorithm, to manually limiting certain types of responses the AI will give, to changing the data the system is trained on or feeds back into itself.
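
To make the last of those options concrete, here is a minimal sketch of one well-known data-side technique, “reweighing” (Kamiran & Calders, 2012). The groups, labels, and hiring numbers below are hypothetical, and the sketch illustrates the general idea rather than any particular company’s practice:

```python
# A minimal sketch of one data-side fix: "reweighing" (Kamiran & Calders, 2012).
# Each training example gets a weight so that group membership becomes
# statistically independent of the outcome label in the weighted data.
# The groups, labels, and hiring data below are hypothetical.

from collections import Counter

def reweigh(examples):
    """examples: list of (group, label) pairs -> list of per-example weights."""
    n = len(examples)
    group_freq = Counter(g for g, _ in examples)   # how often each group appears
    label_freq = Counter(y for _, y in examples)   # how often each label appears
    joint_freq = Counter(examples)                 # how often each (group, label) pair appears

    weights = []
    for g, y in examples:
        expected = (group_freq[g] / n) * (label_freq[y] / n)  # frequency if independent
        observed = joint_freq[(g, y)] / n                     # actual frequency
        weights.append(expected / observed)
    return weights

# Skewed hypothetical data: group A is labeled "hire" four times as often as group B.
data = ([("A", "hire")] * 40 + [("A", "no")] * 10 +
        [("B", "hire")] * 10 + [("B", "no")] * 40)

w = reweigh(data)
print(w[0])   # ~0.625: over-favored (A, "hire") examples are down-weighted
print(w[40])  # ~2.5:   under-represented (A, "no") examples are up-weighted
```

The resulting weights would then be passed to any learning algorithm that accepts per-sample weights, nudging the trained model away from reproducing the skew in the raw data.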

We can debate which tool is best suited to correct this problem. But in the end, no matter which strategy is used, someone will have to make an ethical decision about whether the goal is AI based on color blindness or on anti-racism. That question is not technical; it’s moral.

Or take a hypothetical. Imagine AI is integrated into military targeting systems. Does the AI recommend firing the missile if 10% of the casualties will be civilians? If a single possible casualty is civilian? What if we discover that AI is even more accurate at avoiding civilian deaths than human operators? Would it then be morally preferable to replace human analysts in targeting systems with AI? These questions are not merely hypothetical; AI targeting systems are currently deployed in conflicts in Ukraine and Gaza.

Ultimately, these types of questions are endless, and they often aren’t cut-and-dried. There is a reason people continue to fiercely debate how to achieve racial justice, or whether dropping the atomic bombs on Hiroshima and Nagasaki was justified. No computer, no matter how intelligent, can simply process all the data and tell us the right thing to do. No lawmaker, no matter how altruistic, can create a rule to govern every situation. Even universal rules must be applied with the human art of wisdom.

Clearly, it matters that those shaping AI can judge right and wrong in the first place. Unfortunately, people aren’t born moral. Call it innate selfishness, cultural prejudice, privilege, or original sin, but people must learn to be moral—and to do that, they must be taught.

We recognize this need in other fields. Over the years, graduate programs have been created to teach ethics in science, medicine, and law. Practitioners understood that their fields could only be applied morally if students were trained to address the challenges they would face through a moral lens. AI is no different, yet to date, there is no program or institution devoted to the ethical training of future AI engineers or regulators.

This is beginning to change. An institution I am a part of, the Catholic Institute of Technology, plans to launch a Master of Science in Technology Ethics in the fall of 2025. We hope other universities will follow our lead. When policymakers are unable—or unwilling—to shape ethical AI, educational institutions must fill the gap to ensure AI is developed properly whenever the law is silent. Regardless, CatholicTech plans to offer ethics courses in person and online to as many future scientists and innovators as possible to fill the ranks of industry with people capable of moral decision-making.

Undoubtedly, those of us focused on AI will continue to fight over who gets to raise AI from its infancy to adulthood and what rules we should impose. Those are worthwhile debates. But if we really want AI to be democratic and good, we should also focus on teaching good people.
