To Regulate Artificial Intelligence Effectively, We Need to Confront Ableism

Ian Moura / Oct 16, 2024

Image: Jamillah Knowles & Reset.Tech Australia / Better Images of AI / People on phones (portrait) / CC-BY 4.0

In all the discussion of how artificial intelligence will change society, a significant question is being missed: what does it mean when we are more willing to believe a computer can communicate than a person who does so atypically? While AI may hold great promise, we continue to dismiss human intellect that does not conform to our expectations of what intelligence looks like. Unless we confront the ableism in our collective understanding of intelligence and work with the disability community to shape AI, emerging technologies will create harms that more critical reflection could prevent.

Since OpenAI’s release of ChatGPT in November 2022, technologists, journalists, and the general public have been caught up in a wave of predictions and hype about what this new technology means for humanity. Media outlets have printed article after article declaring the profound effects of AI on society. Despite all this discussion, we have forgone any real reckoning with what intelligence is, and instead rushed to accept claims that machines could have it. As a result, we continue to talk about AI, including large language models like ChatGPT, without seriously grappling with the risks and realities. To truly invest in creating and using AI to benefit humanity, we need to fully include disabled people as experts in this work – because somehow, we are still less willing to accept disabled people’s intelligence and humanity than that of a machine.

There is a long history of equating communication with intelligence, often in ways that disadvantage marginalized groups. Dialects and slang, especially when racialized, are frequently described as “incorrect” language and assumed to mean that a person is less intelligent and less educated. Speech differences and impediments are treated as markers of intellect, or lack thereof, such as when a stutter is interpreted as a sign that someone’s thoughts are not fully formed. Misspellings, grammatical errors, and nonstandard writing, even when they result from a specific disability such as dyslexia, are often taken as proof of a less intelligent writer and used as a reason to disregard the ideas presented. Most egregiously, people who cannot speak are routinely assumed to have no thoughts to communicate, and people who use augmentative and alternative communication (AAC) rather than speech frequently have their communication treated with derision and suspicion.

Given this history, our willingness to accept ChatGPT’s production of writing that conforms to certain expectations as a sign of intelligence says far more about how we define intelligence than it does about technological progress. There is no single human capability that makes up intelligence, and the term has been understood in varying ways throughout history. However, the development of IQ tests in the late 19th and early 20th centuries promoted an understanding of intelligence as a measurable trait described by a numeric score on a standardized assessment. These tests, and their inventors, created a version of intelligence that was easy to measure and compare, but also distorted by racial, cultural, and socioeconomic biases – and which could be used to justify discrimination against certain groups of people whose scores marked them as “deficient.”

Like IQ tests, ChatGPT and other large language models emphasize the form of intelligence over its function. These AI tools appear smart because they are effective mimics of what we expect from “intelligent” communication. However, parroting content that matches our ideas about how “smart” people write is not the same as actual understanding based on deep engagement with ideas. In their attempts to produce intelligent writing, large language models consistently fabricate details and generate outright misinformation, demonstrating the risks of assuming a model is intelligent because of performance on a single constrained task, such as producing text.

What all this means is that not only are we failing to develop AI that is truly intelligent, we are also failing to learn from history. Time and again, disabled people have been excluded, oppressed, and erased, sometimes with the help of technologies that promise to “normalize” them. Yet disability is an integral part of humanity. Building a world that is more inclusive and advancing disabled people’s ability to exercise fundamental rights benefits society in ways that technology alone never will. But more than that, disabled people are uniquely able to explain the difference between being human and being seen as human – something profoundly relevant to AI. Much of the work we need to undertake to ensure that AI is safe and beneficial to all of society, not just to the few people with the power to shape its development and use, will require that we think critically about how and when we delegate responsibility for high-stakes decisions to automated processes, including AI systems.

Doing so means being realistic about how AI and other algorithmic technologies produce output that we would accept as evidence of thoughtful work if it came from a human. But sometimes, it also means stepping back and reexamining all the ways that we currently exclude certain humans from having a say in decisions. Creating AI that is safe, ethical, and beneficial to humanity necessitates reckoning with the arbitrary ways we’ve defined intelligence. Who better to lead the way than those who know first-hand the consequences of such definitions?

The use and regulation of AI are more than technical problems in need of technical solutions. These tools are created and used by humans, and they reflect human values and biases. More than that, they too often reveal whose perspectives we assume to be well-reasoned, and how quick we can be to cede power to those whose communication is most polished, regardless of the substance of the ideas they espouse. Addressing the harms that we are already seeing result from AI cannot be done through software and legal codes alone. In order to use algorithmic technologies responsibly and mitigate the negative impacts of AI-fueled misinformation, we must address the ableism embedded in our collective assumptions about what it means to be intelligent, communicative, and human.

Authors

Ian Moura
Ian Moura is a research assistant at the Lurie Institute for Disability Policy and a doctoral candidate in Social Policy at The Heller School for Social Policy and Management at Brandeis University, where his work focuses on the intersection of technology and disability policy. His research interest...