In Our Tech Reckoning, People with Disabilities are Demanding a Reckoning of their Own

Maria Town is the President and CEO of the American Association of People with Disabilities. Alexandra Reeve Givens is President and CEO of the Center for Democracy and Technology, and serves on the board of the Christopher & Dana Reeve Foundation.

In the current reckoning over technology’s role in society, one community is demanding a reckoning of their own after experiencing disparate harm caused by technology: people with disabilities. More than one in four Americans has a disability. Policymakers, advocates, and the tech companies themselves must consider and address how technology can negatively impact disabled people in order to create an affirmative, inclusive vision of technology that truly works for everybody. 

People with disabilities face severe barriers to employment, including inaccessible working conditions and insufficient housing and transportation, which have resulted in an unemployment rate more than twice that of non-disabled people. Unemployment rates are even higher for disabled people of color and other marginalized disabled people. The recent move to a more adaptable and remote work culture has provided some of the flexibility that disabled people have been calling for, but barriers to equal opportunity persist.

Against this backdrop, automated hiring tools that use tests and even videos of candidates to screen for personality and "fit" are proliferating, and are systematically – and potentially illegally – excluding people with disabilities. Automated personality tests may unfairly screen out applicants with anxiety; AI-driven video assessments may penalize applicants with motor or cognitive disorders if they do not look directly at the camera or sit perfectly still.

Technology should create opportunity, not deny it. Along with other public interest groups, we recently called on the Equal Employment Opportunity Commission to fight discrimination in automated hiring tools. The Commission has since announced an initiative on artificial intelligence and algorithmic fairness, and the White House has referenced hiring tech as a priority in its work to create an “AI Bill of Rights.”
AI-driven tools are built by fallible humans, meaning biases and errors can become baked into those tools. This can have dire consequences in some settings, such as public benefits programs. In state after state, disabled people have found their benefits significantly cut after states adopted new algorithmic systems – often with little explanation and few opportunities to understand or challenge how decisions are made. A denial of these benefits can lead to food insecurity, homelessness, and worse. Auditing of these tools is limited or nonexistent, leaving disabled people and our allies without means to challenge these outright errors and disparate impacts.

These are just a few of the ways that AI-driven decision-making can harm people with disabilities. There are many others. In a new report, we identify content moderation approaches that make it harder for disabled people to earn money and express themselves. For example, Facebook’s automated advertising system repeatedly mislabeled and then removed advertisements for adaptive clothing brands for people with disabilities; meanwhile, TikTok admitted to suppressing content from people with disabilities in a misguided effort to reduce toxicity. (Both companies have since acknowledged these errors and moved to correct them.) A lack of baseline privacy protections also means the connected devices disabled people rely on can put sensitive information at risk. And the proliferation of surveillance tools in schools risks exacerbating the school-to-prison pipeline that disproportionately impacts multiply-marginalized children, particularly disabled students of color.

It does not have to be this way. At their best, technologies developed by and for disabled people have opened doors and proven useful in a wide range of contexts. Audio books – pioneered by the American Foundation for the Blind – have made information far more widely available. Social media helps provide a platform for disabled people to organize and tell their stories on their own terms. Remote work has led to economic opportunity for disabled people, and in the future, autonomous vehicles may expand mobility tremendously. 

To build this future, disabled people must have a seat at the table, and all organizations developing technology must meet their legal and moral obligations to ensure their products are accessible to all, and audited to identify and stop any unfair impact. We have modeled this approach through efforts like AAPD’s Tech Forum, which brings a disability-led approach to convening disability organizations and industry groups to discuss technology, access, autonomous vehicles, and data privacy. We also need more representation in the development of these technologies, and efforts to audit bias must assess impacts on people with disabilities, just as they do for gender and race.

In this crucial moment, we must prioritize technology development and policy that center disabled people and their experiences – creating a better, more just society for us all.