
To Make Good Policy on AI, Talk to Social Workers

Siva Mathiyazhagan, Shana Kleiner, Desmond Patton / Mar 16, 2021

The federal government has yet to establish a robust policy framework around the development and deployment of artificial intelligence (AI) technologies. We call on policymakers to partner with social workers to ensure safe, inclusive, and ethical technologies that cause no harm to marginalized people, and to promote diverse representation that amplifies the voices of vulnerable communities.

There is currently no federal legislation regulating companies that create AI and algorithmic tools, such as social media e-surveillance tools, facial recognition tools, or pretrial risk assessments. The only federal policy that addresses AI is the Executive Order on Maintaining American Leadership in AI, issued by the Trump White House in 2019. This order, however, did not regulate AI companies, but instead emphasized "removing barriers to AI innovation." Prioritizing rapid development over ethical development fails to ask whether emerging AI technologies are not only effective but also help rather than harm. Innovation in the AI field has bled into many other fields, from healthcare to incarceration, from manufacturing to education. Bills such as the Algorithmic Accountability Act, which outlines how to minimize the risks of automated systems, and the Deepfakes Accountability Act, which seeks to mitigate the spread of malicious political disinformation before an election, were introduced in Congress in 2019, with no further action as of November 2020.

A central problem with leaving the deployment of AI technologies largely unregulated is that algorithms are often biased against people of color, particularly women of color (see: Coded Bias). For instance, pretrial risk assessments are algorithms that help inform sentencing and bail decisions for defendants. Although framed as unbiased and scientific, these tools, according to the Berkman Klein Center for Internet & Society at Harvard University, rely on flawed and biased data, such as historical records of arrests, social media monitoring, charges, convictions, and sentences. Facial recognition technology has been shown to disadvantage Black people in particular. For example, an MIT study of three gender-recognition systems found error rates of up to 34% for dark-skinned women, a rate nearly 49 times that for white men. Safiya Umoja Noble has documented similar harms to women of color in search engines: Google searches for Black girls have returned highly sexualized and discriminatory results, whereas the results for white girls look vastly different. Facial recognition technology also feeds into the hyper-surveillance of Black people, who are at a higher risk of incarceration. As Ruha Benjamin writes in Race After Technology, "most EM (electronic monitoring) is being used in pretrial release programs for those who cannot afford bail, employing GPS to track individuals at all times – a newfangled form of incarceration before conviction."
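To make the mechanism concrete, here is a minimal, hypothetical sketch (our illustration, not a reconstruction of any deployed risk assessment tool; the group sizes, base rates, and use of scikit-learn are all assumptions). It shows how a model trained on arrest records shaped by uneven enforcement assigns higher "risk" scores to the more heavily policed group, even when both groups behave identically.

```python
# Toy simulation (hypothetical, for illustration only): a "risk" model trained on
# historically biased arrest records learns to score one group as higher risk,
# even though the underlying behavior of both groups is identical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical true rates of the behavior being predicted.
group = rng.integers(0, 2, size=n)      # group label: 0 or 1
behavior = rng.random(n) < 0.10         # same 10% base rate for everyone

# Biased historical labels: group 1 is policed more heavily, so the same
# behavior is far more likely to show up as an arrest in the training data.
detection_rate = np.where(group == 1, 0.9, 0.3)
arrested = behavior & (rng.random(n) < detection_rate)

# Train a "risk assessment" on the biased arrest labels, with group membership
# (or a proxy for it, such as neighborhood) available as a feature.
X = group.reshape(-1, 1)
model = LogisticRegression().fit(X, arrested)

scores = model.predict_proba([[0], [1]])[:, 1]
print(f"Predicted risk, group 0: {scores[0]:.2%}")
print(f"Predicted risk, group 1: {scores[1]:.2%}")
# The model reports roughly three times the "risk" for group 1, because the
# arrest labels encode the disparity in enforcement, not in conduct.
```

In this sketch the disparity in scores comes entirely from who gets recorded as arrested, which is the core concern with training risk tools on historical policing data.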

Moreover, the Trump administration’s Executive Order on Maintaining American Leadership in Artificial Intelligence emphasizes creating more apprenticeships and programs in STEM fields, an effort that, given existing disparities in access, would likely have an inadvertent detrimental effect on people of color. As the Pew Research Center points out, Black employees make up only 9% of STEM workers, and Latinx workers only 7%. That said, representation only scratches the surface of addressing the digital divide: Google recently fired Timnit Gebru, a Black researcher who co-led the company's Ethical Artificial Intelligence team. Gebru and other employees of color say that Google fires those who speak up against systems of oppression within the company, and that her termination was carried out so swiftly because she is a Black woman. This example shows that even with people of color in tech spaces, a cultural shift is needed: even with representation, there is still suppression.

Furthermore, a generally light approach to AI regulation may continue to harm marginalized communities. As noted above, e-incarceration runs rampant through pretrial risk assessments and facial recognition technology, which research suggests results in increased investigation and arrest rates of Black people.

Looking Towards the Future: Merging Social Work Practice with Data Science

Social workers can bring inclusive voices and practices that address the real-world problems of marginalized communities, and they can help build social cohesion into emerging technologies. Because many social workers are regularly in touch with marginalized communities and routinely address behavioral challenges, their skills can be leveraged in inclusive technology development, deployment, and the ethical use of data. Interdisciplinary collaboration between data science and social work can bring human-centered design, social justice, and anti-oppressive perspectives to emerging technologies. Anti-oppressive social work practice aims to challenge the use of power and systems that harm marginalized groups.

Using this social justice lens, many social workers are already responding to incarceration, bias, discrimination, and the "digital divide" to ensure that marginalized people have access to social equity and well-being. As Dr. Patton pointed out in a recent article on social work thinking in AI, "Social work thinking underscores the importance of anticipating how technological solutions operate and activate in diverse communities." Social workers are needed in tech development so that the technology we create is safe, inclusive, and ethical, and to do that, diverse voices must be centered.

A number of social workers are already forging anti-racist tech practices, including Dr. Courtney Cogburn, Jonathan B. Singer, Dr. Lauri Goldkind, Dr. Maria Rodriguez, and Dr. Melanie Sage. Technology has clearly seeped into every profession, including social work, but it is important for policymakers to see the need for social work within tech, not just tech within social work. Social work itself carries only a 3% risk of automation, which, according to NPR, makes it the hardest job for robots to do. This is why social work's role in AI development is imperative: social work is not quantifiable, and collaboration between emerging technologists and social workers will yield a deeper understanding of just human connection and of what it means to interface with technology.

Furthermore, this collaboration is already happening on a small scale. Organizations such as All Tech Is Human are committed to building a future of ethical and responsible tech development and to highlighting interdisciplinary expertise. DataedX provides data equity training, while Parity AI offers data-driven assessments to increase fairness and transparency within AI. Equality Labs uses political organizing, community research, and digital security to fight oppression, and Data 4 Black Lives is a movement of activists who aim to use data science to uplift and empower the Black community. As for the work underway to merge social work thinking into tech development, Thomas Smyth and Jill Dimond, two computer scientists, have framed anti-oppressive work in data science as "anti-oppressive design." This design model emphasizes an inclusive and democratic workplace, stating that, "As with any product or project with a stated ethical foundation, the environment in which the product is created or the project is carried out is at least equal in importance to the end result."

We at SAFELab call for collective action to cultivate safe and inclusive technologies, including the following priorities:

  • Comprehensive federal legislation for safe and inclusive technologies: legislation that highlights the importance of ethical development, deployment, and usage of data and AI technology. The legislation would be informed by interdisciplinary networks, ethical frameworks, and human-centered approaches, and would emphasize data autonomy and individual ownership of data, as well as privacy and consent.
  • Participation of vulnerable communities and social workers in emerging technologies: a collaborative relationship with the community is an imperative aspect of regulating tech. The role of social work would be to measure and mitigate the negative effects of current and new technologies. For example, cross-verifying data between community partners and social workers would provide more context for decisions made by AI, creating checks and balances between real people and AI.
  • Resource allocation to address the digital divide: we call for cultural diversity and inclusion in technology, which requires building a pipeline of talent from marginalized communities. Increased funding for public programs that promote inclusive tech education is necessary; for instance, public school programs teaching children of color tech skills would be an ideal form of resource allocation, cultivating tech and STEM skills in historically marginalized communities.

Social workers have the potential to bring a strong sense of human-centered praxis, as well as anti-oppressive practices, into the world of tech development. The National Association of Social Workers Code of Ethics highlights social justice, the importance of human relationships, and the dignity of individuals. Approaching technology with a social work lens could lead to more ethical AI development and deployment, and to the use of big data and technology for social good.

Authors

Siva Mathiyazhagan
Dr. Siva Mathiyazhagan is the Associate Director of Strategies and Impact at the SAFELab, Columbia University. He strategizes and facilitates local-global partnerships to scale SAFELab innovations and research projects to accelerate impact with local communities and transnational tech-social work...
Shana Kleiner
Shana Kleiner is a current MSW candidate in advanced generalist practice and programming at Columbia University. She is a recent graduate from Skidmore College, where she earned her BSW, and is especially interested in restorative justice dialogue and community building. Shana has experience researc...
Desmond Patton
Associate Professor Desmond Upton Patton is a Public Interest Technologist who uses qualitative and computational data collection methods to examine the relationship between youth and gang violence and social media; how and why violence, grief, and identity are expressed on social media; and the rea...
