
Artifice and Intelligence

Emily Tucker / Mar 17, 2022

Emily Tucker is the Executive Director of the Center on Privacy & Technology at Georgetown Law.

From the essay Why I Stopped Hating Shakespeare, by James Baldwin

Words matter.

Starting today, the Privacy Center will stop using the terms “artificial intelligence,” “AI,” and “machine learning” in our work to expose and mitigate the harms of digital technologies in the lives of individuals and communities.

I will try to explain what is at stake for us in this decision with reference to Alan Turing’s foundational paper, Computing Machinery and Intelligence, which is of course most famous for its description of what the paper itself calls “the imitation game,” but what has come to be known popularly as “the Turing test.” The imitation game involves two people (one of whom takes the role of the “interrogator”) and a computer. The object is for the interrogator, physically separated from the other player and the computer, to try to discern through a series of questions which of the responses to those questions is produced by the other human and which by the computer. Turing projects that:

…in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.

Most of the scientific, political, and creative uptake of the imitation game engages with it (critically or defensively) as a test for machine intelligence. But, however provocative the question of the imitation game’s sufficiency as such a test may be, it is not the question that Turing’s paper actually claims to answer. In fact, in the first paragraphs of the paper, he explains that he is replacing the question “can machines think?” with the question of whether a human can mistake a computer for another human. He does not offer the latter question in the spirit of a helpful heuristic for the former question; he does not say that he thinks these two questions are versions of one another. Rather, he expresses the belief that the question “can machines think?” has no value, and appears to hope affirmatively for a near future in which it is in fact very difficult if not impossible for human beings to ask themselves the question at all:

The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

While there is no indication that we are close to realizing Turing’s narrower prediction of a computer that human beings reliably mistake for a human being, I am concerned that Turing’s larger prediction has nevertheless been fulfilled. It is now quite usual, across all of journalism as well as popular and scholarly literature, to read of machines not only “thinking” but engaging in a wide range of contemplative and deliberative activities, such as “judging,” “predicting,” “interpreting,” “deciding,” “recognizing,” and of course “learning.” The terms “artificial intelligence,” “AI,” and “machine learning” placehold everywhere for the scrupulous descriptions that would make the technologies they refer to transparent for the average person.

Our lack of self-consciousness in using, or consuming, language that takes machine intelligence for granted is not something that has co-evolved with actual advances in computational sophistication of the kind that Turing, and others, anticipated. Rather, it is something to which we have been compelled in large part through the marketing campaigns, and market control, of tech companies selling computing products whose novelty lies not in any kind of scientific discovery, but in the application of turbocharged processing power to the massive datasets that a yawning governance vacuum has allowed corporations to generate and/or extract. This is the kind of technology now sold under the umbrella term “artificial intelligence.”

Meredith Whittaker describes some of this history in a recent article comparing the tech industry’s capture of computer science research to the U.S. military’s commandeering of scientific research and research institutions during the Cold War. She identifies the 2012 debut of an algorithm called AlexNet as a turning point:

AlexNet mapped a path forward for large tech companies seeking to cement and expand their power…Tech companies quickly (re)branded machine learning and other data-dependent approaches as AI, framing them as the product of breakthrough scientific innovation. Companies acquired labs and start-ups, and worked to pitch AI as a multitool of efficiency and precision, suitable for nearly any purpose across countless domains. When we say AI is everywhere, this is why.

Corporations have essentially colonized the imaginative space that Turing’s paper asked us to explore. Instead of pursuing the limits of computers’ potential for simulated humanity, the hawkers of “AI” are pursuing the limits of human beings’ potential to be reduced to their calculability. One of the reasons that tech companies have been so successful in perverting the original imitation game into a strategy for the extraction of capital is that governments are eager for the pervasive surveillance powers that tech companies are making convenient, relatively cheap, and accessible through procurement processes that evade democratic policy making or oversight.

For institutions, private and public, that have power consolidation as a primary goal, it is useful to have a language for speaking about the technologies that they are using to structure people’s lives and experiences, one which discourages attempts to understand or engage those technologies. Whatever the merit of the scientific aspirations originally encompassed by the term “artificial intelligence,” it’s a phrase that now functions in the vernacular primarily to obfuscate, alienate, and glamorize. In 2021, the FrameWorks Institute, in partnership with the MacArthur Foundation, released a research brief describing some of the most prevalent misconceptions about what “AI” is and how it works. What struck us most at the Privacy Center about these findings was that most of the misconceptions seemed to fall into one of two overarching categories, which are two sides of the same coin: (1) the public does not know what “AI” is and (2) the public assumes that “AI” is smarter than they are. For example, in conversations with FrameWorks researchers about predictive algorithms, participants focused on the need to give the algorithm “good data,” but appeared not to recognize the algorithm itself as something constructed. At the same time, FrameWorks found that “people think of predictive algorithms as divination, which is the ability to see into the future and know what is going to happen based on the information gathered in the present.”

That we are ignorant about, and deferential to, the technologies that increasingly comprise our whole social and political interface is not an accident. The AI demon of speculative fiction is a superintelligence that threatens to dominate by stripping human beings of any agency. The threat of lost agency is real, but not because computers are yet capable of anything similar to, let alone superior to, human intelligence. The threat is real because the satisfaction of corporate greed and the perfection of political control require people to lay aside the aspiration to know what their own minds can do.

And this is the other side of the question, “can machines think?” which Turing derided as meaningless. To ask the question “can machines think?” is necessarily to ask the question “what does it mean for human beings to think?” Given the facts of Turing’s life, and the violent persecution he suffered for failing to perform his humanity in the prescribed manner, he may have had very justifiable reasons for wanting to cut off our cognitive access to questions like this. But Turing did not envision a society in which computers that still have no capacity to show up to human beings as friends would nevertheless be treated by human beings as authorities. For us today, living in such a society, the question of what it means for human beings to think is a liberatory question. Protecting the possibility for people to ask that question, alone and together, is part of what we at the Privacy Center believe privacy is for.

Being intentional in how we use language is only one aspect of our commitment to integrity in our research, advocacy, and communication. But it is a deeply important aspect. In his profound and beautiful essay Why I Stopped Hating Shakespeare, James Baldwin tells the story of his struggle to relate to the English language as something other than an oppression:

My quarrel with the English language has been that the language reflected none of my experience. But now I began to see the matter in quite another way. If the language was not my own, it might be the fault of the language; but it might also be my fault. Perhaps the language was not my own because I had never attempted to use it, had only learned to imitate it. If this were so, then it might be made to bear the burden of my experience if I could find the stamina to challenge it, and me, to such a test…

This test, let’s call it “the Baldwin Test,” powerfully counters the impulses of the Turing Test. Where Turing’s test treats the desire to distinguish between the real and the imitation as a distraction, Baldwin treats it as a North Star.

Our work at the Privacy Center is not literature (though I suspect that a couple of our staff members may be secret poets!). But each of us has had our own powerful experiences of words as world-creating. And to the extent that our words might make certain worlds even a little more or less possible for those to whom we speak and for whom we write, we want to wield them carefully.

This is why we have decided to try to rise to Baldwin’s challenge by removing “artificial intelligence,” “AI,” and “machine learning” from our institutional vocabulary. It is not a statement of creed, or an attempt to establish new taboos, but a creative practice that we hope will support intellectual discipline. What will we use in place of this terminology? While we expect that each new context will challenge us to answer this question in a slightly different way, we have come up with a few guidelines to start with, and we share them here for anyone interested in joining us in this project.

Instead of using the terms “artificial intelligence,” “AI,” and “machine learning,” the Privacy Center will:

(1) Be as specific as possible about what the technology in question is and how it works. For example, instead of saying “face recognition uses artificial intelligence,” we might say something like “tech companies use massive data sets to train algorithms to match images of human faces.” Where a complete explanation is disruptive to our larger argument, or beyond our expertise, we will point readers to external sources.

(2) Identify any obstacles to our own understanding of a technology that result from failures of corporate or government transparency. For example, instead of saying “employers are using AI to analyze workers’ emotions” we might say “employers are using software advertised as having the ability to label workers’ emotions based on images of them from photographs and video. We don’t know how the labeling process works because the companies that sell these products claim that information as a trade secret.”

(3) Name the corporations responsible for creating and spreading the technological product. For example, instead of saying “states use AI to verify the identities of people applying for unemployment benefits,” we might say “states are contracting with a company called ID.me, which uses Amazon Rekognition, a face matching algorithm, to verify the identities of people applying for unemployment benefits.”

(4) Attribute agency to the human actors building and using the technology, never to the technology itself. This needn’t always require excessive verbiage. For example, we might substitute “machine training,” which sounds like something a person does with a machine, for “machine learning,” which sounds like a computer doing something on its own.

We don’t yet know exactly what will happen to our thinking and writing without these crutches, but finding out is part of the point. As Baldwin said:

[The poet’s] responsibility, which is also his joy and his strength and his life, is to defeat all labels and complicate all battles by insisting on the human riddle, to bear witness, as long as breath is in him, to that mighty, unnameable, transfiguring force which lives in the soul of man…

Thanks to the staff of the Privacy Center and to Philip Johnson for helping to think through what this change means for our work, and to David McNeill for helping to think through the underlying ideas. This piece originally appeared on the Center's Medium page.
