Why AI Policy Needs a Sociotechnical Perspective

Brian Chen / May 29, 2024

May 15, 2024: At a podium in the Capitol, Senate Majority Leader Chuck Schumer (D-NY) announces the Senate AI Policy Roadmap, flanked left to right by Sen. Todd Young (R-IN), Sen. Martin Heinrich (D-NM), and Sen. Mike Rounds (R-SD).

Amid the near-universal censure by civil society groups of the legislative “roadmap” released earlier this month by the US Senate AI Working Group—it was described as “completely devoid of substantive [civil rights] recommendations” and as threatening to “consolidat[e] power back in AI infrastructure providers”—one particular criticism has gone largely undiscussed: the roadmap fails because it ignores the sociotechnical dimensions of AI. If Senate Majority Leader Chuck Schumer (D-NY) and his colleagues are advancing unsatisfactory policy solutions, it is due not only to the frustrations of seeking bipartisan consensus or the lobbying might of tech industry titans. It’s also because they see AI, first and foremost, as a technical breakthrough whose engineering innovations must be safeguarded and advanced.

Such a view is perilously incomplete. To advance meaningful solutions, legislators must understand AI as part of a larger “sociotechnical system.” It is that broader system, rather than just the technical artifact itself, that must be the focus of policy analysis.

As my colleagues and I detail in a pair of just-released policy briefs, a “sociotechnical” approach means viewing society and technology together as one coherent system. This perspective recognizes that the performance, effectiveness, and downstream consequences of technologies derive neither from technical design nor from social dynamics in the abstract, but from the real-world interplay between the two. It also encourages multidisciplinary expertise—not just that of hard-science technologists—to measure and evaluate AI’s impacts.

While certain governance efforts generally reflect this approach—most notably the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s newly launched Assessing Risks and Impacts of AI (ARIA) program—the Senate AI Roadmap is an example of policymakers going a different route. In doing so, they miss significant opportunities to shape AI policy in ways that would serve the interests of the public, not just industry.

Consider the roadmap’s conceptualization of AI’s “risks.” To safeguard against these risks, the AI Working Group encourages Senate committees to “consider a resilient risk regime that focuses on the capabilities of AI systems[.]” The phrase is somewhat ambiguous, but the surrounding language clues us in to the Senators’ intent: “focusing on technical specifications such as the amount of computation or number of model parameters.” Accompanying policy recommendations specify “the development and standardization of risk testing and evaluation methodologies and mechanisms, including red-teaming, sandboxes and testbeds, commercial AI auditing standards, bug bounty programs, as well as physical and cyber security standards.”

There is, strictly speaking, nothing objectionable in what the Senators propose. The problem is what’s missing. AI’s harms stem not from technical capacity alone. Indeed, ordinary technologies can produce the same harms as those on the cutting edge. The benefit of viewing AI from a sociotechnical perspective is that it brings into focus the harms (and innovations) that result from how the technology interfaces with the systems around it—the interplay between AI and labor practices, working conditions, social relations, decision-making power, et cetera. Advanced technical specs matter, but a sociotechnical perspective cautions policymakers to take them with a grain of salt; what really matter are the technical and societal dynamics that together make the system operate as it does and produce the outcomes it does. A capabilities-focused approach indexed to technical standards and methodologies, like the one the AI Working Group recommends, misses that.

Consider another of the roadmap’s problems. Over and over, the Senate AI Working Group unquestioningly accepts marketing promises about AI’s ability to solve complex social problems. There are breathless descriptions of AI’s power to “fundamentally transform the process of science, engineering, or medicine.” This sort of techno-solutionism is not simply political theater; it means the roadmap adopts a blinkered perspective that overlooks much about how innovation actually occurs.

For all its recommendations that legislative committees “promote innovation of AI systems that meaningfully improve health outcomes and efficiencies in health care delivery,” the Senate AI roadmap pays very little attention to the labor of healthcare workers. When new tech is deployed in the workplace, that labor is usually marginalized, if not made invisible, while the machine’s technical advancements get the credit for new breakthroughs. A sociotechnical perspective recognizes that new technologies are always just one part of any innovation that occurs, and that it takes serious (and usually uncredited) effort by workers to “integrate” new technologies into existing systems effectively. This understanding should also call into question the automatic assumption that powerful technologies inevitably automate jobs and displace workers, and that policymakers’ only tool in response is worker training and upskilling (a trap the AI Working Group falls into). If lawmakers want to “harness the full potential of AI,” their attention would be better aimed at improving the working conditions of those harmed by algorithmic management, employer surveillance, and work intensification than at tilting at the windmills of automation.

Since the roadmap’s release, Senator Schumer—in a possible acknowledgment of its lukewarm reception—has convened Senate committee chairs to urge them to advance AI legislation this session. That’s necessary and encouraging. All the same, the Senate AI Working Group roadmap is not just a highly visible misstep among the highest ranks of government; it’s emblematic of how some policymakers misunderstand AI, treating it strictly as a technical artifact for which technical safeguards are the sole solution. Lacking a sociotechnical perspective, policy interventions on AI will inevitably fall short.

Authors

Brian Chen
Brian Jonathan Chen is the Policy Director at Data & Society. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, social equality, and peoples’ self-determination. Before joining D&S, Brian was a Senior Staff Attorney...
