Proposing the CASC: A Comprehensive and Distributed Approach to AI Regulation

Alex Engler / Aug 31, 2023

Alex C. Engler is a Fellow at the Brookings Institution and an Associate Fellow at the Center for European Policy Studies, and teaches AI policy at Georgetown University, where he is an adjunct professor and affiliated scholar.

Algorithmic systems are used to make many critical socioeconomic determinations—including in educational access, job discovery and hiring, employee management, financial services (such as mortgage pricing and property appraisal), rent setting, tenant screening, medical provisioning, and medication approval. This proliferation of algorithms is a defining issue of modern economic and social policy, with demonstrated implications for income equality, social mobility, health outcomes, and life expectancy.

Yet while the adoption of algorithms is nearly universal, the specifics of each application—the type of algorithms used, the data they manipulate, the sociotechnical processes they contribute to, and the risks they pose—vary greatly. Hiring algorithms have little in common with healthcare cost estimation algorithms or property valuation models. The role of algorithms in socioeconomic determinations is so manifold and diverse that it is not feasible, or even desirable, to create one set of algorithmic standards.

Therefore, a defining challenge of governing algorithms is crafting a regulatory approach that is comprehensive yet still allows application-specific rules and oversight by domain experts. In a new Brookings Institution report, I propose a novel solution: enabling comprehensive AI regulation through application-specific rulemaking.

Proposing the Critical Algorithmic Systems Classification (CASC)

To address this challenge, I am proposing a new regulatory tool, the Critical Algorithmic Systems Classification, to be paired with expanded investigative powers for key regulatory agencies. The CASC would be a new regulatory designation, applied through the U.S. federal rulemaking process, which would lead to legally binding rules for a specific type of algorithm. Through the rulemaking, an agency would need to demonstrate that a type of algorithmic system meets certain criteria and would then be allowed to set required standards—related to disclosure, transparency, correction of inaccurate data, efficacy, non-discrimination, and others—for that type of algorithmic system.

The CASC isn’t meant to govern just any algorithms, as that would be a wild expansion of the scope of government. Instead, it should be limited by a set of three criteria.

First, the CASC isn’t meant to regulate relatively trivial applications, like generative AI for fashion design or a recommender system for movies. For an algorithm to receive the CASC designation, it should pose a risk of harm to healthcare access, economic opportunity, or access to essential services—like those mentioned in the opening of this op-ed.

Second, the CASC is meant for algorithms that present broad societal risks, and so the designation should be limited by the extent of their impact. This could mean covering only algorithms that affect a certain number of people (perhaps in the hundreds of thousands) or a high proportion of a specific group, such as a protected class or people in a particular occupation (e.g., teachers).

Third, not just any agency could regulate any algorithm. Agencies would be limited to using the CASC process on algorithms that affect processes over which the agency already has congressionally delegated authority. This means the Department of Housing and Urban Development would regulate algorithms in housing, the Department of Labor those in employment, and so on.

Collectively, these criteria mean that agencies would be narrowly empowered to regulate algorithms that affect critical socioeconomic determinations at a large scale in sectors they have traditionally governed, and would not go beyond that.

To make the CASC effective, regulatory agencies would also need investigative powers so that they can accurately understand and oversee the algorithms they are writing rules for. Paired with the CASC, these agencies would be granted administrative subpoena authority, enabling them to collect data, code, models, and technical documentation from companies. The agencies would use these subpoenas to comprehensively review a type of algorithm to determine if it meets the criteria of a CASC system. Once a type of algorithm was regulated through the CASC rulemaking process, the agency would continue to use the subpoena authority to monitor those covered algorithms for compliance. These two authorities would complement one another, together forming an effective AI regulatory regime.

Considering the CASC Approach

The CASC approach has a lot of admirable qualities, but it is far from perfect. Let’s first consider a few advantages.

The CASC employs two well-established authorities—administrative subpoenas and federal rulemaking—and so while it is a novel approach to governing algorithms, it is not untrodden ground. Further, where current legal protections are sometimes limited to the deployer of an algorithm, the CASC would let regulators enforce rules on vendors, too. In many markets, vendors are the most important players in the design and development of algorithmic systems, and placing requirements on them is also the point of minimal interference in the market. The CASC is also relatively “future-proof,” meaning it enables government agencies to continuously adapt to emerging algorithmic systems. It also offers a functional exemption for small businesses creating new algorithmic use cases, since they would not immediately qualify under the extent-of-impact criterion.

The CASC approach is also preferable to a new regulatory agency for algorithms. A new algorithmic regulator would create two parallel regulatory mechanisms—one for human processes, governed by preexisting sectoral regulators, and one for algorithms, governed by the new regulator. Over time, as the role of algorithms increases, the workload of the central regulator would expand while that of the sectoral regulators would shrink, creating a long-term imbalance. The CASC instead lets sectoral regulators take the lead, allowing the regulatory process to benefit from their existing domain expertise.

These are meaningful advantages, but the CASC has significant shortcomings, too. To be blunt, I do not believe it is the best possible intervention for the problem described here, and neither did most of the many AI experts I consulted when drafting this proposal. The best option would be to systematically update each of our civil rights and consumer protection laws to confront the digital and algorithmic age. From a strict policy perspective, that is the better approach. Yet, given that systematically updating most legal protections would be a politically challenging and fraught endeavor in the U.S., the CASC approach may function as a workable and effective alternative.

The CASC is also limited by the length of the regulatory process, which can easily take years. What’s more, the CASC is inherently retrospective, with regulations lagging behind new, and potentially harmful, algorithmic applications. One potential solution would be to pair the CASC with a more general rights-based approach to ensure that all algorithms meet a few universal characteristics—such as disclosure, non-discrimination, and honesty in advertising. Paired with a private right of action, this could provide a basic level of universal algorithmic rights, while CASC rulemaking would be aimed at setting specific and higher standards for especially impactful systems.

Despite its drawbacks, the CASC approach would enable needed AI regulations within clear limitations and without upending the current functioning of the federal government. With Congress turning its attention to AI this fall, the CASC approach warrants serious consideration to ensure effective AI protections far into the future.

Read Engler's full proposal at the Brookings Institution.
