But it also has some unfinished business regulating online platforms, argues Mark MacCarthy, author of the forthcoming book Regulating Digital Industries.
Since the explosion of ChatGPT about a year ago, it seems that policymakers everywhere want to regulate artificial intelligence. AI has become a focus of congressional attention with Senator Chuck Schumer’s recent AI Insight Forum. This is all to the good: many of our troubles with the online world stem from a failure to regulate early on, when the real and potential harms of a new technology first become apparent.
But there is a right way to do it and a wrong way. Moreover, in pursuing the shiny new toy of AI regulation, Congress should not neglect the vital task of regulating digital platforms.
For three administrations, the US has followed the principle of regulating the uses of AI, not the technology itself. This comes from taking seriously the conclusion reached years ago by the Stanford University Study Panel of AI experts that “…attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”
In this sector-specific approach, if AI is used, for instance, to assess creditworthiness, the Consumer Financial Protection Bureau must make sure the assessment follows the fair lending laws. If AI is used to make employment or promotion decisions, the Equal Employment Opportunity Commission should make sure it doesn’t discriminate against legally protected classes.
Federal Trade Commission Chair Lina Khan summarized this approach to AI regulation succinctly: “There is no AI exception to the laws on the books.”
But more might need to be done than simply applying current law. In moving beyond the current approach, Congress might consider what the United Kingdom is doing. In its recent white paper, the UK government likewise focuses on AI as used in context. In addition, however, it urges its regulators to consider cross-cutting AI principles including safety, transparency, fairness, accountability, and redress. After a time, the government suggests, it might require regulators to give “due regard” to these principles. But it has ruled out creating a new agency to regulate AI as such, or attempting the impossible task of regulating AI in all its uses.
My colleague at Brookings, Alex Engler, makes this incremental approach more specific. Instead of requiring regulators to give due regard to vague AI principles, he suggests expanding the powers of existing agencies to meet the special challenges of AI. This would include enabling them to provide the public with disclosures of AI use, meaningful information about computational processes, access to and correction of data, and a right to opt out of algorithmic decision-making. New authority might also empower existing regulators to require AI systems to be accurate both overall and for protected demographic groups. Regulators might also be authorized to gain access to training data and the AI model itself.
Congress should move this idea to the top of its agenda. It meets some of the special challenges of AI without establishing a new AI regulator.
But it is not enough to give existing regulators new powers to regulate AI within the areas of their jurisdiction, because AI applications can greatly exacerbate currently unregulated harms such as privacy invasions, manipulation, and misinformation, especially in the context of political campaigns. Existing laws simply don’t address those challenges.
The answer to those problems, however, is not to regulate the AI models. It is to pass laws addressing these unregulated harms, and in the process to make sure that regulators have the power to prevent them from being exacerbated through the use of AI tools.
There’s some low-hanging legislative fruit for Congress to grab. The American Data Privacy and Protection Act (ADPPA) would establish a national privacy law; it almost made it through Congress last year. The Honest Ads Act, which has been pending for several years, would require digital sponsorship identification for political ads. Platform accountability legislation would mandate transparency, explanations for user takedowns, risk assessments, and access to data for researchers; this common-sense proposal also languished in the last Congress. All of these measures would address AI issues in their respective policy areas. It is timely and urgent to take them up in the current Congress.
But there’s also a push for a new digital regulator to address online harms. In my forthcoming book, Regulating Digital Industries, I outline how a digital regulator must be empowered to promote competition, privacy, and good content moderation in the lines of business that are central to the digital economy of the 21st century, including search, social media, e-commerce, mobile app infrastructure, and ad tech. A digital regulator would have full authority over AI used to violate these new requirements.
A dedicated digital regulator is an idea whose time is coming. Recently Senators Elizabeth Warren (D-MA) and Lindsey Graham (R-SC) introduced the Digital Consumer Protection Commission Act to regulate digital companies with respect to competition, privacy, transparency and national security. They join Senators Michael Bennet (D-CO) and Peter Welch (D-VT) whose Digital Platform Commission Act would also establish a digital regulator. Both proposals provide the regulator with authority over AI systems used by digital platforms.
Should Congress regulate AI? Of course, and the way forward should be to grant agencies new powers to meet AI challenges.
But Congress can do more than one thing at a time. It has some unfinished business in regulating the industries that are at the core of our digital economy. It should take up the task of establishing a modern, agile regulatory structure for digital industries.
Mark MacCarthy is adjunct professor at Georgetown University in the Graduate School’s Communication, Culture, & Technology Program and in the Philosophy Department. He teaches courses in technology policy including on content moderation for social media, the ethics of speech, and ethical challenges of AI. He is also a Nonresident Senior Fellow in the Institute for Technology Law and Policy at Georgetown Law and a nonresident Senior Fellow in Governance Studies at the Center for Technology Innovation at the Brookings Institution. He served as a professional staff member of the U.S. House of Representatives’ Committee on Energy and Commerce, where he handled telecommunications, broadcasting, and cable issues, and as a regulatory analyst at the U.S. Occupational Safety and Health Administration. He was in charge of the Washington office for Capital Cities/ABC, served as senior vice president for public policy for Visa, Inc. and ran the public policy division of the Software & Information Industry Association. He is the author of Regulating Digital Industries: How Public Oversight Can Encourage Competition, Protect Privacy, and Ensure Free Speech (Brookings 2023). MacCarthy holds a B.A. from Fordham University, an M.A. in economics from Notre Dame and a Ph.D. in philosophy from Indiana University.