Welcome to Session 2 of the 118th US Congress: AI Policy Edition

Anna Lenhart / Jan 3, 2024

Image: Jamillah Knowles & We and AI / Better Images of AI / People and Ivory Tower AI / CC-BY 4.0

A year ago, on Jan 3, 2023, the 118th US Congress began. Along with several other Congressional technology policy staffers (my job at the time), I flew to Las Vegas for the Consumer Electronics Show (CES). In between sessions, we watched the historic House Speaker election on C-SPAN and wondered what it meant for democracy and, more narrowly, for technology policy.

Personally, I was reflecting on the incredible amount of advocacy that went into attempting to pass comprehensive data protection and competition reform bills. I sat surrounded by the newest sensor- and AI-enabled contraptions, aware that the odds of progress on those issues in the 118th Congress were slim. Despite being surrounded by robots, however, I was unprepared for what happened just weeks later. In February 2023, columnist Kevin Roose's “conversation” with Bing's Sydney chatbot was published on the front page of the New York Times. After three years in Congress, I had seen tech journalism capture the attention of lawmakers, but this was different.

A few weeks after Roose's story, many in the news media latched on to the narrative that Congress was behind on AI. I knew the situation was more complicated than that. AI risk is complicated, and Congress already had dozens of proposals in front of it that were relevant to generative AI, including landmark bills proposed in the 117th Congress. In April, with the help of Tech Policy Press and friends, I started tracking federal legislative proposals that pertain to generative AI.

One year later, the list of proposed legislation is certainly longer. What were the themes from 2023, and what should we look out for in 2024?

1. Privacy, competition reform, and social media policy struggled for attention while lawmakers “started from scratch” on AI.

Advocates worked hard in 2023 to explain the ways bills like the American Innovation and Choice Online Act (S.2033), the American Data Privacy and Protection Act (ADPPA) (H.R. 8152 [117th]), and the Platform Accountability and Transparency Act (PATA) (S.1876) would provide regulators with tools to address aspects of generative AI risk. But the allure of newness took hold. The Senate Majority Leader believed the challenges posed by generative AI were so novel they required a new expert stakeholder engagement process more “innovative” than public hearings.

There are, however, a few sprinkles of sustained progress. The prominent antitrust bills from the last Congress were reintroduced. A few weeks ago, the Senate Judiciary Committee convened a hearing on market concentration among the firms producing and deploying AI, at which witnesses expressed support for existing competition reform bills.

The privacy world knew ADPPA would face headwinds this Congress, as the House Energy and Commerce Committee had more turnover than any other committee. New members had to get up to speed on every nuanced compromise underlying the 144-page bill first put forward in 2022, and they were now approaching the text with tools like ChatGPT in mind. Generative AI tools have raised questions about how to handle sensitive public data and whether its inclusion in training data is an appropriate secondary use. Amid these headwinds, US states are continuing to pass privacy bills and regulations for algorithmic processes, some similar to ADPPA Sec. 207 (civil rights and algorithms), which mandates algorithmic impact assessments, making the compromise between a national standard and select carve-outs for state laws even more complicated.

On the social media front, PATA was reintroduced in June 2023, but most discussions regarding harmful content online centered on the Kids Online Safety Act (S.1409). Lawmakers, however, are aware that social media platforms will be the channels through which content created by generative AI is disseminated and amplified (see the next theme).

I'm hopeful that as the craze around task forces, summits, and various other stakeholder convenings in 2023 comes to a close, lawmakers will look for practical policies and return to the importance of competition reform, data protection, and content moderation transparency. I'm not expecting much more than hearings on these topics this session, but to the extent that hearings can highlight the overlap between these digital markets oversight basics and generative AI, that will be a win.

2. Congress wants consumers to be able to distinguish between AI-generated content and “authentic” content.

The first bill to mandate digital watermarks and disclosures was introduced in 2019 by Congresswoman Yvette Clarke (D-NY): the DEEP FAKES Accountability Act (H.R.3230 [116th]). The idea gained significant traction this year, with several lawmakers introducing their own variations. I count six proposals that would mandate some form of disclosure related to AI-generated content. The bills vary in the types of content they cover. For example, the AI Labeling Act (S.2691) requires disclosures for AI-generated content generally, while the DEEP FAKES Accountability Act centers on “advanced technological false personation records” regardless of the underlying technology. A few lawmakers have emphasized the importance of disclosures for generative AI used in political ads. Most proposals recognize that creating an information environment in which users can identify AI-generated content will require a set of standards and responsibilities that fall to the developers of generative AI tools, hardware manufacturers, users of generative AI tools, and the online platforms that disseminate content. The POST ID Act (S.3003) takes a slightly different approach, allowing the United States Postal Service to conduct identity verification and providing a pathway for less technical authentication.
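
To make the disclosure concept concrete, here is a minimal sketch of the kind of machine-readable provenance label these bills contemplate. The schema, field names, and HMAC signature are assumptions for illustration only; real deployments would follow an emerging standard such as C2PA rather than an ad hoc format like this.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

def build_disclosure_manifest(content: bytes, generator: str, signing_key: bytes) -> dict:
    """Build a hypothetical provenance manifest for a piece of AI-generated content."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds the label to this exact file
        "ai_generated": True,                                   # the disclosure itself
        "generator": generator,                                 # e.g., name/version of the model or tool
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    # A production scheme would use public-key signatures and a certificate chain;
    # an HMAC over the serialized manifest stands in here for simplicity.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return manifest

if __name__ == "__main__":
    fake_image = b"...synthetic image bytes..."
    print(json.dumps(build_disclosure_manifest(fake_image, "example-model-v1", b"demo-key"), indent=2))
```

The hard policy questions sit outside a snippet like this: who must attach the label, who verifies the signature, and what happens when the metadata is stripped.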

Notably, the 2024 National Defense Authorization Act (H.R.2670), signed into law in late December 2023, includes Section 1543, which creates a “Generative AI Detection and Watermark Competition” to encourage research on and testing of the technical standards needed for disclosures. Meanwhile, international standards bodies and industry groups are actively working on those technical standards and will continue to do so in 2024. Once the standards are ready (a non-trivial endeavor), governments worldwide will likely mandate that key stakeholders abide by them.
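
For a sense of what the “detection” side of such a competition involves, here is a toy statistical check in the spirit of published green-list text watermarking schemes (e.g., Kirchenbauer et al., 2023). The hash-based vocabulary split and the near-0.5 baseline are assumptions for the sketch, not part of any bill or standard.

```python
import hashlib

def in_green_list(prev_token: str, token: str, key: str = "demo-key") -> bool:
    # Toy partition: hash (key, previous token, token) and call roughly half of
    # all continuations "green." A watermarking generator would bias toward green.
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    # Fraction of token transitions that land in the green list.
    hits = sum(in_green_list(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text should hover near 0.5; text from a generator that favors
# green tokens scores higher, which a simple z-test can flag as watermarked.
sample = "the quick brown fox jumps over the lazy dog".split()
print(round(green_fraction(sample), 3))
```

Even this toy version hints at why standards matter: detection only works if the detector and the generator agree on the key and the partition scheme.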

3. Congress is considering “hub and spoke” approaches to assessing AI risks and benefits.

When conceptualizing AI policy, one challenge is that risk assessments and mitigations need to be applied both to the underlying technologies (emotion recognition, foundation models, predictive scoring, etc.) and to the context in which a model or application is deployed (such as schools, vehicles, employment decisions, or criminal justice). This requires regulators to have a breadth of technical and subject matter expertise. One way to structure oversight that can address such complexity is as a “hub and spoke,” where a set of provisions (disclosure mandates, testing, certification of auditors, etc.) falls to a new or existing agency “hub,” while context-specific mandates and initiatives fall to sector-based agencies such as the Departments of Education, Housing and Urban Development, Transportation, and Defense, and the Food and Drug Administration. Regulation applied by these “spokes” can be shaped in concert with experts who understand how the deployment of AI systems could benefit or harm stakeholders within each specific sector.

Lawmakers have put forth “hub” proposals that run the gamut from creating a large new agency to the reintroduction of the Algorithmic Accountability Act (AAA) (S.2892), which creates a 25-person “Bureau of Technology” within the FTC. At the core of these proposals is the idea that government agencies need the capacity to ensure that a subset of high-risk AI systems are accompanied by thorough documentation of their data curation and of the processes used to assess and monitor risks. The newest proposal in this vein is the Artificial Intelligence Research, Innovation, and Accountability Act (AIRIA) (S.3312), which directs the Commerce Department to set up infrastructure to oversee self-certification of critical-impact artificial intelligence systems.

Lawmakers have also considered ways AI systems can be advanced or restricted under existing sector-specific agencies, focusing on the “spokes.” For example, the Republican staff of the Senate Health, Education, Labor and Pensions (HELP) Committee released a report in September arguing that “a sweeping, one-size-fits-all approach for regulating AI will not work and will stifle, not foster, innovation.” The report also explores the risks and benefits of AI applications in medicine, education, and the workforce. Congress has also introduced sector- and use case-specific proposals, such as investing in AI systems to solve challenges like algal blooms (S.3348), border security (H.R. 6391), and rail safety (H.R.5871). And lawmakers have proposed “bright lines”: use cases that are banned regardless of risk assessment results, like “materially deceptive AI-generated audio or visual media” in elections (S.2770) or the use of AI in autonomous nuclear weapons (H.R.2894).

In 2024, I expect to see more hearings, proposals, and speeches outlining ways to structure processes for AI system risk assessment. I'll be keeping a close eye out for proposals that require pre-market approvals for AI systems, an idea being considered in other nations that is currently missing from the dialogue in Congress.

4. Congress (and the President) are using procurement as a vehicle for safety standards.

Advocates have pushed the idea of requiring algorithmic impact assessments from companies that deploy AI systems within the government, as a way to incentivize better testing and documentation. The concept was included in the Advancing American AI Act (S.1353 [117th]), enacted in the last Congress. The idea received a bolt of energy this fall when the White House released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110) and subsequent OMB guidance, which mandates that procured AI be evaluated and that agencies obtain documentation such as model cards.
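
Since “model cards” carry a lot of weight here, a concrete (and deliberately simplified) example may help. Model cards are structured documentation of a model's intended use, training data, and evaluations (Mitchell et al., 2019); the fields and values below are illustrative assumptions, not the OMB schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    # Illustrative fields only; actual procurement documentation requirements
    # will follow OMB and NIST guidance, not this sketch.
    model_name: str
    intended_use: str
    training_data_summary: str
    evaluation_results: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

# A hypothetical card an agency might request from a vendor before deployment.
card = ModelCard(
    model_name="example-benefits-triage-v2",
    intended_use="Routing benefit applications for human review; not for final decisions.",
    training_data_summary="De-identified historical applications, 2015-2022.",
    evaluation_results={"accuracy": 0.91, "false_positive_rate_gap": 0.03},
    known_limitations=["Untested on applications filed in languages other than English."],
)
print(json.dumps(asdict(card), indent=2))
```

For procurement, the value is less the format than the obligation: an agency can write “provide and maintain this documentation” directly into a contract.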

Prior to EO 14110, Sen. Gary Peters (D-MI) introduced the AI LEAD Act (S.2293), which mandates that agencies create a position for a “Chief Artificial Intelligence Officer” who then serves on an interagency council. The council must inform a strategy ensuring that procured AI systems “safeguard the rights and safety of individuals.” After the EO was released, Sen. Warner (D-VA) and Sen. Moran (R-KS) introduced the Federal Artificial Intelligence Risk Management Act (S.3205), which directs the National Institute of Standards and Technology (NIST) to work with the Administrator of Federal Procurement Policy to provide standards “which a supplier of artificial intelligence for the agency must attest to meet before the head of an agency may procure artificial intelligence from that supplier.” The bill even includes draft contract language.

The 2024 NDAA also includes AI risk assessments and testing for AI developed internally or procured by the Department of Defense (DoD). Specifically, Section 1521 outlines the creation of a “Chief Digital and Artificial Intelligence Officer Governing Council” (made up of undersecretaries of defense) to “ensure the responsible, coordinated, and ethical employment of data” and AI, including overseeing “guidance on ethical requirements and protections for the use of artificial intelligence supported by Department funding and the reduction or mitigation of instances of unintended bias in artificial intelligence algorithms.” And Section 1544 requires the DoD to assess whether “a given artificial intelligence technology used by the Department of Defense is in compliance with a test, evaluation, verification, and validation framework.”

In short, the language in the 2024 NDAA (and EO 14110) has the potential to push the industry further to develop and implement risk assessment standards.

2024: Government Funding and Standards

Many of the ideas outlined above hinge on the US government committing enough resources to develop and implement standards and frameworks for assessments, disclosures, documentation, monitoring, and more, in a way that is multi-stakeholder and in line with its international partners. This will be hard work, and it will require resources for NIST and for every agency using and procuring AI systems.

It is one thing for Congress to authorize funds (or say that funds can be spent in a certain way); ultimately, 2024 will be the year we see what Congress actually appropriates, allowing agencies to spend money from the Federal Treasury on AI priorities.

Funding alone will not result in a US version of the EU AI Act, but it could fuel progress on standards. An overarching trend across nearly every bill on the tracker is the need for NIST to engage on standards. The standards referenced in the 2024 NDAA and in bills like the AAA and AIRIA function as prerequisites: they must be in place before meaningful regulatory mandates or audit regimes can be enforced. While NIST is working on many of these prerequisites now, more resources are needed.

The other benefit of having recognized standards and documentation processes is that they can be referenced in state laws. Say, theoretically, the next Congress is unable to pass technology legislation. States could still enforce mandates built around the standards, and as long as they stick to recognized (ideally global) standards, we could avoid an insurmountable patchwork. I'm not advocating for this approach, only pointing out that standards open doors for effective and consistent regulation without the US Congress.

Additionally, NIST relies on academic scholarship when engaging in standard setting and compiling frameworks. There are currently open questions about how best to test and monitor AI systems, and Congress has a few proposals specifically addressing the need for research in this area (such as the TEST AI Act of 2023 (H.R.3162), Ensuring Safe and Ethical AI Development Through SAFE AI Research Grants (H.R.6088), and the CREATE AI Act (H.R. 5077)). The 2024 funding bills may provide vehicles for investments in this research.

Similar to this time last year, the path forward for legislation, on any issue, is rocky. Regardless, I am grateful for the handful of lawmakers and their staff working to get key provisions into funding bills and move the needle on these important issues. I wish this community success in 2024, and as always, my team at the Institute for Data, Democracy & Politics will keep the tracker going through Session 2 of the 118th Congress.

Authors

Anna Lenhart
Anna Lenhart is a Policy Fellow at the Institute for Data, Democracy & Politics at The George Washington University. Most recently, she served as a Technology Policy Advisor in the US House of Representatives.
