Roundup of Federal Legislative Proposals that Pertain to Generative AI: Part II

Anna Lenhart / Aug 9, 2023

Anna Lenhart is a Policy Fellow at the Institute for Data Democracy and Politics at The George Washington University. Previously she served as a Technology Policy Advisor in the US House of Representatives.

Anton Grabolle / Better Images of AI / Classification Cupboard / CC-BY 4.0

While the US Congress is now in recess, the fall promises to be busy with discussions around potential regulation of artificial intelligence, including a series of hearings promised by Senate Majority Leader Chuck Schumer (D-NY) to generate new legislation.

There is already a significant body of proposed legislation pertaining to AI. In April, I crowdsourced a list of bills from the 117th Congress pertaining to generative AI. And in June, I published an analysis of those proposals, including what risks they aim to address and considerations for lawmakers moving forward.

Similar to the list circulated in April, I have compiled a list of proposals introduced to date in the 118th Congress that aim to address risks from generative AI. Because there are still 18 months left in this Congressional Session, this document will remain dynamic, and I will do my best to keep up with re-introductions, new proposals, and mark-ups/amendments. (I invite Tech Policy Press readers to reach out should I miss something).

Congress is not “starting from scratch” on AI regulation. Therefore, the 118th list includes a “relation to previous work” section under select bills that explains how the proposal builds on past legislation and agency initiatives.

So, how are things unfolding so far?

The major shift from the 117th to the 118th Congress is the rise of "existential risk" framing alongside "near-term risk" framing. Bills from the 117th focused broadly on data harms: market power, privacy violations, discrimination, the proliferation of misleading content, etc. Inevitably, those proposals covered various technologies that process data: search engines, generative AI tools, and even AR/VR platforms. Existential risks are more speculative, and while I imagine a few Members last Congress spent time thinking about where all this "data processing" could end up decades from now, tools like ChatGPT (along with the discourse surrounding them) have captured lawmakers' imaginations and ignited a sense of urgency about addressing speculative risks.

So far, bills aimed at speculative risks fall into two categories. The first is product design considerations, a category that last Congress included bills to outlaw targeted advertising and dark patterns. This Congress, the category is expanding to include bright-line restrictions such as blocking funding for autonomous nuclear weapons. This trend furthers Congress' attempts to bring more traditional product safety approaches to AI governance.

Second, a new category of bills titled Councils, Commissions, Reports, and Task Forces has emerged. These proposals are shaping up to be a hodgepodge, ranging from directing the executive branch to do what it is already doing (NIST frameworks, recommendations, etc.), to building the interagency capacity needed to regulate general-purpose ("cross-jurisdiction" in government speak) AI, to mandating that agencies research and prepare for specific speculative risks (e.g., AI developing bioweapons).

Additionally, several Members have reintroduced bills from the last Congress. While many remain unchanged, a few definitional adjustments seem to respond to generative AI's popularity. For example, Senator Michael Bennet (D-CO) added "including content primarily generated by algorithmic process" to the definition of Digital Platform in the Digital Platform Commission Act of 2023. Perhaps most notably, the Senate Commerce Committee narrowed the definition of "Covered Platform" in the Kids Online Safety Act to focus on social media sites rather than any platform that "connects to the internet."

Definitions are one of the most challenging parts of legislative drafting. On the one hand, Members want bills to be "futureproof," with definitions broad enough to capture the platforms of today and tomorrow. On the other, a bill's provisions must be reasonable for every entity captured in a definition and tailored to the harms Congress aims to address. For example, stand-alone generative AI tools do not disseminate user-generated content; tools like ChatGPT and Midjourney create content that users then post on social media platforms. If provisions in a bill are aimed at content dissemination, it may not make sense to cover stand-alone generative AI tools. However, Congress could consider writing broad requirements and giving an agency like the FTC the authority to write rules that are flexible and vary with the "size and scope" of platforms. Of course, this approach requires a court that favors agency rulemaking authority (so… not this Supreme Court). Regardless, definitions are hard and certainly an area to watch (Marissa Gerchick's termtabs is a helpful site for tracking tech bill definitions).

In summary, many Members are continuing the grind, committed to getting Americans even basic online protections. Some are looking to investigate the risks of AI and to start drawing bright lines today to prepare for a future with more powerful applications. Will any of these proposals (new or old) make it to the President's desk? Normally a few would make it into the National Defense Authorization Act (NDAA), but going into an election year the prospects for this Congress are unclear, even for the NDAA. Time will tell.
