The Current Congress Has a Stopgap Role to Play in AI Regulation
Jeremy Straub / Nov 18, 2024
Jeremy Straub is an assistant professor of computer science, a senior faculty fellow at the Challey Institute, and the associate director of the Institute for Cyber Security Education and Research at North Dakota State University.
Just before the November 5 election, the Biden administration released a memorandum that lays out a multi-agency strategy for federal AI national security policy. The memorandum, announced as “the first-ever National Security Memorandum (NSM) on Artificial Intelligence (AI),” has three key tenets: promoting AI development, using AI for national security, and developing governance.
However, unless President-elect Donald Trump decides to follow his predecessor's plan, the most notable outcome of this document may be the creation of a working group between the Department of Defense and the Office of the Director of National Intelligence. That task has a thirty-day deadline, which falls within Biden's remaining time in office. Other objectives, with deadlines pegged at ninety days and beyond, will likely be replaced by the next administration's policies and priorities, however beneficial they might be.
Unlike laws, which remain in effect unless repealed, amended, or struck down as unconstitutional by the courts, presidential executive orders and memorandums can be readily altered by any future president. This is hardly an effective way to regulate one of America's most important industries. While Congress could make decisions about the future of AI and its regulation, these are complex topics on which bipartisan agreement may be hard to reach. Instead, Congress can create a framework to foster AI growth, coordinate and deconflict state regulations, and resolve critical intellectual property issues.
To foster growth, Congress should prevent states from regulating AI models themselves. Regulating a model's speech output may run afoul of its developers' First Amendment rights (and thus may already be proscribed). However, Congress can, relying on the Commerce Clause, remove the regulation of any model involved in interstate commerce from the purview of the states. This does not take a position on whether these models should eventually be regulated; it simply prevents states from creating a patchwork of difficult-to-navigate rules. Notably, this does not (and should not) prevent states from regulating the use of AI within their jurisdictions; it just keeps them from regulating the development of the core technologies underlying numerous uses.
To prevent regulatory confusion, Congress should rein in extraterritorial AI regulation, barring states from regulating entities in other states based on their citizens' interactions with those entities (or other nexuses) unless expressly authorized. As with internet sales tax legislation, Congress has a critical regulatory role here. This process could begin with a ban on extraterritorial regulation and a framework, established through agency or Congressional action, for creating exceptions when states develop harmonized regulations.
Finally, Congress should resolve lingering controversies regarding AI copyrights and patents. Congress should expressly legislate that AI works can be copyrighted and patented, while giving the Copyright Office and the Patent and Trademark Office authority to create mechanisms to rate-limit AI submissions (such as an escalating fee scale) to prevent excessive filings and resolve other potential issues. Foreclosing arguments that AI played too great a role in the creation of a work or the development of a technology, as a way of attacking the validity of a copyright or patent, is critical to encouraging the use of AI in creating those works and technologies.
Facilitating the growth of AI, whether by preventing the regulation of models or by enabling the copyright and patenting of AI creations, may change the landscape for human inventors and creators. Humans' exclusive access to the patent and copyright systems inherently makes human creations more valuable because of the ability to protect them. AI growth may also have a negative impact on jobs, at least in the short term. Delaying these reforms might create a short-term benefit for human employment in certain areas, but it would only slow the inevitable change that AI is bringing to society while risking national competitiveness and delaying the ability of others to benefit from these technologies.
The introduction of numerous past technologies has raised concerns about their impact on human engagement. While society has changed over time, these technologies have had a largely beneficial effect: they have enhanced human health and education, reduced the time required to attain sustenance, freed people to focus on other activities, and facilitated global communications and entertainment.
Simple enabling regulations in these three areas can help foster AI development in the US by reducing the potential for conflicting state regulations, preventing states from impairing core technology development through their rules, and encouraging the use of AI for technology development and other creative activities. While many other areas of regulation (and intentional deregulation) may be desirable to some, these basic regulations are a starting point that should be crafted to be non-objectionable. They may be all that is possible in the immediate future.