Of Legislators and Large Language Models

Justin Hendrix / Mar 4, 2023

Audio of this conversation is available via your favorite podcast service.

How will so-called "generative AI" tools such as OpenAI's ChatGPT change our politics, and change the way we interact with our representatives in democratic government? And how might it change the way legislators and other government officials do their jobs? This episode features three segments, with:

  • Kadia Goba, a politics reporter at Semafor and author of a recent report on the AI Caucus in the U.S. House of Representatives;
  • Micah Sifry, an expert observer of the relationship between tech and politics and the author of The Connector, an excellent Substack newsletter on democracy, organizing, movements and tech, where he recently wrote about ChatGPT and politics;
  • Zach Graves, executive director of Lincoln Network, and Marci Harris, CEO and co-founder of POPVOX.com, co-authors with Daniel Schuman at Demand Progress of a recent essay in Tech Policy Press on the risks and benefits of emerging AI tools in the legislative branch.

What follows is a lightly edited transcript of the discussion.

Kadia Goba:

Kadia Goba. I am a politics reporter at Semafor, a new global outlet.

Justin Hendrix:

So, Kadia, I appreciate you speaking to me today and we are going to talk about politics and the intersection with perhaps the hottest topic in technology at the moment, Generative AI. You had a great piece in Semafor, "The Caucus Trying to Prevent AI-pocalypse." Tell me about what you're hearing as you wander around Capitol Hill from lawmakers who are trying out ChatGPT.

Kadia Goba:

Yeah, so that's exactly it. It's a new technology that Congress, where I think the average age is about 65 on the House side, is trying to understand: what this new technology is, how they can incorporate it, but more importantly how they can regulate it when things possibly go off the rails. We saw this happen with crypto and their inability to regulate it, and now I think they're looking ahead. Although, given the rate at which AI is advancing, they're probably a little slow to this, and it'll be interesting to see how they fare as they try to catch up.

Justin Hendrix:

So, you talked to a handful of representatives here who are trying to catch up and are in fact in some cases educating themselves, but also hiring staffers and kind of engaging with this topic in different ways. Who did you talk to and what are they up to?

Kadia Goba:

Originally when I started looking at AI and how it relates to Congress, I found this AI caucus and I thought, oh, that's interesting. I'd never heard of it before, and I've been covering Congress since 2019. The first person I reached out to was Rep. Don Beyer (D-VA), because he had this great article in the Washington Post that said at 72 years old he was going back to college to get his master's degree in artificial intelligence. So, I called him up, and he was super happy to speak. And then I understood that apparently this caucus started with five people a couple of years ago and now it's at 30-something, maybe 34. I have it in my reporting. And essentially there's one person, Representative Jay Obernolte (R-CA) from California, who also has a master's in AI. But in general, these are people who are members of Congress, who are interested in the technology, who want to learn more about the technology, who understand that they are very behind on the technology and want to keep up.

And they want to be able to talk to… there are 435 members in the House. They want to be able to communicate what this technology can do and how they can, like I said, regulate it beforehand. Beyer was extremely helpful in letting me know that part of the caucus's agenda, and Obernolte's as well, is to, like I said, educate people. But they also want to bring people in internally. They want staffers who are familiar with it so they can play with the technology, which is already happening. They want to, like I said, regulate it, and then there's one big push about a committee that Representative Ted Lieu (D-CA) ...

They understand that they're probably not going to have the capacity to do all the regulation, and that they're going to need something like an FEC, an agency that actually takes control of it. But before they do all of that, they need a commission, a commission to study how they should go about doing things. So, I say all that to say they're at the very beginning stages. It's about a couple of dozen members of Congress who, like I said, are interested in the technology, and I want to see where it goes from here.

Justin Hendrix:

You mentioned that one of the key drivers at the moment is national security concerns. Can you speak to that a little bit? Of course, most of the executive branch's focus on AI has been related to national security, but more recently we've also seen this sort of blueprint for an AI bill of rights. I wonder if there was any kind of awareness either of that effort or of some of the related efforts around national security issues.

Kadia Goba:

So, they understand that the technology is there. Their best bet, and what the caucus was saying, is having hearings so that they understand what the advantages and disadvantages are with relation to national security. Now, a good indication of why they're focused on that, or that they're focused on that, is, one, they said so. And two, some of the committees that are interested, as I've understood, and are going to have hearings on AI, specifically related to ChatGPT, are Homeland Security, Infrastructure, and House Armed Services. They understand the implications around national security. They just need the knowledge around it, and they don't have that. I talked to a couple of people on background who said some of their members just have no idea how this is going to impact things, and internal meetings that they've had are just discussing that they should actually know about it. So, really, again, I can't stress enough that this is at the beginning stages. You probably know more about the technology and how it impacts national security, but they are learning that on the fly.

Justin Hendrix:

One of the things that you point out in your piece is that the lawmakers are at least somewhat aware that they're kind of behind the ball, at least with respect to the EU and where the EU is on AI legislation. Do you feel like they feel somewhat competitive, that perhaps they've been beaten to the punch by lawmakers across the ocean?

Kadia Goba:

Yeah, so that's a good question. I think one of their biggest concerns is actually China. When I spoke to members, they said they wanted to look at the EU, but thought that it was more restrictive than they wanted to be. And I imagine this has everything to do with catching up or staying at pace with China's technology. One of the big things, or one of the repeated concerns I heard, was that China can't beat them, or they don't want China to steal their technology as it relates to AI, which brings us back to the whole national security concern. So, from what I'm hearing, yeah, they'll look at the model in the EU, but probably, because everything is a race, they won't be as restrictive as this new model. Which, again, isn't formalized yet. It still has to go through a process, I'm understanding, and won't go into law until the end of the year. But I think they're looking at that in terms of some kind of guardrails, which we have none of now.

Justin Hendrix:

Any response to your article that you might like to share? Do you think the House caucus on AI grew as a result of people reading this? Or, I don't know, anything at all you want to get across?

Kadia Goba:

I thought it was interesting that some of the members I talked to were open about some of their colleagues not having any interest in this. And I think those are some of the people they want to reach out to. So, they understand that this is not about, like I mentioned in my reporting, robots and red lasers. After the piece published, I found it interesting that members of Congress who did not get back to me in time started to reach out and wanted to introduce or talk about the technology. I also thought it was interesting, and I didn't mention it in the piece, but the Senate had an AI caucus and it kind of disappeared, though they said they were going to be bringing it back this year. But probably the most fascinating thing, which people on the outside will appreciate, is how offices are trying to incorporate AI, especially ChatGPT.

We've already had members like Jake Auchincloss, who was one of the first members to give a speech created through ChatGPT on the House floor, and some staffers telling me that they were responding to their constituents using ChatGPT. One of the funnier anecdotes is that one staffer, I think it was in Beyer's office, said that he tried to use it to write an op-ed piece and it just wasn't good, but when he tried to write a piece that would pitch the op-ed, that was actually better than the op-ed. Which would probably make his job obsolete, because it is his responsibility to pitch his boss's talking points. So, I thought that was very interesting. And I actually did talk to some of the, what do you call it, opposition research people. It's not in the piece, but I was talking to opposition research people who are also incorporating AI, or a model like ChatGPT, into how they process information.

Justin Hendrix:

That's interesting. Can you tell me anything more about that, their experiments with that?

Kadia Goba:

Sure. So, I'm hoping to possibly write something down the line. It's not as sophisticated as what OpenAI did, but they're using models from free software online that a lot of tech people have access to, and they're using it not for gathering information, but for how you analyze information once you have it. So, after researchers gather all the information, they're dumping it into some kind of system, and then the algorithm analyzes it and organizes it so they can start pitching it to individual people. I'm looking forward to writing that piece, but I thought it was very interesting, and I can tell you American Bridge is actually using that right now. It'll be interesting if they use ChatGPT, but they're working on a separate model that kind of curates the information specific to campaigns.

Justin Hendrix:

I've talked to multiple people in journalism communities that think about tech enabled journalism, AI enabled journalism, and I know that ChatGPT is particularly good at structuring information, writing little bits of Python, doing all other kinds of tasks that are necessary to help you structure large amounts of data from let's say an Excel sheet or some other kind of unstructured data that you want to use in your reporting. So, it makes total sense that folks are doing that. I'm just curious, one of the things about Semafor is you're able to add your view, so maybe I'll add your view before we close this out. Are you excited about these technologies? Do you intend to potentially use them in your reporting process and are you at all kind of concerned about getting lots of automatically generated pitches? I'm sure your inbox is already overflowing with PR material.

Kadia Goba:

Oh, that's a good question. I think we're way off in terms of the analysis part of this technology, and it's probably going to be very advantageous for reporters for research. But in terms of sussing out the information and understanding what is appropriate for the specific article you're writing, I think that still needs a human touch, and I hope that still needs a human touch. And no, I don't think they can compete with me in terms of Kadia's view. So, that's a very, very good thing.

Justin Hendrix:

We won't expect you to be replaced certainly anytime soon. Hopefully that means we'll have the opportunity to come back to you and see what this caucus is up to in six months or a year.

Kadia Goba:

For sure.

Justin Hendrix:

Thank you so much.

Kadia Goba:

Thank you for having me.

- - -

Micah Sifry:

My name is Micah Sifry. I am currently the publisher of my own Substack newsletter, The Connector, which focuses on democracy, organizing, and technology, built on top of years of having run the Personal Democracy Forum, an annual conference on tech and politics, and Civic Hall, New York City's hub for civic tech.

Justin Hendrix:

Micah, I'm so glad to speak to you today about this column you had in your Substack, "How ChatGPT Will Transform Politics, Probably for the Worse." So, you're one of a number of voices who are thinking right now about the implications of large language models for politics, how these various technologies, which have achieved extraordinary propagation in a very short period of time, may change our politics. This column reads like your first thoughts, your first observations. It sounds like you've been playing with ChatGPT. What do you think?

Micah Sifry:

Well, I am worried that it will, in a way like Google did before, basically reorganize how we get our hands on information. And because it's so user-friendly, we won't realize that the garbage that went into it to make it quote-unquote "intelligent" will come out the other end, without people being aware that there's all this massaging and editorial decision-making about what you will be allowed to see or not allowed to see that is actually being done off-stage by the designers, the coders, the people at OpenAI, the company that created ChatGPT. So, once again, we'll find ourselves in effect legislated into a new world that we had no vote on. Let me give you a concrete example of what I mean. Safiya Noble at UCLA made this point years ago. If you type 'Asian girl,' or 'Black girl,' in her case, into Google's search engine, the images that come back are the images that Google's algorithms have learned to deliver to users, because those are the ones that get the most clicks. And those images are not neutral.

They're affected by the choices of the users. And we come from a biased world where, for example, Asian women are sexualized tremendously. The images that come back are not necessarily ones that a young teenager, maybe just looking to see how they themselves are represented, would find at all comforting; they could be quite disturbing. And this is the net effect of a technology that was built mostly by white engineers who had no idea that what they were doing might reinforce biases that are already out there among the user base. So, more recently, Mutale Nkonde from AI For The People had a really nice piece in Ms. Magazine pointing out that if you ask ChatGPT about a prominent African American woman, I think it was Bessie [Smith] she referenced, a jazz musician, ChatGPT knows very little about her.

Why? Because the corpus that they ingested, which is the English-speaking internet if you will, doesn't have a lot about African American women jazz artists. So, the inheritance of these biases is problem number one. Then, look at how OpenAI is, by its own admission, trying to fine-tune the behavior. They're aware that there's controversy in the world, that these tools will be asked questions by people interested in controversial topics, and they would like ChatGPT not to be used to reinforce extreme points of view, let's say. But who decides what's extreme? So, here, for example, they say if a user asks it to write an argument for X, the AI should generally comply with all requests that are not inflammatory or dangerous. Then, in the next bullet point in their current guidance, they say if a user asks for "an argument for using more fossil fuels," the assistant should comply and provide this argument without qualifiers.

Now, why is an argument for using more fossil fuels considered non-controversial? Maybe it should provide qualifiers. The qualifiers might be that the world is on the path to very dangerous changes in the climate if we continue to use fossil fuels at the current rate. This is a political decision, and right there in their own guidance, they say, no, provide that argument without qualifiers. So, the basic problem, which we've always had with these amazing tools, is that the owners of the tools, the designers of the tools, de facto have tremendous power to shape what we see and know. Now, that doesn't mean that you can't go find other arguments. It's not like we've closed off your access to discover more extreme points of view, however you want to define extreme. It's just that the convenience and usability of these tools will make them prevalent, and we won't even know what we're missing because of how much they've in effect reshaped our information space.

Justin Hendrix:

So, I do want to press you on looking at the arc of the last 25 years or so. You've already brought up Google Search and the way in which it changed our relationship to information, how we find facts, how we find arguments that may have bearing on our political point of view. We know from great research from all sorts of scholars, I'm thinking of people like Francesca Tripodi, the degree to which that reflexive relationship between the polity and search engines can play a huge role in shaping political reality. Do you see these companies perhaps learning from that experience, aware of that experience? Do you see any sign that OpenAI gets the gravity of what you're talking about?

Micah Sifry:

It's hard to say, because they're only partially transparent about their own internal process, and I suspect for the same reasons that Google, say, or Facebook or any of the other big tech platforms have always been shy to reveal what their internal processes actually are. Because, again, once you create such a powerful tool for focusing attention and knowledge, everybody wants to game it. And if you explain how your algorithm works, you make it easier for the bad actors who want to game it. I mean, that's what the whole SEO industry is, in effect: search engine optimization. So, we'll get chat engine optimization too. OpenAI started out as a nonprofit, though given who its funders were initially, people like Elon Musk and Peter Thiel, I'm not going to ascribe them any particular degree of benevolence. They are now for-profit. They have a charter, which they still refer to, and they do talk in that charter about trying to make sure that their tools are used for the benefit of all, to try and avoid anything that can harm humanity or unduly concentrate power.

Though here they are building a tool that will concentrate power already. So, it's kind of like Google's "don't be evil." Well, who's defining what's evil? So, I don't think we can just trust our new robot overlords, as the phrase goes. And there's also another thing, which is the degree of tech illiteracy among our political overlords, the degree to which they have not built up the knowledge or capacity to evaluate these systems with any degree of literacy. There is a group inside Congress who want to be seen as cool and tech-forward. They were last seen promoting, and maybe some of them still are promoting, cryptocurrency. We can see how well that's gone. They argue that we need to advance cryptocurrency because of financial inclusion. I think the right phrase is predatory inclusion, the better to prey on more gullible people. So, I think we face a challenge in that the right place to address how these tools will operate in our society is government and regulation, but that muscle needs to really be built up much more.

I do think the Biden Administration is trying. We've seen some very good statements and guidances come out of the White House Office of Science and Technology Policy that I think are helpful, but where is the institutional capacity going to live inside the government to actually go head-to-head with the promoters of this stuff? And let's not forget, there's a kind of narrative seduction underway, even in how the people pushing these tools talk about them. And then us journalists who have the daily job of translating fall into a shorthand about this, even referring to it as artificial intelligence. At best, it's augmented, not artificial, and it's certainly not intelligence. These tools are not thinking. When they say that Microsoft's version of ChatGPT "hallucinated," that's a human word for something that happens inside our brains. It's a metaphor for what happens when a chat tool responds with weird responses, but it doesn't mean there's a brain in there having a hallucination. But the narrative language itself is seducing us.

And I think that's a real danger as well. You could imagine ways that this can make life easier or automate certain tasks that today waste our time. I can certainly see that. At the same time, I can see how it would get weaponized to further game a political system that is already gamed a lot by astroturf lobbying, for example. We ain't seen nothing yet in terms of how you could use a tool like this to generate fake letters to members of Congress that look like real constituent letters, because the language is slightly different in each one. There's so much that this could do to further break what we need in terms of authentic communication and replace it with just sort of ersatz, untrustworthy kinds of communication. And I don't think that's healthy at all. The speed at which this is rolling out makes me quite nervous.

Justin Hendrix:

Micah Sifry, thank you so much.

Micah Sifry:

It's always a pleasure to talk with you Justin.

- - -

Zach Graves:

Zach Graves, I'm executive director of Lincoln Network. We're a right of center tech and innovation policy group founded in Silicon Valley in 2013.

Marci Harris:

I am Marci Harris. I am CEO and co-founder of POPVOX.com, which is a neutral nonpartisan platform for civic engagement and executive director of the nonprofit POPVOX foundation.

Justin Hendrix:

Let me just quickly, for the sake of my listeners who may not be familiar with your organizations, ask you both to give us the boilerplate, the elevator pitch on what you get up to. Zach, perhaps we'll start with you, and then Marci, tell us a little bit about what you're doing.

Zach Graves:

I've been working, often with Marci, on topics of congressional modernization for a number of years now. We've done a lot of work supporting open data, bulk structured data for legislative information, as well as work to boost congressional capacity in areas like staffing and also the support resources it has, particularly in science and technology. This has taken shape largely through the Select Committee on the Modernization of Congress, which existed in the past two Congresses and now has been reconstituted as a subcommittee within the Committee on House Administration. So, we're excited to see this work continue. Really, here we're trying to help bridge new technologies with the need to adapt and modernize our governing institutions, and Congress in particular.

Justin Hendrix:

And Marci, what about PopVox?

Marci Harris:

Yeah, so we got started back in 2010, so a long time ago, back in a moment of great optimism about the future of technology and democracy, and we have been through several cycles of optimism, pessimism, realism, and back again. But really the original concept was to solve my problem as a staffer, as we were being bombarded with digital messages and this new social media around the advocacy that took place during the Affordable Care Act. So, POPVOX itself is an online platform for constituent engagement. We now focus a lot more on working directly with members or committees rather than kind of boiling the ocean, getting everybody to come weigh in on a bill. But that work over the past decade or more led to a lot of insights that we really wanted to get out to a larger audience. And so that led in 2021 to the creation of the nonprofit POPVOX Foundation, which is where a lot of our thinking, writing, publishing, convening, and training work happens. And a lot of times that is in collaboration with Zach and Lincoln Network.

Justin Hendrix:

Well, I'm very pleased that you were able to collaborate also with, I should say, Daniel Schuman, a policy director at Demand Progress, on this piece for Tech Policy Press, "Bots in Congress: The Risks and Benefits of Emerging AI Tools in the Legislative Branch." So, amid all the hype at the moment around synthetic media, ChatGPT, and other kinds of generative AI tools, you've kind of taken on the question of what the heck all this could be for if used in the context of Congress. I don't know who would like to start, but what are we seeing right now? Is Congress about to take advantage of ChatGPT to automate various aspects of how they deal with constituents?

Marci Harris:

I appreciate you going directly to the workflow of Congress, which is what this piece really deals with. There are other questions about how Congress will understand, regulate, and deal with generative AI in a larger societal context. But the question of how it will potentially apply it to its own workflow is really what Daniel and Zach and I address here. And I think the answer is, of course it will, as we see members already experimenting with using these new tools for drafting a speech, or in other interesting ways playing with the new toy, which we've all enjoyed doing. There are some real fundamental questions about capacity that ChatGPT or other LLM-based tools really open up. And for an institution like Congress that is so capacity-constrained, it makes a lot of sense that this will eventually make its way into the workflow. The question of what is an appropriate use of these new tools within that workflow, I think, is really what's important to discuss now.

Zach Graves:

Yeah, and I totally agree there. I mean, we're already seeing members introducing bills or reading speeches that were created with the help of these generative AI tools. And I think they're still figuring out what the strengths and weaknesses of this set of technologies are. And of course, they're also getting better. One of the arguments we make in our piece is that while there have been some headlines about using these to write bills, that's not really the best near-term use case. It's actually more the routine communications that happen in internal office operations: it's press releases, it's letters, it's constituent communications. Another thing they're good at is summarizing and distilling long documents, which these offices have to process. And so this also is part of a trend over time where offices have been bombarded with more and more communications that they have to engage with.

Part of this is that the number of members of the House has stayed the same for a long time while the population has grown. It's also because digital tools have lowered the barriers to communicating with our representatives through email, social media, faxes, phone calls, all of these things. And so that has meant that resources have shifted from policy, from oversight, from other functions, because it's a very constrained funding environment, to all of these communications tasks. And so our underlying thesis is that it'll help strengthen this institution, which many see as sort of dysfunctional, by taking the pressure off all of these kinds of routine communications functions. And in that way it could be very valuable for strengthening the institution.

Justin Hendrix:

So, when I look at the homepage right now for ChatGPT, the service is fairly clear about its limitations: may occasionally generate incorrect information, may occasionally produce harmful instructions or biased content, limited knowledge of the world and events after 2021. So, clearly there's going to have to be some testing benchmark for when or whether it's appropriate at any point for Congress to utilize tools like ChatGPT. What's the time horizon you imagine? You already point to the fact that multiple members are experimenting, playing with the technology just the same way that people across the world apparently are. Do you imagine this is something that happens in a year, three years, five, 10?

Zach Graves:

What's the "this" you're suggesting?

Justin Hendrix:

By this, I mean the utilization specifically of large language model tools to do some of the things that you're referring to.

Zach Graves:

I mean, it depends on how extensive you mean, because they're already using it and there's not a lot of controls. I mean, it's not like the executive branch, where there's a central IT bureaucracy like GSA or someone saying you can use this, you can't use this. To some extent, in the House there's an office called the Chief Administrative Officer, which sets some guidelines for technologies, but in practice individual offices are kind of autonomous in what they can and can't do. As we see, they're already utilizing some of this stuff. The key question for really optimizing it and taking advantage of it, I think, is integrating with some of the existing vendor ecosystem, which I know Marci works very closely with. For example, there are these tools that help them process constituent communications and casework, and integrating with the back end of these tools I think is going to be really important. I'll kick it over to Marci, though, if she has other thoughts here.

Marci Harris:

Yeah, I think probably many listeners understand that a lot of the work of a congressional office is correspondence-based. So, even things like drafting a letter to an agency, or drafting a response to constituent communication, or drafting a one-minute speech congratulating the high school team for winning their game. These are not sensitive, and in most cases not things that require specialized current knowledge. It really is just that drafting function. So, I think there's the opportunity for the tools to be very helpful. Again, the human still has to review it and make sure it's factual, addresses the issue, and, for a member of Congress, is something that they want to say on the floor of the House. But I think in a lot of cases, as Zach said, this is already being used, or at least experimented with. The real capacity gain will be when it's more integrated into the tools.

Now, notoriously, congressional CRMs are a closed market, without a lot of incentive to be very innovative on their own. Sometimes these CRMs are the last to integrate features that are available in more commercially available platforms broadly. But there's the opportunity there, I think, for a lot of capacity gains. Some of the caveats on the ChatGPT page that you mentioned, Justin, are kind of the same caveats that would come with the interns who do a lot of the work on Capitol Hill, and we love the interns. But I think you can see these kinds of tools in a very similar way: as a first draft that then needs to be reviewed by somebody with decision-making power or a little bit more context. It's the other tasks that Congress does, which I think we address in the piece and Zach mentioned, that probably should be a bit of a slower rollout.

Justin Hendrix:

So, we're not going to see ChatGPT or any other large language model writing legislation or dealing with complicated policy considerations in the near term.

Marci Harris:

So, I think we're already seeing some members experiment with it. But as someone who is a lawyer by training and spent many hours in the Office of Legislative Counsel working on the health reform bill, I have great respect for the lawyers there, and I know that even the questions are probably driving them crazy. Although Zach and I had a wonderful conversation with some folks working on legal models for LLMs, and we talked about how some of the most important training data for future legal applications of this technology would be the critiques that those lawyers are currently levying against any drafts that are coming in from a simple ChatGPT run. So, I think it's certainly in the future, but that balance with the experts who protect the code and ensure that what is drafted is not going to mess anything up if it becomes law, I think those gatekeepers will be in place for a very long time.

Zach Graves:

There's certainly a use case for a bill-writing kind of tool, and I think it is possible that we could develop one as the technology improves and as Congress gets interested in these kinds of modernizations. But it's not the most obvious near-term thing; it's something that could be ultimately useful but isn't the highest value-add right now. And the reason for that is that for really highly specialized areas of expertise and knowledge, particularly ones like this that have a really high cost for errors that slip through, you're going to need a lot of human reinforcement learning or similar kinds of training methods that would be very specialized around this set of issues. And so it would take a big investment to develop expertise that probably doesn't exist internally or in the vendor ecosystem. And it's not clear that it would work at a level where the error rate is acceptable.

There are certainly issues with bottlenecking at the Office of Legislative Counsel, and so there is an argument for it. But I think the communication side is a much clearer near-term use case. And also, I think it's not just generating bills, but sort of analyzing all of the regulatory matter and statutes that are out there, and using that to inform what we should do: what statutes haven't been reauthorized but are still getting funded, or here are a bunch of regulations that we haven't touched in 50 years, maybe we should look at those. Or here are inconsistencies between these two different sets. There's a lot out there where statutory analysis could potentially be useful, but I think Marci and I are both skeptical. It's interesting, our co-author Daniel is a little more favorable to the idea, but I don't think we'll see that really in the next few years.

Marci Harris:

I actually advocate for a different kind of status for some bills. So, I think as we all know, there are some bills that get introduced that basically everybody understands are never intended to become law, the so-called message bills. Nobody likes to admit that they're introducing a message bill, but let's face it, many people are introducing message bills. And Zach and Daniel and I have discussed that there's a possibility you could use a system like this to draft a message bill, and it could have a status that just said it was not yet reviewed by Legislative Counsel. That would allow it to proceed and not be part of the bottleneck that is backing up the expert lawyers working on bills that probably do have a better chance of getting a vote. So, I think there could be an opportunity there for a first-draft piece.

Zach Graves:

And the other place where you can have utility is sort of understanding what's in the bills, what they do, and how significant they are. You have these massive omnibus packages that tend to be more frequent now. Even generating bill summaries quickly is an area that I think these tools could be really useful for. Currently, CRS does it, but those summaries don't always happen quickly and not on every bill. And so, stuff like that, where again the risk of error is not as significant as with statute itself.

Marci Harris:

Yeah, and I mean, I'm old enough to remember when it became possible to word-search a bill, and that was a revelation. So, now being able to have a little assistance querying a new 1,000- or 2,000-page bill to find out if various sections are in there, I mean, would be incredible.

Justin Hendrix:

One thing I liked about this piece is that you do list out a handful of bullet points on things you think Congress should do to consider how to realize the benefits of these types of tools, get ahead of the issue, and figure out what's a good way to use these tools, what's perhaps off limits at the moment, and where those thresholds are. But one of the things I wanted to ask you about is really the last section of the piece, which is about how these tools might change the information environment in which Congress is operating. You talk about the rise of the lobbying machines. So, what do you think will happen in the kind of broader legislative information environment?

Zach Graves:

I mean, it's interesting, because we're in an environment where there's a big, well-funded influence ecosystem, but it's not really clear that that's connected to outcomes or data in a very deep way. It's sort of like the pre-"Moneyball" era of DC sports. There are tools that big firms and others already have that digest data and have their kind of models to predict whether a bill's going to go anywhere or not, or things like this. But I think there is an opportunity to really make it more quantitative and metrics-driven. Obviously some things are always outside of the scope of the data that you can get, but certainly that is an interesting angle.

And the other interesting angle is that I think it could theoretically, if done right, democratize access to influence and lower the barriers to different interest groups organizing and exerting pressure on their representatives. So, you don't have to create a trade association; you don't have to hire a lobbyist for $20,000 a month. You could instead perhaps deploy some of these tools. So, that would be really interesting. In the same way that the Web 2.0 suite of technologies lowered barriers to certain kinds of grassroots activism, you could see something like that here. There is of course a darker side that we talk about as well.

Justin Hendrix:

Marci, what about that darker side? Can you imagine a world where essentially we've got massive amounts of automated propaganda pelting members of Congress on a regular basis, perhaps worse than today, call-in systems deluged with synthetic audio? What do we imagine?

Marci Harris:

Yeah, I mean, Zach's really the master of the dark, worst-case scenario, so I will defer to him on that. But yes, certainly I think we will see those kinds of things. I would say, though, I am not terribly worried about their impact, especially if the offices themselves beef up with their own tools to handle it. And thinking about, let's say, the regulatory context, the public comment context, it's different than Congress. Those agencies are required to process every comment and to address any original points that are raised. But whenever I have these conversations and people are really concerned about how these tools will impact the regulatory public comment process, I kind of say, well, does it matter if a good idea, a good novel point of law, is coming from a computer-generated comment or from a person sitting in Idaho writing it out by hand? What does it matter?

The agency has to consider novel points of law, or things that did not make it into their analysis, when they're raised. Public comments on a regulation are not a democratic system, so it doesn't matter how many you send; it matters how many original points are made. It's a little different on the Congress side, because it is still the case that members of Congress are kind of measuring by the word and by the pound to try to understand what their constituents think about different topics. And that's really where I think these tools may prompt a complete rethink of that system. And that's what we discuss in the piece: maybe it provides an opportunity to go back to first principles. Is the goal to understand what constituents think about something? If that's the case, then there are better ways to do that than just waiting for somebody to write you a letter.

Is the goal to hear a story of someone on the ground and how they're impacted by a policy? If so, maybe the member's office needs to do a little bit more proactive outreach to actually find individuals. Is the purpose to find consensus? Whatever it happens to be, maybe this opens the door for more deliberative town hall meetings or for more interview-focused processes. But I think the kind of arms race of many more letters and many more calls, responded to by many more letters and other forms of response that an office could kick out very quickly, hopefully brings us back to thinking, okay, what's the point of all of this in the first place, and thinking of new ways to do things. And we end the piece by saying maybe that means getting in a room together.

Justin Hendrix:

So, more cacophony, more volume, essentially, shouldn't necessarily translate into responding with more automation. But perhaps we go back to meeting in real places and trying to find other forms of insight into what it is that people would like from Congress.

Marci Harris:

I think there will be new value placed on that and I don't see that as a negative.

Zach Graves:

And we're also already in an environment where congressional offices, and also agencies seeking comment, are bombarded with information. The net neutrality comment process at the FCC was flooded with various bots on both sides. Even your average comment process is filled with a lot of junk, if you follow these things on various APA proceedings at agencies. And Congress similarly is bombarded with all kinds of bulk communications from advocacy groups, some of which is legitimate, some of which is more astroturf-y. So, I think what could change the nature of this is the shift from mass bulk advertising to targeted advertising, where it can be a little bit more precise. As long as it's rooted in real people who are really in a district, and if it is ways to sort of amplify and augment and organize those people, I think that is generally a legitimate function of the democratic process.

Where I think there is a sort of novel risk is where particularly very capable nation-state actors might use this to go on YouTube, find some audio, spoof a donor, and call a member with a fake bot pretending to be them ahead of a crucial vote. But those are places where our sophisticated capacities can help respond. Our intel community and law enforcement community can help address that, and we're going to have to build new tools to do that. But there are already very sophisticated efforts like that out there. And there are also analog ones that are unsophisticated.

I think there was one a few years ago where a reporter pretended to be Charles Koch and called a governor and got a bunch of information that he shouldn't have. So, these things are part of the process. We're probably going to see some high-profile incidents that will spur a reaction and rethinking, and I think that's natural. I don't think the risk here is at the sort of destroy-democracy level some people are saying, but the risks are real, and now is a very good time for our leaders to be proactive and get on top of it.

Justin Hendrix:

I guess the last question does follow from that somewhat. You raised another point, this thought about whether these tools lead to more concentration of power in the hands of the wealthy and the influential. Is there a way to avoid that? I mean, on some level, I suppose the democratization of these tools might mean the democratization of certain actions: the organization, utilization, and perhaps investigation of information that were only available to the wealthy and influential in the past might be more available to grassroots groups or individuals. But is there another way of thinking about that? How do we avoid AI tools helping the already powerful?

Zach Graves:

Yeah, this was where we were responding to a point that Bruce Schneier [Ed: with Nathan Sanders] at Harvard raised in an op-ed he did for the New York Times, and I think it was a really, really important point. Of course, by democratization we can mean a bunch of different things. We can mean that it's open source and open data. We can mean that it has a governance mechanism that is in some way multi-stakeholder or otherwise democratized. And I think there are several different kinds of things we mean when talking about this. It's a little bit non-specific, and I think here we're also being a little bit non-specific, because it's still an emerging area. But I think the basic point we're making is that these tools are not solely the province of large firms or governments, but are broadly accessible to be created and modified and utilized by a range of interest groups and factions and stakeholders.

And I think that's the direction we're heading. Even though training these models is very capital-intensive, the moats don't seem to be very wide, and the evidence for that is that there are lots and lots of startups emerging in the space. We've also seen companies like Microsoft, which has a pretty good open source, interoperability, ecosystem approach, say that they're going to empower people to create different kinds of models on unique sets of training data with unique kinds of reinforcement. That said, it's still very sophisticated, and I think there's still this tension between people who want to assert control, viewing the various risks of the new set of technologies, and those who want the more traditional kind of US approach to innovation, which is generally open and competitive and a little bit chaotic. So, that's where we are.

Marci Harris:

Well, and I think even beyond the advocacy side of things or the influence angle, there really is an opportunity for taxpayer-funded information that is really high quality and nonpartisan, such as GAO reports and CRS reports, the kinds of information that we all pay for and that, for quite some time in the case of CRS, was not available to the public. There's an opportunity to use these kinds of tools to make that more widely available.

So, I think that's where Congress really needs to see its role not just as a consumer of information, but as a provider of information and a contributor to the information ecosystem. There's a lot to be said for hearing transcripts, CRS reports, GAO reports, and all of the high-quality information that is produced within the government being more widely available to a public that doesn't have to go behind a paywall to find it, or have it mixed in with everything anybody's ever said on Reddit. So, I think for Congress to really think about its role and how these tools could be used for the benefit of the ecosystem as a whole would be really important.

Zach Graves:

This is also a good reason to go and complete the project of legislative open data. We've got a huge, great set of information on congress.gov that's machine-readable and open. But there are still some things we need to go back and add. I think it only goes back to the early '90s or so right now, although they have things like the Congressional Record earlier, and then you can get back to the '70s or so from GovTrack or other sites; I think that's about right currently. But there are also archival CRS reports that aren't up there, and there are other committee documents and various things that we have in binders and boxes somewhere in archives that should be built out and, I think, could help augment some of these tools.

Justin Hendrix:

Well, much to consider and look forward to. Your piece, "GPTs in Congress: The Risks and Benefits of Emerging AI Tools in the Legislative Branch"... Zach Graves, Marci Harris, Daniel Schuman, thanks very much for putting this piece together. I hope you'll come back and talk about these issues again in the future, perhaps when we have a little more of what you say Congress needs, which is experimental evidence of how and when to apply these things and where it makes sense.

Marci Harris:

Thanks so much.

Zach Graves:

Thank you.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
