Reading the "Shadow Report" on US AI Policy

Justin Hendrix / May 26, 2024

Audio of this conversation is available via your favorite podcast service.

As we documented in Tech Policy Press, when the US Senate AI working group released its policy roadmap on May 15th, many outside organizations were underwhelmed at best, and some were fiercely critical of the closed-door process that produced it. In the days after the report was announced, a group of nonprofit and academic organizations put out what they call a "shadow report" to the US Senate AI policy roadmap. The shadow report is intended as a complement or counterpoint to the Senate working group's product. It collects a bibliography of research and proposals from civil society and academia and addresses several issues the senators largely passed over. To learn more, Justin Hendrix spoke to some of the report's authors, including:

  • Sarah West, co-executive director of the AI Now Institute
  • Nasser Eledroos, policy lead on technology at Color of Change
  • Paromita Shah, executive director of Just Futures Law
  • Cynthia Conti-Cook, director of research and policy at the Surveillance Resistance Lab

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Last week, Senator Chuck Schumer released the much-awaited Senate roadmap on AI. This is the result of two briefings for senators, a classified briefing, nine AI insight forums, and I assume some work by staffers. You all were not impressed. Sarah, what did you do next?

Sarah West:

We were definitely, I think, not impressed with where the roadmap landed, nor with the entire process that led up to it, which hinged on the hosting of a set of closed-door forums at which industry players had a really dominant presence. And I think that industry-driven focus is really reflected in the roadmap, as well as the lack of legislative movement that accompanied it. So, what did we do next? We turned to a whole lot of other organizations. We were also chatting with each other about our disappointment with the process, particularly given the significant body of work that existed before the forums even kicked off. And so, we decided to pull it all together into a shadow report to offer up to Congress, saying, "This is the evidence base that you could have run with from day one."

Nasser Eledroos:

I think the first thing to note, importantly, jumping off of what Sarah just mentioned, is that while Color of Change was very fortunate to be in the room for one of these insight forums, a lot of the organizations who drafted and signed onto this shadow report weren't even included to begin with. There was a chorus of voices representing a diverse set of people across the country, people who have been impacted directly or indirectly by AI technologies, who weren't even given the opportunity to have their voices heard in the Senate chambers.

Watching these nine forums unfold really brought us to an understanding: okay, when this report comes out, it's not going to be enough. It is, unfortunately, going to be industry-focused, and we have to be ready to counter that with the body of evidence, which we have laid out very clearly in the report, and a very clear narrative showing that human rights, racial justice, and the climate are all being pushed to the wayside, while we are seeing an increase in funding and, in fact, a directive for the companies that have met with Senate staffers and Senate leaders to engage even further in the development and proliferation of AI tools.

Justin Hendrix:

We're talking about billions of dollars, at least supported by this document, if not directed. Certainly there would have to be some legislative act for a lot of that to ultimately be spent, but it is at least a strong indication that industry, defense, and security interests are prized. Talk to me about how this shadow report came together for a moment. Cynthia, I'll come to you on that. I understand you all sprang into action last week, and this remarkably clean PDF that I'm looking at came together in a hurry, on a Google Doc?

Cynthia Conti-Cook:

That's right. We used the time that we had to come up with something as powerful as we could. I'll say that in addition to the document that many corporate dollars bought, and the corporate theater they bought, they also bought a crucial period of time in which Congress could have acted. We should think of the purchase that was made by the private sector here not solely in terms of the regulatory favoritism they were able to achieve, but also, for lack of a better phrase, the time suck they were able to create, undermining the value of this precious window we have to pass effective legislation based, as Sarah said, on the mountains of evidence we already have about the things that need to be prioritized and protected.

And in addition to racial justice, climate, and the things that Nasser mentioned, I am going to throw democracy in there. This entire process was a portrait of corporate capture. "Capture strategy" is something these companies explicitly speak about amongst themselves and in their industries to describe their targeting of the public sector, and how to achieve a carpeting of corporate tools across public sector systems.

And so, I just wanted to add that they were able to purchase a great deal of delay with this report from Schumer and the Senate. What we did was respond to it with an accumulation of conversations that we've all already been having, some of us for as many as 10 years, about what kind of regulation and legislation, what bright lines, we need to ensure that the money coming into the public sector doesn't prey on members of the public in the ways we've already documented it has. And the other aspect of that timeline that's really important, as the report shows, is that many other countries have acted, and the US is really in the hot seat. And what did it do with that hot seat but wait some more?

Justin Hendrix:

From the outset, Senator Schumer said that the North Star for this entire exercise was innovation, so everything was going to be oriented more or less towards that goal. If you had to set a North Star, if you could have gone back and convinced Senator Schumer perhaps to frame things differently, is there a word or a phrase that comes to mind?

Paromita Shah:

I wish that he had put civil rights and accountability at the forefront of the report. There was just a complete lack of awareness about how powerful technologies like AI and machine learning impact, and will essentially automate, some of the most significant decisions in a person's life. What I really was hoping to see from this report was an acknowledgement that the senators understood that. Instead of, I think, trying to appease corporate concerns about losing market share in government, he really just pushed forward on an agenda to automate these decisions. The area that I work in is immigration, and DHS is already an early adopter of these technologies. The Department of Homeland Security, under current law and under the regulations, is going to be leading a lot of the implementation, and that impacts, at one point or another, 46 million non-citizens in the United States.

And so, the scope and scale of the types of technologies that will be deployed on communities is really significant, and the inability to understand why privacy matters, why civil rights matter, was something that really popped out in this report. It showed how little they actually know about what the impact will be, because these decisions soon will be part of whether DHS will deport or detain someone, or separate families, whether they're going to naturalize someone, whether they're going to protect someone from persecution or torture. Those are life-changing, life-altering decisions, and I was honestly a little shocked that I didn't see more acknowledgement that those things matter.

Justin Hendrix:

One of the only mentions of immigration, I believe, at least in the part of the report that encourages consideration of legislation and funding, is the idea of encouraging high-skilled STEM workers. There doesn't seem to be much interest in the topic of immigration in the broader report itself.

Paromita Shah:

Yeah, that's absolutely right. It was framed a little bit that way in the executive order too, and in these new civil rights regulations there's definitely an acknowledgement that innovation matters. I'm not trying to get in the way of innovation. What I'm trying to do is acknowledge and set up essential safeguards, and it feels really odd that the notion of the public interest was lost in this report, that the social contract we have with impacted communities and society in general was lost in favor of corporate interest. He could have done so much more with this. I think he could have really aligned himself with the right values, along with good innovation, but he chose to go a different way. It was just truly disappointing for me, having actually seen DHS use very powerful technologies over the last 20 years, to watch this kind of exploitation of data and exploitation of communities continue.

Justin Hendrix:

Sarah, I want to come to you. I sat on a panel last week with Amba Kak, talking about the roadmap, and she pointed out that one of the main things that appeared to be missing conceptually from the Senate's report, which you cover in your shadow report, is the issue of competition, concentration of power. And I suppose that might be one of the most glaring areas where it seems like perhaps industry had its way, in terms of being certain that the scale and concentration issues were not addressed?

Sarah West:

Yeah, I mean, it's particularly notable given that so much of the thrust of the roadmap focuses on funding the industry. The very heavy level of concentration in the AI market matters because of where those dollars are ultimately going to flow. Under the current conditions, very likely they will ultimately benefit the same very small handful of big tech firms that already dominate in AI, and that's leading to all kinds of other harms to the public, which we cover in other sections of the shadow report. Absent any real recognition of competition as a significant area of concern, the roadmap does very little to perturb that status quo.

Cynthia Conti-Cook:

I just wanted to add that another glaring, gaping hole in the Senate report is that it fails to cover, beyond elections and deepfakes, how AI technology and the industry itself have really started to fundamentally change the public sector. What I mean by that is access to courts, the increase of forced arbitration, and the corporate values of secrecy pushing up against public interest values of transparency and being able to access information. I don't know if you all have followed the New York City MyCity chatbot in recent weeks, but there was a report about how it was producing information that suggested, for example, that bosses can steal their workers' tips. And when you ask what kind of training data led that chatbot to produce that kind of wrong information, you might want to go to New York City's annual algorithmic reporting for 2023 and see what training data the MyCity chatbot used. But what does it say? It says that the training data is proprietary.

And so we see exactly in that kind of example how industry values of secrecy, control of information, and lack of transparency are coming right up against the public interest, democratic values that we need in order to make sure that these systems aren't harming us. Not to mention the capture strategies targeting public procurement systems, drastically trying to reform and deregulate the safeguards that procurement systems have previously held in order to make sure that, I don't know, a chatbot that is going to completely erode public trust in government communications doesn't get deployed without being properly vetted.

Justin Hendrix:

I almost read, in the focus on elections and mis- and disinformation to the exclusion of other contexts, as you mentioned, a kind of self-interest above all else among the legislators: "We don't want to be hurt by this stuff, but we're not so keen to necessarily act in other contexts."

Paromita Shah:

We all know what Citizens United has done to our elections, and the role of corporate money and billionaires in them. It has also had an influence on what the federal government is purchasing to provide essential services to our communities. And I guess, I'm not sure if this is getting to your question, but I think that when you prioritize innovation before you look at the record of the agencies tasked with providing the essential services the public needs, it's a problem. Many of the companies that will be contracted, and I'll just speak from my experience with DHS, are bad actors, like Clearview AI, or a data broker like LexisNexis. And many of these corporations just rely on a huge amount of data that has been collected by these agencies and by other commercial sources.

I guess what I'm most concerned about is the idea of surveillance being operationalized into policing and into deportation. That is a serious question. Policing and surveillance are something we should be more concerned about than I think anybody really has been letting on. In this whole debate about AI, we've been stuck in a conversation about existential threats. Is AI going to turn into the Terminator? But we really haven't dealt with questions about procurement, about oversight, about accountability, questions that we can answer now, hearings that we can have now, before we decide to pour billions into innovation, assuming that what it's going to do for our agencies is make them more effective, and for some reason, make them more intelligent. That seems really at odds with what we should expect.

At this point, AI and data, in our view, are almost like a public resource. They are in almost every facet of our lives, and instead of looking at them as a key tool for consumers and businesses, they really need to be viewed in terms of what's going to work for the public good. And I think that we need to really shift the conversation away from existential threats, away from fears of geopolitical wars, and really get into essential questions about what the United States needs, and what our communities need.

Nasser Eledroos:

I don't want to overstate the role of companies like Palantir and Clearview, these overtly harmful companies, but to me the real harm is from the companies that aren't engaging so overtly in that type of conduct, but are nonetheless still providing the systems and tools that enable it. I spent three years working in a prosecutor's office, and during the time I was there, the office in question was using IBM tools in particular. IBM has a program called CopLink, which is, I'd say, a late-2000s, early-2010s facial recognition software tool. And I want to flag that it was just in August that a woman filed suit against the Detroit police over the false recognition that led to her arrest, eight-month detention, and subsequent release after a lack of evidence. And that happened based on systems and tools that we are seeing deployed everywhere, all across the country, at the state and local level.

And so, while it has been encouraging to see states step in with AI legislation where the federal government has been absent, it really speaks, I think, to a real harm that we are facing, where this is the norm, and it really should not be. On top of that, I've been floating this thought that Leader Schumer and the authors of this report seem to be engaging in some sort of balancing act that is sidelining civil rights and racial justice. I'm not a trade policy person, but there's a lot of U.S. versus China happening at the moment, which is overshadowing the protection of our fundamental civil rights. And unfortunately, we have been bringing technological tools into basic government functions in the United States and, under the guise of trade secrets, not allowing access to how we navigate bureaucratic systems. And so decision-making systems that are already biased, in the criminal justice system and the social welfare system, are just getting exacerbated exponentially. I don't really see a way for us to solve this without acknowledging that truth and confronting it head on.

Sarah West:

And there's such a disconnect between what Nasser and Paromita are describing and the language that's in the report, which presents AI as a highly speculative technology, one where we don't yet know what the impacts are going to look like. One of the things that's front and center in the shadow report is that, in fact, we know an awful lot about how AI is already in use and already impacting people, and it's not time for another framework. It's time for much more meaningful action that is legally enforceable.

Justin Hendrix:

It did strike me that in your shadow report, you're a little more aligned with the Senate when it comes to industrial policy. Do you think that's a correct characterization? You want public funds perhaps to be under a clearer mandate, but it strikes me that your language isn't terribly far off from where the Senate ended up when it comes to the CHIPS Act, the National AI Research Resource (NAIRR), and other related industrial policy.

Nasser Eledroos:

I would say from what I understand, there's a lot more alignment there, but it's what is absent from the Senate's report that is really the big issue here. I would say that I certainly feel that we should have all of these things. It's not a tit-for-tat. It's not an either-or. It's not a binary: racial justice and innovation can occur concurrently. And in fact, just yesterday, I believe, Leader Schumer convened the committee chairs to basically urge them to advance AI bills.

Why couldn't that have happened seven, eight months ago? Why couldn't that have happened after the first insight forum? It really feels to me that this whole series of forums has sucked the air out of our ability to own and advance the conversation around meaningful legislation. We have yet to see proper bills advanced by the leader's office. And it's really unfortunate, because we're now almost halfway through this calendar year, and we're entering the political cycle. Who knows when we'll really begin to see more action. And unfortunately, I think the harms will continue, not just in the criminal justice system, but in all the other domains that we've written about.

Justin Hendrix:

And Sarah, maybe I'll come back to you on this question around industrial policy in particular, because I know you've written about it. It's almost like you hear the right words being spoken, but still it's not coherent enough?

Sarah West:

Yeah. So, at AI Now we published a report called AI Nationalisms earlier this year that looks at this reinvigoration of industrial policy across a number of different jurisdictions, including the United States. There's this new wave of efforts to make public investments in the AI sector under the premise that this is going to somehow democratize what's already an unaccountable and highly concentrated industry. At AI Now, we've been quite skeptical of this notion, and have pushed in particular for the need to start, not from how do we democratize AI, but instead from a very crisp vision of why AI in the first place, and whether AI is in fact the right tool for the task.

And I think we have lots of evidence that the way AI is already being deployed primarily serves two goals. One is to ramp up the ability to surveil populations, whether that's employers wanting analytics on their workers that allow them to speed up the rates at which they're expected to work, or to pay them less. The other use case is essentially automating austerity politics and reinforcing inequality. I don't think either of those goals really serves the public interest. And if that's the case, why sink billions of dollars into this industry? I think we need a much more compelling vision for the sector that puts the interest of the public at the center, and not the interest of the tech industry.

Justin Hendrix:

We've already talked a little bit about surveillance, privacy, some of those types of questions, but I know, Cynthia, Paromita, we have people on this call who have worked very closely on those things. Let's say you were elected to the Senate tomorrow, and put on a committee of jurisdiction. What would be the first thing you'd want to interrogate about U.S. policy around artificial intelligence when it comes to questions of privacy and surveillance? It strikes me that it's almost a catch-22: most AI technologies are somehow fundamentally surveillance technologies, and it becomes very difficult to disentangle how to use the technology at all without generally increasing the amount of surveillance that's going on.

Cynthia Conti-Cook:

I guess I would focus on what we already know. In addition to the immigration system, which, as Paromita said, has been an early adopter, the criminal legal system has already been using a lot of these technologies for quite some time, and there are problems with them, and we can map and project future problems from the problems we've already seen. For example, probabilistic DNA genotyping software has been in use in the criminal legal system for more than a decade already, and we already know that there are fundamental problems with how those systems are procured, with how they are tested, with how they're used in the courtroom, and with the companies coming in and claiming trade secrets so that defense experts cannot actually understand how a system came to a conclusion that is being used as evidence against someone in court.

Years ago, I did an amicus brief in a case where the company was literally weighing the years a man was facing in prison against its three years of R&D and investment, arguing that its R&D and investment meant that its trade secrets were more valuable than the search for truth about whether the conclusion the system reached about his presence at a scene was accurate or not. And so, we can already map from the problems that have been very well documented in the criminal legal system and the immigration system in particular, and understand the types of problems we can foresee. We are also already seeing these problems persist in benefits cases, and in all sorts of places where the digital public good has really been designed predominantly by police and the private sector, and where the public interest in having a social safety net that doesn't come with a surveillance tax is really cornered and minimized in the conversation, if not completely dismissed.

Paromita Shah:

If I could go and be on that committee, I would be asking questions about the terms of suspension and termination of AI technologies that we know are biased and are not acting in the way they were intended. When are we going to decide, truly, whether they're actually needed to enhance or provide better services for our communities? I'd be asking questions about monitoring. I'd be asking questions about transparency. How do we know that the contracts these companies are making with our federal agencies are actually transparent? So many of them, in our experience, are redacted for trade secret reasons. Why can't I look at the contract and understand what our governments are buying from these corporations, and why they're needed?

And I'd really like to add more voices to this conversation that Senator Schumer has been having, a conversation about insights that included very few civil rights activists and very few impacted people. I would want to expand that conversation to bring in more voices and actually have a public conversation that isn't set at the level of a D.C. insider game.

And so that's where I would be putting my energies, because I think you're right, AI is a surveillance technology. But what we're forgetting is that the companies have been surveilling us through commercial products for a very long time. We really just need to understand what's under the hood, and then we need to understand what a true accountability measure looks like that will bring them to task, instead of giving them a longer leash. And I think suspension and termination of these technologies is truly a good one. If there is a record of violations, why can't we actually ask why they shouldn't be suspended? When it comes to companies like Palantir and Clearview AI, where I think the worldwide condemnation of some of their products is real, where fines have been levied against these corporations, I don't understand why we would contract with them. And so those are the kinds of questions I would want to raise, if I were in that position.

Justin Hendrix:

One thing that is addressed in the Senate roadmap that you spend a little less time on, and I think lump in a bit more with your focus on competition, is intellectual property, copyright, some of the questions around those issues. Should I read your handling of those things as somewhat more assent to the Senate's concerns there? It does seem to me to be a good thing for Senate committees to be looking at very closely. A lot of the key questions are bound up in those problems.

Nasser Eledroos:

So for example, just recently OpenAI released a statement saying that they need to talk about data training principles, and announced a program to let creators opt out, which I think speaks to this. If I were on a committee like that, I would begin to ask more stringently what sort of transparency principles AI companies are going to be designing into their actual models and systems. We highlight the banning of ChatGPT in Italy for a reason. And that, if I remember correctly, was in no small part due to how creators had their information used in the system without really knowing where it was coming from. There really should be a way for all people to have their works opt in rather than opt out.

And so, a lot of what I'm thinking about is how creators get rewarded. How do creators understand what's going on with their content in the large language models, in a way that's accountable to them getting compensated for its use? Because of the way these models are being built, just the other day I saw someone selling Claude Monet's paintings, redux, and you look at them in detail and realize, "Oh, that's actually AI." But if you were to just see them in passing, you would not have known that. Think of that, but times a billion, with all of the creators, particularly Black and Brown content creators, on the internet, and you find yourself in quite a travesty.

Sarah West:

One thing that we did say in the report as well is that this list of issues should really be seen as the floor, not the ceiling. It's what we were able to compile in a very short period of time, and it's already a very long list, but it's by no means the end point, and we would certainly hope that there's scope to continue building upon it.

Now, in terms of what I'm looking for, in terms of where policymaking is going: we've essentially run out the clock here on legislative movement, and that's, I think, one of the real detriments of this process, that it stalled any movement in a year when there was actually momentum for action. That said, I think there are other fronts we can look to. Labor organizing has in many respects formed the front lines of policymaking. Community organizing is the front lines of policymaking. And so, other mechanisms for pushing forward on seeking accountability are, have been, and remain really critical to AI governance. It's just a shame that Congress has run out the clock in this session.

Cynthia Conti-Cook:

A few years ago, I represented a man who was in prison, and he, along with a few others, had realized that there was one question in the risk assessment tool being used, the COMPAS tool, that seemed to be determinative of whether or not they were given a risk score that recommended them for release. One of these gentlemen wanted to file a freedom of information request with the prison system directly, to access the training manual and understand how the counselors were supposed to answer this question, because it was rather nebulously worded. It was asking something like whether the person has a notable disciplinary history.

Many of the men I was speaking to at that time had been in prison for decades, had been in prison since they were very young, and had maybe five to seven years of some disciplinary activity. Then, in their 40s, 50s, and 60s, they were slowing down in many ways, and were also just maturing and changing. They wanted to understand how their counselors were coming to different answers. And so, they tried to file this freedom of information request, and what the prison responded was that it was a trade secret.

And so when we're talking about intellectual property, there are the concerns about protecting artists and creativity, and all of that, but there's also a fundamental way in which, for states that are already strongly incentivized to be secretive about how they go about decision-making, trade secrets and the buffer of privatization lend yet another layer of secrecy. Especially when you're operating in systems like the criminal legal system, they really operate to support the state's ability to be very secretive about how it makes decisions. In the end, we were able to get the manual. The answer was not that elusive. It was still a pretty nebulous concept they were trying to get an answer to, but it was a really interesting process in terms of just how fundamentally the introduction of so many private vendors into the public sector is changing the way we can expect information from government.

Justin Hendrix:

Have you had any response, either from the senators who produced the roadmap to which you replied, or from others? I've seen great press coverage, of course.

Sarah West:

So we haven't had a response from the senators. I don't know that we were necessarily expecting one, but one of the things that was most heartening was seeing the response from other organizations, particularly when we opened this up for sign-ons, to see the very wide range of organizations who signed on and are continuing to carry the work forward.

Justin Hendrix:

So, if perhaps Senator Schumer and the Gang of Four who put together the Senate's roadmap hoped to galvanize activity around AI, perhaps they have done so, just not in the way they intended. I appreciate all the effort that you all put into this in a very short time, and I appreciate you talking to me about it today.

Nasser Eledroos:

Thanks for having us.

Cynthia Conti-Cook:

Thank you.

Paromita Shah:

Thanks for having us.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
