
Unpacking the Blueprint for an AI Bill of Rights

Justin Hendrix / Oct 11, 2022

Audio of this conversation is available via your favorite podcast service.

Last week, President Joe Biden’s White House published a 73-page document produced by the Office of Science and Technology Policy titled Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.

The White House says that "among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public." The Blueprint, then, is "a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values."

The five principles listed in the Blueprint are:

  • People should be protected from unsafe or ineffective automated systems.
  • They should not face discrimination enabled by algorithmic systems based on their race, color, ethnicity, or sex.
  • They should be protected from abusive data practices and unchecked use of surveillance technologies.
  • They should be notified when an AI system is in use and understand how it makes decisions affecting them.
  • They should be able to opt out of AI system use and, where appropriate, have access to a person, including when it comes to AI used in sensitive areas such as criminal justice, employment, education, and health.

To discuss the blueprint and the broader context into which it was introduced, I spoke to one expert who had a hand in writing it, and one external observer who follows these issues closely:

  • Suresh Venkatasubramanian, a professor of computer science and data science and director of the Data Science Initiative at Brown University, who recently completed a 15-month appointment as an advisor to the White House Office of Science and Technology Policy; and
  • Alex Engler, a fellow at the Brookings Institution, where he researches algorithms and policy. He wrote about the Blueprint for Lawfare last week.

What follows is a lightly edited transcript of our discussion.

Justin Hendrix:

Thank you both for joining me today. We're going to talk about this recently announced Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, a 73-page document published by President Biden's White House earlier this month.

And Suresh, I want to start with you because, as I understand it, you had a 15-month appointment as an advisor to the White House Office of Science and Technology Policy, where a lot of your thinking and a lot of your work essentially went into what has become this blueprint. So, can you give us a little bit of context about your role and certainly about how this blueprint came together?

Suresh Venkatasubramanian:

So, it has been 15 months. I joined OSTP in May 2021 and left in August of this year. This was one of the things that people had been thinking about at OSTP, and there was a whole team of us working on it for quite a while. And I think the October 8th, 2021 op-ed that Deputy Director Alondra Nelson had put out at the time articulated the vision of what we wanted to do: to lay out something that goes beyond just very high-level principles and couples them with specific, actionable guidelines, instructions, guardrails, blueprints for how to actually achieve the protections we wanted for people in the age of algorithms. So, the op-ed, which I'm sure you've seen, lays out the plan, and this is the result of that plan.

Justin Hendrix:

And can you talk a little bit about that process? I understand it included listening sessions and meetings in which various folks from civil society, from industry, et cetera, were brought into the discussion, some of which were moderated by you.

Suresh Venkatasubramanian:

So, there was a lot of engagement. I will say that this OSTP, at least, has felt from the very beginning that broad and deep public engagement is very important. There's always more one can do, but we tried to model that degree of engagement, which means we had many one-on-one conversations. One of the appendices lists out all the different one-on-one conversations we had with companies and other entities who were stakeholders in this, including an amazing group of high school kids who are thinking about some of these issues and doing activism work around them, which was amazing. So, we had all these one-on-one engagements.

We had convenings themed around specific topics: health, civil justice, criminal justice, technology, and so on. They're all available for viewing on YouTube and other media platforms.

We put out an RFI specifically around biometrics, which was a particularly interesting and tricky topic to think about. And we had listening sessions associated with that, as well as formal written responses to the RFI that again are publicly available and can be reviewed. We commissioned an external report from the Science and Technology Policy Institute to analyze that and that report is also available.

So, all of these things basically helped inform, guide, and shape, from the public side, what people were thinking and what people were keen to understand and do about AI regulation. And this was separate from a number of internal engagements within the US government that we did with agencies literally across the board, talking with folks who've been doing AI implementations and who've been developing guidelines and regulations in their own agencies at all levels. So, that was the whole process. And as you can imagine, that process takes some time.

I know there was a lot of impatience, which I interpret as excitement, and which I was happy about, to put this out. And it took time to bring it out. It took time to make sure that everyone in the US government was comfortable with what we were saying, and that we understood the details and the technicalities that different agencies bring to this picture. It's a complicated picture.

And I think Alex has talked about this as well. It's one thing to have a broad swath of guidelines, but if you want to make them specific and actionable, you really have to understand what's happening within each individual agency or sector. And that has been sort of the US way of thinking about this.

So, that also took some time: to engage and understand people's perspectives, what their concerns were, what they wanted to emphasize, and then to understand all the stuff they were doing, which the fact sheet that comes along with the bill of rights talks about. So, that's what this whole process was, and that's where we ended up on October 4th.

Justin Hendrix:

I want to come back, Alex, and we'll get into what we do and don't know about what federal agencies are doing with regard to AI at the moment, because I know that's something you've looked at pretty closely. But before we do that, let's talk about what the document says and, at a high level, what the intent is behind this bill of rights, and also what its scope is and is not. I understand it's a document that's really targeted at the federal government itself, and we'll get into why some find that to be problematic or a point of criticism. But what are the principles behind this thing? What was the White House trying to do by laying out this bill of rights?

Suresh Venkatasubramanian:

So, first of all, I wouldn't necessarily agree with you that the target of the document was the federal government only. I think it was broader than that. But having said that, what are the goals? Our goal was to articulate guardrails and protections for people living in a society that's increasingly powered by automated systems.

And I deliberately say automated systems because the effects of these systems go beyond merely what can be construed today, in October 2022, as AI, which could change by November 2022. So, we wanted to make sure we had something that was, in some sense, I wouldn't say future proof, but definitely proof against the way in which technology can change its name and evolve over time.

So, we wanted to articulate protections, and it was very important to us to couple them with actionable things that could be done to ensure those protections. The reason why it's 73 pages and not five is because we spent a lot of time trying to lay out expectations, lay out a case for why these issues are important, what the concerns are, what can be done in terms of expectations, and why this is in fact already happening in so many different places. It's not something pie-in-the-sky or unrealistic. It's actually very grounded in things that are actually going on. But of course, we need to do a lot more, and there are examples that we can draw on and build on to do more. That's why the document has the principles themselves and all of this.

So, what are the principles? They're actually really very simple to state. Technologies should work; at a very basic level, they should just work as claimed. They should not discriminate. They should not collect data willy-nilly for no good reason. They should be accountable and not invisible. And in all cases, there should be very reasonable backup systems, human systems, because technologies fail and they will fail.

And that's really pretty much all there is to it. These are very natural things. In some sense, it's surprising one has to say it, but we had to say that this is what we want. And then the rest of it is how, and what this actually means, in great detail. And I also want to say that these ... Actually no, I'll stop there for now, but that's basically what we want and this is how we want to instantiate it.

Alex Engler:

I'd love to zoom out and add a little broader context, because I think everything Suresh just said is true and was really necessary for an important political reason. The Trump administration spent an enormous amount of time on artificial intelligence and algorithmic policy. They had two quite meaningful executive orders.

They poured money into National Science Foundation-funded AI institutes. They built and encouraged some new infrastructure, including a sort of interagency collaboration called NAIRR around research funding. They even tried to use AI to address the COVID-19 pandemic in ways that I think were quite misguided and probably should have been focused on just core data collection. They put a big White House initiative on AI. And across all of that, something they never did was broadly contextualize civil liberties and algorithmic harms, and the response and responsibility of government in that space. I think I wrote at the time that that was overlooked and important.

And so, some people looked at the AI Bill of Rights and were a little exhausted by its focus on principles, because there is kind of an exhaustion with AI principles and AI ethics in this moment. That's warranted, because we've seen all these ethics statements and we haven't seen as much action as we'd like. I understand and sympathize with that exhaustion, but the government still needed to do this.

We could have a thousand AI ethics statements; it is fundamentally different when the White House does it. And the fact that they did it so thoroughly and so carefully is really meaningful, even if we're bored of AI ethics statements. I think it's important to recognize that this really is the first time we've seen that in such a detailed and meaningful way. And there are other things that are valuable about this. We can talk more about the specifics, but I think that broad context is worth giving credit for.

Suresh Venkatasubramanian:

Thank you, Alex. That's a very important point. And I want to add one point to what Alex said, which is that there is a frame of reference that comes through a lot of the Trump administration work that, being as charitable as I can, can be roughly framed as: AI is awesome, we need to do more of it. Oh, by the way, we should probably make sure not to do a couple of bad things, but wait, AI is great, we should do more AI.

And there is nothing in principle wrong with talking about the benefits; in fact there are lots of benefits that come from using technology in appropriate ways. But you have to have the guardrails. Talking only out of one side of the mouth and not actually recognizing the whole picture was, I think, part of the thing that we felt most keenly when trying to talk about this. Like, yes, but you have to have guardrails.

And there was a lot of internal reaction initially: oh, why are you being so negative? Why are you dissing on tech? But I think the truth is we all generally felt, and I'm a computer scientist, I don't want to diss on my own people, that this is an opportunity. Putting these guardrails in place doesn't mean we don't want to use tech. It's the same way we don't claim that safety belts make cars worse. They actually allow us to drive more and drive faster, frankly, in a safe way. I don't want to belabor the analogy too much, but a lot of what we're saying is, "Look, just put in some safety belts and put some checks in place. Just do the things that are reasonable to do, and then let's see what happens." And that's why, as Alex said, it had to be said.

Justin Hendrix:

I want to zoom back out maybe one step further, Alex, and look at the kind of international context for this AI Bill of Rights. You've written about how the EU and the US appear to be coming into something that we might call alignment on the approach to AI. Do you think that this document contributes to that alignment? Is it getting us closer to a place where at least in the west we appear to have some sort of agreement on how to regulate or otherwise consider the use of artificial intelligence in public?

Alex Engler:

I think broadly, the European Union's concern is sort of the broad framing that Suresh described: that we're overly focused on techno-solutionism, development, and private sector expansion, and less concerned about societal risks and harm mitigation, specifically through regulation. And so, I imagine this somewhat alleviates that concern, in that you see a systemic prioritization of these issues and a systemic approach. It is still very different from the European approach. And so, whether or not these things end up aligning in the long run is a very good question.

Most of the algorithmic protections we're talking about here are in human services, not all of them, but a lot of things like hiring, education, healthcare provisioning, and financial services access. What's important about that in this circumstance is that there aren't that many international repercussions of those; a lot of them right now are somewhat localized. But as platforms expand in what they do, we can expect these to become more international issues. LinkedIn is a great example. LinkedIn has all sorts of algorithms that affect hiring. They operate in the US, and they operate as a very big company in Europe as well. And so, the laws that affect hiring, or algorithms for job ads for instance, would almost become an international trade issue.

And so, if you look at our history with the challenges of data flows, which we are still working out because of EU and US misalignments in priorities and approach, and you expect that eventually some of these high-impact algorithms in social services will live more in platforms and thus become more international issues, I think you can expect this to become a trade issue, and then that alignment really is going to be important. So, the fact that we're working on the same issues is encouraging. Of course, the approach is still quite different.

Suresh Venkatasubramanian:

Yeah. And I should say that LinkedIn is one good example, but even HireVue is another example as well. I was interviewed recently for a BBC documentary on hiring algorithms in the UK and they talked about HireVue, which is a US company. And so, some of these companies have products that apply across the board. And this is going to be an issue with those as well.

Justin Hendrix:

Suresh, can you talk a little bit about what you learned in the sector-by-sector review of how various agencies are employing AI, and about the different threat levels or concern levels you might have in one area versus another? We've just mentioned algorithmic discrimination in hiring, for instance, but you also of course looked at law enforcement, and there are applications of AI in literally everything from health to education. Is there one area that stood out to you where the general understanding of the role of artificial intelligence is well considered, and perhaps one where you think there's the most work to be done?

Suresh Venkatasubramanian:

Yeah. So, first of all, working with agencies was illuminating, just because of the level of intricacy in the internal governance structures, the authorities that different agencies, and different departments within agencies, have over certain parts of the pipeline. Even in something like the hiring pipeline, the way the Department of Labor and the EEOC hand off between each other, they have different realms of responsibility and different scopes. These are all intricacies that we had to engage with. We can't ignore them in dealing with this.

And so, to your point of where I think both the role of AI and the concerns are most well understood: I think hiring is definitely one of those places. The Department of Labor and the OFCCP, their office of, I forget what the acronym stands for, have put out guidelines around this. The EEOC has put out guidelines around hiring practices. There's a lot of rich thinking going on about worker surveillance within the Department of Labor. So, there's a lot of understanding of what the concerns are and what needs to happen, and a lot of active work going on there. So, that's one example.

I would say another example where the concerns are well understood, but where a lot more work needs to be done, is frankly the law enforcement realm: risk assessments and the use of technology in investigative tools. It's not just facial recognition. There are lots of other tools that get used, for example sophisticated AI-based systems for analyzing mixtures of DNA and trying to tease out individual DNA profiles. The line between what was forensic science and what is AI is blurring rapidly. And the problem with a lot of the use of technology in investigative contexts is that there isn't the same kind of governance that would happen if evidence had to be presented in court, for example, because a lot of this stuff happens before you would ever get to a courtroom.

And so, there are a lot of known concerns and very little governance around this as well. So, to my mind, navigating the intricacies of law enforcement and how AI gets used there is still a work in progress.

Justin Hendrix:

Alex, you wrote in your piece for Lawfare about the information-gathering aspect of this and how different agencies have produced a lot of information, less information, or in some cases maybe no information about what they're doing. Can you speak to that a little bit? What do we know about the federal government, for instance, and how it employs AI?

Alex Engler:

Oh, so there are two sides to this. One is how the federal government is consolidating and bringing together information about the algorithms it's using itself. But I'll come back to that in one second, because I want to build off something Suresh said which is really meaningful, which is that agencies' ability to gather information about the market also differs dramatically.

So, you have some agencies that have quite a lot of authority to do information collections; the Federal Trade Commission is a great example. They can go subpoena tons of data. I think this is actually technically an administrative subpoena, so it's sort of a lower barrier to go out and get this information; you don't necessarily need to go to a court. And you can ask people, organizations, and companies that you have authority over to give you data. The FTC did this, for instance, for companies like Venmo that do electronic payments, and then they can review that data for discrimination or other issues.

But the EEOC, for instance, absolutely can't do that. In fact, the EEOC can't even target vendors; they can only target the hirers themselves. And that's a real barrier for them. Suresh mentioned HireVue before. The EEOC can't actually enforce hiring discrimination law over HireVue in any way. They can only do it over companies that use HireVue.

And so, we have a bit of a mismatch in capacity on some of these issues. I'm very hopeful that a path going forward from the AI Bill of Rights will be identifying some of those challenges and maybe elevating them, whether through a change in executive order or a change in law, where that's necessary.

The other half of this is government's use of algorithms. There is a sort of algorithmic creep going on, a slow expansion of algorithms into more and more decisions, and slightly higher-stakes decisions. That's happening more slowly in the federal government than in the private sector, certainly, but it's still happening.

And you would want to see rules and guidelines in a few different areas. One is procurement: what is the government's plan to set guidelines for what types of software, and under what circumstances, it procures significant algorithmic decision-making systems? When is that appropriate? And then on the building and developing side, not only standards for actually doing the process, but also documenting those systems and making sure we're aware of all of them.

This is where I do think there is a significant shortfall around the AI Bill of Rights. There was an executive order from the late Trump administration that called for a broad registry of the current uses of AI in government. This was supposed to be a first pass; it didn't need to be the most rigorous thing ever, but ideally it should have given us and the public a clear sense of what agencies are using algorithms for. And while that technically did happen, it was really quite underwhelming. I think it was a bit of a missed opportunity for more data collection around how government is becoming more algorithmic.

Justin Hendrix:

Suresh, I want to give you a chance to weigh in on that.

Suresh Venkatasubramanian:

No, I think that's true. A minor point: while the EEOC cannot, as you said, go after HireVue, and can only look at what companies doing the actual hiring are doing, this is where an interesting handoff comes in. The OFCCP, which is the Office of Federal Contract Compliance Programs within the Department of Labor, can actually put out guidelines for any company that gets federal contracts.

So, much like how we think of procurement as a vehicle to exert force and change on how companies conduct their business, this is another vehicle through which they can exert some control over what companies do, in that any company involved in a federal contract has to manage hiring practices in a certain way. And there are things that can be done there. So, these are subtle places where there are levers that can be deployed, but it has to be done in a very careful way because of all the constraints involved.

But to Alex's point, I think this is correct. It took a long time to get information from agencies on what they were doing regarding AI and, secondly, what they were doing regarding responsible AI. One of the Trump executive orders asked agencies to talk about what they were doing to make sure that their systems were in compliance with privacy, civil liberties, and other kinds of transparency and accountability guidelines. And even that information has been slow to come out.

Part of it, I think, is because a lot of these systems get embedded at very low levels across an agency, especially the big ones. It's often hard to know, even within their own organization, whether they're using AI or not, which is what the Trump executive order asked about. And that's why I often have a problem with narrowing things to AI, because it's easy to define it or not define it depending on how you want to reveal or not reveal what you're doing. Logistic regression could be AI if you want to sell it; it could be not AI if you don't want to talk about it. And so, that kind of thing is always a problem when you start asking, "Well, are you using AI in your systems?"

But it is something that we spent some time trying to understand. The fact sheet, I think, is a reflection of those attempts to reveal what's going on inside agencies and what they're doing about their use of algorithms. So, I do agree that it's been slow in coming. Alex, you mentioned this as a failure of the bill of rights; I think it's related to it, but it's something that is ongoing as a result of developing the bill of rights. There is ongoing work now to use this to shape and understand what's happening within the agencies.

Justin Hendrix:

Alex, I'll just reference a couple of the agencies that you called out for prior responses that were, as you say, functionally useless: the EPA, for instance, and the Department of Energy, which apparently said it has no information about its use of artificial intelligence in response to that earlier query. Are there agencies that you feel have done a very good job of staying on top of this, ones that are shining examples of best practice, really the ones that the rest of the government ought to look at and say, "We should do it like that?"

Alex Engler:

So, yeah, a point of clarification, and Suresh and I should apologize for being so in the weeds on this that we didn't clearly delineate.

Suresh Venkatasubramanian:

I never apologized for being in the weeds, I'm sorry.

Justin Hendrix:

Fair.

Alex Engler:

There are in fact two Trump-era executive orders that were, in my opinion, a little under-executed on, or, in what I think is Suresh's totally reasonable framing, not fully executed on yet, in that there are projects that are still ongoing, which I also think is encouraging. This issue isn't going anywhere as long as there is considered progress. I think that's a reasonable expectation.

But the first is about whether or not agencies document their own use of AI, which they did very, very sparsely. They sort of submitted, "Here's what we're using AI for," with these very brief descriptions, but it was missing things like: did you build it or did an outside contractor build it? What was the outcome variable? Who do I contact if I think this is wrong? What are the risks? Very, very little information like that.

The other, which we were just referencing a second ago, Justin, was about regulatory authority: what types of AI systems are emerging that these agencies have authority over? And really only Health and Human Services took that seriously. They were the only agency that went through the process of asking, "What do we see in the market, and what do we see out in the world of health science and research and products, that is really using algorithms in a way we should have our eye on under our existing regulatory authority?"

And that ended up being really useful. They ended up mentioning 12 different legal statutes, several information collections, like how algorithms are changing genomic sequencing, and some emerging AI use cases, like in medical imagery. By doing that, you got a holistic sense of what the agency was saying: here's how algorithms are changing our regulatory authority.

Again, like Suresh mentioned, there's still time. This is still a thing that agencies could do. But unfortunately, I think just because of the timing, and maybe because it was a Trump executive order, agencies didn't feel that motivated to do it. And then you had the Biden transition come in, so it's different people and they have their own problems and priorities. So, I don't want to present it as a failure, that's strong, but I think it's something that needs more work going forward, and you need to convince the agencies to do it somehow.

Suresh Venkatasubramanian:

I think it's fair to say it's a failure. There was a schedule and they didn't deliver on schedule, and I think it's fair to call that a failure. More pressure to produce the required thing is perfectly legitimate and should be applied. And I think we do want to see the results of what is required by the EO.

I will say one interesting thing that I think comes up here: it is often fiendishly difficult to get the right answers to these questions unless you know which questions to ask. And to Alex's point, if you ask basic questions, you'll get basic answers, and it's very easy to avoid giving the answers that are needed.

So, one report that came out last year was the GAO report on AI governance. It was a very good report for many reasons, but also because it had a lot of very specific questions that anyone implementing an AI system needs to ask about the system they're using, whether it's governance, testing, or validation.

And it turns out that knowing which questions to ask is itself a skill. If you don't ask the right questions, you will not get the answers you're looking for. And I suspect, well, more than suspect, that part of the reason the answers were underwhelming is that the questions were not framed in a way that would force the right kinds of answers.

Justin Hendrix:

And just to that point, something Alex brought up made me think about this: is this partly a language thing? Are people able to evade the query around artificial intelligence or machine learning by categorizing a system without using those terms, so that something you might regard as a machine learning system that should be swept up into a consideration of AI implementation somehow doesn't get included?

Suresh Venkatasubramanian:

Yes. Absolutely.

Alex Engler:

I agree with Suresh. The definition challenge is ongoing, and one of the little tricks of the AI Bill of Rights is that it doesn't need to define AI. It basically says, "Hey, agencies with significant regulatory authority over important human decisions like we mentioned, hiring, healthcare, employee surveillance, all this: you need to figure out what's important and what's being automated in your space that matters. And we don't care what methods are being used."

And so, by passing this off to the sectoral and application-specific regulators, you skip the step where you need to broadly define AI in a way that's universally useful. The European Union is struggling with that mightily. They have an AI Act that is enormously long and still does not have a final definition of AI, despite the fact that they are more than a year and change into the discussion and probably within a few months of passage, three or four months maybe.

And so, it's a real challenge to define it concretely. If you're trying to skirt under the definition, you can say, oh, well, clearly people who mean AI mean neural networks. They mean things with transformers and convolutions. And if you don't have that, they can't mean a decision tree; a decision tree is just if-else, a bunch of Excel statements, that's not important.

But actually, a lot of the really important models here, especially in finance, in areas like property appraisal or mortgage approval or car loan approval, are not the absolute most cutting-edge things we think of as AI. They're not the beautiful image generation of DALL-E or the language mimicry of GPT-3. They can be much, much simpler machine learning models that might not feel like AI to some people, but are enormously consequential.

Suresh Venkatasubramanian:

If you don't mind me getting on a little soapbox for a second, I really want to thank Alex for bringing this up, because this was a key point for us. I'll say it here: I think the EU made a big mistake trying to define AI, and they're going to get themselves into knots they cannot untangle themselves from.

And I'll say this because when we did our RFI on biometrics, the same issue came up. We got a lot of pushback because we looked at biometrics broadly, based on the impact they were having. And there were a lot of people very upset: we have a definition of biometrics, you have not used our definition, this is not legitimate, you shouldn't be doing this, blah, blah, blah. The truth is that all these definitions get contested and legislated and argued over because the stakeholders have a stake in defining them the way they do.

Now when it comes to AI, let me put my computer scientist hat on again. In fact, I'd given a presentation on this inside the US government a while ago. There was a time when calling something AI was a bad, bad thing. No one wanted to be described as doing AI. The reason it feels like computer vision, machine learning, and NLP, natural language processing, are separate fields is because these used to be AI, and people said, no, no, no, we're not AI, we're doing computer vision. We're not AI, we're doing machine learning.

It might be weird to think of that now, but there was a time. And it just goes to show that if you ask academics, there will be general agreement on what the academic field of AI is about; there's a more or less consensus on that front. But when it comes to what counts as AI in the popular space, it varies widely and wildly based on who's asking and who's telling.

And so, we were very clear on our end. We did not want to get into that game. Our view was, look, as Alex said, the point is impact. The point is who's being affected and who's hurting. That's the goal. That's what the government should care about. The government shouldn't be in the business of trying to define AI, but the government can legitimately ask who's hurting, whose rights are being affected, whose opportunities are being affected, whose access is being affected. That's the test.

And so, what we came up with is what you can think of as a two-part test. There's broad automation, which applies across the board, but not all of it is in scope unless there's a particular kind of impact on rights, opportunities, and access. So, it's not the technology alone that defines whether it's in scope, it's the technology in context, looking at the impact.

And that sort of two-part thing causes a lot of confusion for people who are used to thinking, "Okay, well, give me a list of tech that is bad and a list of tech that is good." But that is not the way to do it, because it also plays into the hands of people saying, "Well, all tech is not bad. You don't know what you're talking about. Some of the tech is good." And we're like, yeah, we're not getting into that. The impact is the point.

Justin Hendrix:

Well, I appreciate that. And Alex knows my interest, and the interest of many others, in language, of course, and in how particular terms in tech policy can sometimes take on a life of their own, even if there is ambiguity. Let me put this to you: when this bill of rights came out, I guess it's now been a week ago, not all the coverage was particularly positive.

I mean, WIRED's Khari Johnson faulted it for being a blueprint aimed at the federal government, but pointed out it's not binding and that it leaves the large tech companies out. You had Protocol's Kate Kaye, who came with the headline, "White House AI Bill of Rights lacks specific recommendations for AI rules." Were you surprised at all by the response from some of the tech press? And are there particular criticisms that you found to be valuable or warranted in terms of the public consumption of this, and ones that perhaps you took issue with?

Suresh Venkatasubramanian:

I think Alex pointed this out in his article. This is a white paper, a document, a blueprint from OSTP. We don't make laws. We don't make regulations. In fact, we have strict rules on how we can even talk to regulators; we have to have lawyers present the whole time because we can't just go and talk to the FTC.

There are limits on what OSTP can do. And this document was never meant to be a set of regulations or a set of laws; that's what Congress or the regulatory agencies would do. What this was always meant to be is what it is: a blueprint, a vision, an articulation of a vision, a set of values laid out, as Alex said, for the first time in any form in the US government. I think it's perfectly reasonable for people to say this is not enough.

I think we would say the same thing too. This is a start, not the end. This is the start of a long process. But what we tried to do in the technical companion, in the section on practices, is to point out all the different levers and mechanisms that already exist in small ways to try and push these things forward.

I think it's more satisfying to say, here is a law, and the law does everything we want, like the AI Act. But for many reasons, if we had to shoot for that process, this might never have happened. Our view is that it's better to put this out, let a thousand ideas bloom, and see how this plays out. This is a start. So, I don't blame anyone for pointing out that this is not binding and doesn't have the force of law. They're absolutely correct, but I think that is also a limiting view of what this can be.

Alex Engler:

If I could just add a couple of quick thoughts. Broadly, I think the AI Bill of Rights is a responsible and reasoned path forward. I talk in my article a lot about why you want a sectoral approach led by regulating bodies that are working with stakeholders. I talk about Housing and Urban Development working with the stakeholders who are asking for a review of property appraisal, and why it's important that they are working on a task that they're motivated to work on.

And also, because you don't have these top-down, vague definitions and rules, you're taking algorithmic harms in the much broader context of the problem. So, this property appraisal project is not just about algorithms, though that is a significant part of it, and they are saying they're going to propose regulations on automated property appraisal; it's also about the professional appraisals, the appraisers themselves, the people.

So, for instance, we also have to look at the pipeline and the licensure of property appraisers, the individual professionals. And that's just the kind of thing you need to really move anything on these sorts of big problems that involve both technology and, usually, human processes. Every single time I've looked at a serious algorithmic harm, that has been the case. It has been a mix of an algorithmic process and people, and you need to address both.

And so, I do think, again, that broadly this is taking the right approach, and if you look at the list of agency actions, it's pretty good. There are some notable gaps; that's the downside of going with an agency-by-agency, application-specific approach. I mentioned that it's missing, I think, educational access to higher ed. Insurance as well: it's not clear that there's an approach to looking at things like life insurance or car insurance offerings, which are significant uses of AI and algorithms.

And then the law enforcement one, as we've mentioned briefly, is an enormous and somewhat disconcerting gap, because you have to be worried about different incentives across law enforcement agencies that may not necessarily want to do this and don't want to engage with the public on some of these issues.

So, that's my defense of it. I will say it was probably a communications error to call this an AI Bill of Rights and then have it result in a large non-binding advisory document. I do think that was maybe biting off a little more than OSTP could chew, and I'm sure people working on this noted that mismatch. But I understand why some of the coverage was expecting more, even though there were real limits, as Suresh mentioned, on what was possible, and that might be tied to what this was called.

And listen, the government doesn't have regulatory authority over a lot of what we think about when we think about big tech. It doesn't have authority over search engines and online platforms and giant recommender systems. So, if you're expecting something there, I don't know what to tell you; nothing is going to happen there from this. That's law: look to privacy legislation, the Algorithmic Accountability Act, researcher access, all these other interventions we've talked about. The White House can't do anything about those other than advocate for change on the Hill.

Suresh Venkatasubramanian:

So, we may have had one or two discussions about the title, maybe one, maybe two, I don't know, just a few. I want to add one point, to use a term that Googlers all know and maybe others in tech know: we did dogfood this a little bit. In other words, when this was being drafted, it wasn't as if OSTP was drafting it and everyone else went quiet. Agencies were talking to OSTP constantly throughout this process.

And so, often we would test drive ideas from this with an agency, against what they were thinking of and how they were doing things, to see whether what was being said here made sense. So, there was a bit of dogfooding going on, to use the Google term, while this document was being developed as well.

So, to that end, and to Alex's point, this is not a claim that it will of course work in every sector, but at least there were some initial checks that, okay, this kind of communication, this kind of guidance and advice could be helpful to an agency coming to ask how to do this sector-specific work.

Justin Hendrix:

So, I don't want to end on a negative note necessarily, but I do want to pick up on something you were saying about this Bill of Rights and put it in the broader context of some of the Biden administration's announcements over the last couple of months. Of course, there were the six principles for tech policy reform that the administration put forward recently as well. And if you look at those principles and this bill of rights, clearly there's a lot here that requires Congress to take up this set of problems, that requires statutory and regulatory reform. I suppose you all are as frustrated as I might sound about the inability of the American system of government to produce those types of reforms.

But I don't know, what are we left with here? Good ideas, a blueprint, a way forward? One might call those best laid plans. What do you think it would take for us to get over the hump and to move forward in a more substantial way, with all the parts of government pulling in the right direction?

Suresh Venkatasubramanian:

I will say what we're left with is this: we're left with, for the first time, an articulation of the civil rights and civil liberties implications of AI and automation in general, and of what we should be thinking about and what is important. These are fundamentally an expression of values. These are things we think are important, and they should be important.

Where we go from here: this was always intended as a start. In the truly democratic sense, I would say it is up to all of us, frankly, me in my current role as an academic and anyone else, to lift up these principles and see where we can start putting them out there, to get our colleagues in civil society to advocate for them, and to get our friends in state legislatures to roll them into the rules they're beginning to bring out. I think that's what has to happen next.

And if it happens in Congress, with the Algorithmic Accountability Act or something, that's great too. But I don't think we should wait for that. There's no point waiting for one big thing to happen. We should just keep using this as a blueprint for putting out voluntary guidelines, regulatory guidance, and guidance at the state level, and see where it goes. That's how we're going to make change, bit by bit. I don't think there's one single silver bullet that's going to work.

Alex Engler:

Yeah. I have sort of two takes that I hold in my head at the same time. One is that the type of thing we're seeing through the AI Bill of Rights absolutely needs to happen. We need to take these problems seriously. We need to get regulators to adapt longstanding areas of government regulation to the emerging role of algorithms and technology companies.

And that's really what the AI Bill of Rights, at least to me, seems to be about: the areas where government has historically had a role, in hiring and education, healthcare, financial services access, and more recently surveillance. There's a new proposed rule coming out from the Department of Labor about who qualifies as an employee versus a contractor. And that matters tremendously for everyone who's working under an app, Instacart or any of these algorithmically managed work environments.

And that type of adaptation is enormously, staggeringly important for civil rights, how our country functions, and the role of government in guaranteeing those rights. And I honestly think those things get a lot less attention than some of the newer technology issues, which are important, are much more likely to require congressional action, and are in that second category in my head. That's a lot of internet governance and social media issues. It could be disinformation or hate speech, secure communications and child pornography, surveillance and data privacy.

And it's absolutely true at the same time that while this first category of things needs to happen and can happen, a lot of it can happen through a more incremental, agency-led approach, though again there are limitations and flaws there. But for a lot of these newer issues, which I think eat up a lot of the press oxygen in the room, we probably do need legislation. And the American Data Privacy and Protection Act could pass; it's not a foregone conclusion that it won't, and it would be a welcome and big change if it does pass before the end of the year. So, there is some hope and some chance for progress on that other category of issues, but it's worth thinking about them separately.

Suresh Venkatasubramanian:

I should say the other part of my portfolio, when I wasn't working on this, was dealing with misinformation. And that's a way harder problem, much harder, I would say.

Alex Engler:

And there's little to build on. We don't have a misinformation department, and the government's not necessarily well set up to deal with it because it overlaps explicitly with speech. So, I don't want to diminish those problems. But if you focus on that, and these are also the problems more often associated with "big tech," using air quotes, you might be disappointed by this. Still, everything we saw in the AI Bill of Rights is incredibly important and also more in line with the traditional role of government too.

Justin Hendrix:

Well, and I hear that setting up a misinformation department might be a controversial notion.

Suresh Venkatasubramanian:

You think?

Justin Hendrix:

But putting that aside, I just want to ask you, Suresh, about what you'll do now. You've had this experience in the White House and are now back in academia. How will your research agenda be changed by your experience, and what's next for you?

Suresh Venkatasubramanian:

Oh, it's changed completely. The thing I'm doing at Brown now, and was going to do when I moved to Brown last year, is setting up a new center for tech responsibility. This has always been something I've wanted to do, but it has become even more important after what I've seen over the past 15 months: the ways in which technology gets developed, then translated and moved into policy, and communicated to policy circles.

I think we need to build better tech. As a computer scientist, I feel like we are stuck in frames of reference where the questions we ask and the problems we solve are still the same old, same old, but we can do better than that. We can just ask questions a different way and just expand our vision for how we do computer science. We need to educate our students to be able to think through the broader societal issues and ask those questions themselves. Because it's not just me or a bunch of professors, it's a whole army of students going out in the world and being that change.

And we need to find better ways to communicate the challenges of tech and policy. I've always said that people think of technologists on one side and policymakers on the other. You need people who can speak tech and understand its connections to policy, because technology is changing the world in ways that, frankly, our policy apparatus and our legal apparatus are not equipped to deal with, and we need ways to translate that. So, these are the things I'm going to be doing at the center, and that's my mission now. Having seen what I've seen so far, making the AI Bill of Rights a reality is really what I want to do.

Justin Hendrix:

Well, I know Alex is one of those people who tries to see those connections, and I do my best as well in my teaching and writing, to the extent that I can. So, I suppose we'll have to come back together at some point and talk about whether we're having any success. But I thank both of you for joining me today.

Suresh Venkatasubramanian:

Thank you.

Alex Engler:

Anytime. Thanks, Justin. I'll just extend my thanks to Suresh and everyone at OSTP, who did all this work. They deserve credit for how much they did accomplish.

Suresh Venkatasubramanian:

Thanks. There's a lot of people doing a lot of hard work and people are still there. I left, but they're still there doing the work.

Justin Hendrix:

Thank you both.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
