
Exploring the Intersection of Information Integrity, Race, and US Elections

Justin Hendrix / Mar 10, 2024

Audio of this conversation is available via your favorite podcast service.

At INFORMED 2024, a conference hosted by the Knight Foundation in January, one panel focused on the subject of information integrity, race, and US elections. The conversation was compelling, and the panelists agreed to reprise it for this podcast.

So today I'm turning over the mic to Spencer Overton, a Professor of Law at the George Washington University, and the director of the GW Law School's Multiracial Democracy Project.

He's joined by three other experts, including:

  • Brandi Collins-Dexter, a media and technology fellow at Harvard's Shorenstein Center, a fellow at the National Center on Race and Digital Justice, and the author of the recent book, Black Skinhead: Reflections on Blackness and Our Political Future. Brandi is developing a podcast of her own with MediaJustice that explores 1980s era media, racialized conspiracism, and politics in Chicago.
  • Dr. Danielle Brown, a social movement and media researcher who holds the 1855 Community and Urban Journalism professorship at Michigan State and is the founding director of the LIFT project, which is focused on mapping, networking, and resourcing trusted messengers to dismantle mis- and disinformation narratives that circulate in and about Black communities.
  • And Kathryn Peters, who was the inaugural executive director of University of North Carolina's Center for Information, Technology, and Public Life and was the co-founder of Democracy Works, where she built programs to help more Americans navigate how to vote. These days, she's working on a variety of projects to empower voters and address election mis- and disinformation.

Key points of discussion included:

  • Impact of AI and technology on marginalized communities: The panelists highlighted concerns about AI and emerging technologies disproportionately harming marginalized communities. Decisions regarding the commercialization and design of these technologies often overlook their impacts on these groups. Issues such as voter disenfranchisement and the manipulation of information flows using AI were discussed as significant threats to the integrity of elections and the broader democratic process.

    "Marginalized communities bear the brunt of [technology's] potential to do harm. And those harms often come through the commercialization of the product or design decisions. And so some of the decisions that assess potential impacts on marginalized communities often go unchecked." -Brandi Collins-Dexter

  • Role of media and tech in shaping narratives: The importance of media and technology platforms in influencing public opinion and shaping political narratives was a central theme. The discussion pointed out the need for media and tech accountability, emphasizing how algorithmic biases and the commercial interests of tech companies can amplify social problems and distort the information landscape.

    "An uncritical approach to institutions and systems is even more profound when mainstream media and attempts to make sense of information ecosystems focus on how the tech platforms fuel mistrust of institutions and downplay why people are discontent with material realities." -Brandi Collins-Dexter

  • Importance of trusted messengers and local organizing: Panelists underscored the vital role of trusted messengers and grassroots organizing in combating misinformation and fostering community engagement. They argued for investing in human networks and local infrastructures that can offer reliable information and support democratic participation, especially in marginalized communities.

    "Trusted messengers are absolutely where this is, and we've seen some really significant success cases in community organizations acting as trusted messengers in communities of color. The work of the Disinformation Defense League is one that I would shout out." -Kathryn Peters

  • Policy and regulatory considerations: The conversation touched on the need for thoughtful policy interventions to address the challenges posed by AI and technology. Suggestions included enhancing the civil rights infrastructure within tech companies, addressing algorithmic bias, ensuring data privacy, and increasing accountability through measures such as whistleblower protections and private rights of action.

    "A lot of our current laws don't really...adequately deal with algorithmic bias and the pluralism issues we dealt with... Data privacy is so tied to race and manipulation here. And so having real data privacy protections that are equitable and tailored to deal with some of these things that affect communities of color is incredibly key."​ -Spencer Overton

  • Empowerment through education and awareness: There was a consensus on the necessity of educating the public about the workings and implications of AI and technology. This includes raising awareness about data privacy, understanding algorithmic bias, and empowering individuals with the knowledge to critically engage with the digital information ecosystem.

    "There is no local organizing that can really counter the fact that people have access to all of your data... There has to be data privacy laws and policies in place that protect and retroactively protect communities." -Danielle Brown

What follows is a lightly edited transcript of the discussion.

Spencer Overton:

So it's great to be here on the Tech Policy Press podcast. And here's how we'll work today. Brandi and Dani will open us up with short framing remarks, and then both Katie and I will respond. We'll have a little back and forth between ourselves. So Brandi, let me turn it over to you.

Brandi Collins-Dexter:

Yeah, so I think on this question of AI, technology, and elections, and AI in other realms, there are a lot of exciting possibilities. But too often what we see time and time again is that marginalized communities bear the brunt of its potential to do harm. And those harms often come through the commercialization of the product or design decisions. And so some of the decisions that assess potential impacts on marginalized communities often go unchecked. And we have a handful of silicon landlords who often own the technology and are increasingly too big to regulate. Early use cases for emerging technology are also often limited and rarely reflect the diversity of the population engaging with the tech at scale. And so part of what we see is that all of our biases, pathologies, constructions, and assumptions about race, which in many ways is the most successful disinformation campaign in human history, are folded into the algorithms and exported out across the global web.

And there's not a meaningful way to do content moderation at scale. We're dealing with companies and platforms that are managing billions of users speaking over 7,000 languages, with all sorts of regional distinctions and evolving slang. And so part of what I'm concerned about, or at least thinking about, when we look at the US elections is targeted chaos. 87% of US citizens can be identified by birthday, zip code, and gender. We could see phishing attacks that are tailored toward election officials, and harassment campaigns based on publicly available data. We've already seen instances of this, and often those targeted and harassed are disproportionately Black and elderly people. We could see voter disenfranchisement. The access that election officials have to sensitive voter and government data could be exploited by malware and ransomware. Biases could be baked into the way that voter rolls are cleaned up using AI, in ways that disproportionately impact marginalized groups, often women, transgender people, people who move around because of housing insecurity.

And then also I think one of the things I think about most is messaging saturation and the way that's used to exploit wedge issues. We've already seen this play out with the Joe Biden robocalls in January, where you had calls targeted toward New Hampshire voters telling them to save their votes for November. In the last several years there have been multiple instances of white nationalist groups using robocalls to either activate or intimidate voters. And there's limited research on the success of direct persuasion tactics using AI-generated content. But I think the thing that we should keep in mind is that when we're talking about how information flow operates and the role of AI, we're not just talking about isolated pieces of content, but the ability to flood or clog information systems and to manipulate how we make sense of physical data.

So I want to kick it over to Dani, but the last thing I want to say is that we need media and tech accountability work and research that both measures and acknowledges how new media affordances and algorithmic design amplify pre-existing social problems, and demonstrates why we cannot treat the symptoms as the underlying issues themselves. An uncritical approach to institutions and systems is even more profound when mainstream media, in attempts to make sense of information ecosystems, focus on how the tech platforms fuel mistrust of institutions and downplay why people are discontent with material realities. And with that, I want to turn it over to Dani to talk more on that.

Danielle Brown:

Yeah, thanks for that. I think it's really critical to bring up the accountability work, because it really must be acknowledged when we're thinking about new technological affordances. They seem like new problems, but we have a whole lot of old problems that we never fixed. And the threat, especially when it comes to minoritized and marginalized communities, is often that those old problems are amplified, and the scale of harm is intensified by whatever the new moral panic is. I am a researcher of media narrative, mostly in digital journalism spaces, and I'm interested in creating reparative narrative change through non-traditional mechanisms, because my experience is that trainings in newsrooms and with journalism programs really just haven't worked. We haven't fixed some of the old problems. I worked most obsessively in the space of exploring digital narratives when it comes to protests like Black Lives Matter. My comparative work described a world where journalists, even with their various levels of checks and balances and moderation and training and education, couldn't get it right.

In fact, this democracy-serving entity that we're all fighting to save, its norms and routines and culture have been really good at degrading protests of police brutality and otherwise, and at sensationalizing the work of democratic actors. This is true even after the murder of George Floyd in 2020 and the racial reckoning that followed. And I say all that because when I think about information integrity and the elections and the tech concerns that are being raised now, it's really important to remember the pre-existing problems we haven't fixed. And fixing those things really isn't as sexy to talk about as AI and other issues that we can still discuss with intelligence across political aisles, but it's critical that we really do think about those core problems. So my comments are really focusing on that space and the opportunities and challenges affecting information integrity in the election, in the less sexy realm.

I thought it was not that great either, but I also know part of that is because it seems like a reiterative and fundamental narrative. So we're thinking about the opportunities and challenges, and I think one major opportunity I see is that we really have a critical opportunity to invest in people, and in the empowerment of people: not artificial human knowledge, but real, tangible human knowledge. I think this is different from investing in tech orgs that are run by billionaires who make tech solutions for their tech problems. By really investing directly in people and researching what people need and how they network with each other, I think we have the opportunity to really look into existing networks of trust and trust infrastructures that people use every day. In my work, I've looked at those networks of trust and reliable information systems that exist in Black communities in the Midwest, and I found that trust exists in people, not institutions. And those people have to deliver the information, not apps.

Trusted messengers create critical information pathways for Black communities, and especially for members of those communities who have divested from the traditional infrastructures we often worry about, but our priorities often fail to center people power. I think that we have to invest in and develop trusted messengers and the infrastructures that they use, to provide touch points for enhancing information, preserving some of the places where people can find information that has integrity, and reviving a sense of civic responsibility in communities by doing that. I was thinking about people who have been panicking at the thought of an AI takeover, and how that fits with this principle. And it was funny, because just today an AI expert sent a totally Googleable question to our group chat to get information about a conference deadline. And this, again, totally Googleable deadline was taken to a group of trusted messengers to inquire about additional information and workarounds for deadline extensions, and ultimately what exists on the website is what will stand for the conference planning deadlines.

And there's just no way to say that humans don't hold this kind of information capital. So I think there's a real opportunity to invest in humans, but I also think that the major threat, for me, feels like it comes from conversations with community members where sentiments are reflecting major voter abstention, and they aren't whispers. And I think there's a connection going directly back to what Brandi was talking about, when you think about how AI and technology can amplify the amount of tricking of people they're able to do. But a lot of it, I think, is also connected with this weariness or tiredness with the way our society has worked in the past 10, 20, 30 years. Just the other day, Charlamagne tha God was on ABC doing a premiere interview, and they really hyped it up. It was even on my Alexa. It came on several times because my Alexa just shows the show. But Charlamagne tha God was talking about Black communities. He's super influential, especially with young Black Americans across spaces, not just in the Midwest where I study.

But he's talking about the election and says that the candidates are either uninspiring or not in line with democratic principles. And he says that people are going to vote for the couch, and he's in many ways shifting people in that way. And we don't have good, effective counter-narratives to push back against this notion that people should abstain and not participate in democracy, nothing as effective as the narrative he was providing. So I think that there is harm in some of the narratives that are being pushed out. In this space I think there are also harms pushed out by government representatives and news organizations that represent Black people, where honestly the legacy of harm can't be erased by some kind of apology that people have put out there.

And that people are just tired of investing in a failed democracy that disproportionately doesn't appreciate Black people. When you are abused over and over again, eventually you do stop engaging with your abuser. And so abstention makes sense, but I think that this is really one of the big challenges we have going forward.

Spencer Overton:

Dani, I want to take a few moments to build on your insights on supporting democratic engagement, and also Brandi's comments about AI. So I teach and research voting rights at GW Law School. I just finished up a law review article on overcoming racial harms to democracy from AI. From the founding of our nation, our laws heavily favored European immigration, intentionally gerrymandering white democratic majorities that persist to this day. Now, thanks to a 1965 law that repealed those earlier restrictions, there will be no majority ethnic group in the United States by 2050. Another 1965 law, the Voting Rights Act, effectively dismantled a whole host of state and local laws that prevented people of color from voting. So since 1965, we've been tracking toward a multiracial democracy, but as our country has become more diverse, our politics have become more racially polarized. Race is the single most significant demographic factor in shaping voting patterns, more so than class, education, gender, or sexual orientation.

Also, among about a third of whites in the country, data show increases in cultural anxiety, status threat, and anti-democratic attitudes. I provide that context to say that while I'm certainly concerned about the latest deepfake of Joe Biden, I'm really concerned about these bigger trends and how they will be affected by AI. Will generative AI, for example, be used to facilitate a transition to a well-functioning multiracial democracy, or be used to thwart multiracial democracy? So one concern is intentional attempts by political operatives to entrench power through tactics like deceptive synthetic media that discourages people from voting. So basically what we saw in 2016 in terms of the Facebook pages, but with deepfake audio and video, basically that on steroids. Or maybe we'll see targeted attacks on county election offices that serve large populations of color.

When we talk about natural language processing and these technologies being available to people who don't know how to code, now they can utilize and develop code and deploy cyberattacks, or they can create open records requests that seem like they're coming from a variety of folks and target those at counties that are majority people of color, in places like Atlanta, Detroit, and Charlotte, and really swing a statewide election or even possibly a presidential election as a result.

Law enforcement is using AI tools to analyze mobile phone data and social media to surveil protesters, like Black Lives Matter protesters or Dreamers, and really chill folks' participation and speech. So there are some challenges there. Another concern goes to the design of foundation models, which are optimized to training data in ways that could entrench dominant populations. Since English dominates large language models, for example, AI tools for content moderation, news aggregation, mail-in ballot signature verification, translation, and voice assistance in voting may be less effective for voters who speak Spanish, Asian, and Native American languages, and some English dialects spoken by many people of color, like some Black folks here.

And I'm not just talking about bias. Basically, foundation models are designed to focus on averages or the most significant, dominant patterns. They're not designed to facilitate diversity and pluralism, and we want pluralism in terms of democracy. So there's a design function that Brandi flagged that's not just about, oh, we've got to improve the data or get good data. There is a design issue in terms of how foundation models work in and of themselves that we've got to deal with. So Katie, let me turn it over to you.

Kathryn Peters:

This is going to be fun, to try to tie some of this together and add one more angle or one more consideration. I think, Spencer, you just really found the core of this panel question, which is, when we're talking about these new technologies as they get deployed, will they help uphold a functioning multiracial democracy, or will they continue to thwart that promise as it's been pushed back and dissolved time and again? You described how these models really work for a primary use case, a normative use case, a middle. In technology and optimization we often describe all of those other cases as edge cases, and you intentionally de-prioritize them, you leave them out. It's really important to get the core working really well, sometimes at the cost of never bothering to catch up to those. And the problem is that democracy is a collection of edge cases, all of it.

And if you're going to do it well and serve people well, you have to be ready to serve those edge cases. And I think for the reasons that you spelled out, and that Brandi I think led with beautifully, at least in the short term these new technologies, these models, are going to disproportionately favor those using them for ill. They have bias baked in. There isn't accountability in place. Bad actors more often have the motivation and capacity for learning them and deploying them. They are creative and will find ways to use them, but the tools themselves are currently not set up to really uphold democracy in the way that we would want them to. I do, like Dani, take my hope then in the non-billionaires and the unsexy. Trusted messengers are absolutely where this is, and we've seen some really significant success cases in community organizations acting as trusted messengers in communities of color. The work of the Disinformation Defense League is one that I would shout out. Dani, your own work really shows some of how these narrative changes can come to play.

And if you'd like to talk about that more, I'd love to have that come up later in our conversation. What we need in addition to that work is to acknowledge that while all of these challenges and all of these disinformation narratives disproportionately harm communities of color, they cannot be the problem of communities of color to solve alone. And historically, we've often looked at these "don't turn out," "Democrats vote on Wednesdays," "don't trust government," "don't participate" voice-dampening messages as one kind of problem. And we've engaged with those, and we've left it to the communities harmed by them to solve for themselves, which in some ways has to happen, as we talked about with trusted messengers. But there's a whole other racialized disinformation narrative around elections that we don't see as racialized as often, because it targets white communities. And that is this heady, toxic brew of election theft combined with great replacement theory, and it's where we see that demographically, the people most likely to participate in the January 6th insurrection tended to come from counties that were seeing the greatest demographic change.

Spencer, you already mentioned white threat, I think was how you put it, this pushback to it. That's potent, and that's also causing significant harm in dampening the voices of communities of color, and it's disproportionately driving threats to election officials in, as you mentioned, Detroit and Atlanta and Charlotte, and there aren't the same trusted messenger networks pushing back on it yet. So the final piece that I'd love to bring into the conversation is that we also need to bring this investment in people and in trusted messengers, messengers capable of countering these tech-empowered, maybe AI-tailored, but certainly not actually novel, not tech-generated narratives, within white communities. And demographically, based on who's been accepting these narratives and who's been committing acts of post-election violence, I actually nominate service clubs. The Rotarians, the Lions, the Kiwanis are demographically perfect and well-situated to get to know how elections really work, speak up on behalf of election integrity in their communities, and take up that mantle as trusted messengers.

And I've been working on developing a project to have them do so. I would love to bring it back to a bigger conversation, but I think it's really important both to understand those tech trends and think about the long-term fixes, and Brandi, I love some of the policy directions you're thinking about for how we reclaim oversight and agency over these tools, as well as these short-term ways of addressing it through this election, while also thinking about that need for permanent people infrastructure, not thinking about trusted messengers only as a stopgap but as another entire piece of our democratic life that we sometimes neglect when we start talking about the role of technology, but that will always play a really critical role. And we need to find ways for the tech to be supportive of, rather than a replacement for, those networks and those trusted communities.

Spencer Overton:

You talked about white supremacists mainstreaming and normalizing concepts like replacement theory, and really throughout history, whether it's film in terms of The Birth of a Nation, or radio and the Nazis making sure everybody had a radio and using that technology, or computer bulletin boards in the '80s, a lot of white supremacists have been early adopters of technology and used it to kind of normalize their ideology. Now obviously tech is used for good, in terms of the civil rights movement and being on the evening news. So I'm not saying that all tech is bad, but I think we've got to appreciate the connection between white supremacy, early adoption, and tech. I just responded to one of Katie's points, but are there any responses that you all have to one another, in terms of ideas that popped into your mind as a result of hearing one another?

Brandi Collins-Dexter:

I think I have a response/question, which is why I enjoy talking to you guys. I think on that last bit, I was definitely thinking about nativism and misogyny within a lot of different communities. I feel like we always anchor this in conversations around white nationalism, which is important for myriad reasons, but I think even the ways that wedge issues are playing out within different communities, and how we see that manifesting in different elections, is something that I don't fully know we have an answer for. But when I was listening to Dani and this prompt about Charlamagne, it reminded me of my book, Black Skinhead, and one of the examples that I talk about in there is this moment in early 2020 when Biden, then a presidential candidate, goes on Charlamagne's show, and Charlamagne is asking him about what he's going to do to court Black voters.

And Biden makes this comment, "If you don't know whether you're going to vote for the Democratic Party or me, then you're not Black." And it became this kind of moment that spoke to, or that opened up, a lot of conversations, frankly, with Black voters, who are a 90% voting bloc for the Democratic Party and have been for some time, feeling like the Democratic Party hasn't necessarily offered anything to them. And that was kind of a really important vehicle for that. And I think the numbers looked like Black voters, especially younger ones, weren't necessarily going to turn up at the elections because of voter apathy. And then George Floyd happens, or there's a series of moments that happen that get people to break quarantine, go into the street and organize, and at the same time voter registration goes up among younger voters, because it becomes clear in that moment the kind of line in the sand between what the different candidates of each party are offering.

And I remember thinking in that moment, this might be the last global mass organizing moment, but I was wrong. I feel like we've had some since then, around Palestine amongst other things. But I'm curious, Dani, for you, what do you see as the present and future of political organizing, given some of the shifts that we've seen with Twitter, now X, and some of these spaces that were so critical to a certain type of organizing maybe a decade ago? What does that look like now, and what does it mean to build trust on the local level, to you, as we look at the tech moment that we're in?

Danielle Brown:

I see a lot of people shifting their really important conversations to private chats, to private networked areas, to WhatsApp in some communities, but also just to a Messenger group or to Signal, or these spaces that feel safer because they can't be infiltrated by hashtag browsing, that feel safer because some of them just technically are. But what's happening in many of these groups, and I'm even seeing it in my own, I mean, I have a presence in many group chats I can't keep up with, is that the leaders of those groups, because there are natural leaders even in a group chat, bring information to different groups for us to talk about.

I am bringing you information from my group chat. And the peer-to-peer technology that helps make this possible, the Zoom that's making this podcast possible, it can't be underscored enough that it allows us to communicate. But the human piece of it, the peer-to-peer connection that puts us in discussion with each other, that gets opinions to move from one space to another, or experiences to move from one space to another, I think is going to happen through these smaller chats. And it'll happen quickly, because we have to stay connected to our phones and our technology for literally everything we do.

I tried like hell to de-automate my calendar and put it on a written calendar, and it turns out that's not actually something that you can do. We are attached to technology. It may not be as fluid, we may not have as open an availability of information to everyone as regular citizens, but I think that the information flows we have will just move through these different channels, in the private chats that we have, and that's how we'll start to organize. I think that's very much how a lot of the pro-Palestine conversations had to happen to feel safe in the United States at the ground level. I think it was how people learned how to exercise their questions in spaces that felt safe in the presence of, quote unquote, call-out culture.

I think that it has provided opportunities for people to build a sense of trust. And bringing this back to the election piece, what I think is most important, that we're not thinking about, is how we really resource those trusted voices and empower them to do a good job at being that: you hold this really powerful position that isn't really recognized in any formal sense, but is really powerful within your community. What will you do with it? I think that we have a really big opportunity to both show them what kinds of narratives are available for them to choose from, and also empower people to think about other information they might provide so that people don't have to think like them. I don't think most of the trusted messengers I talk to want people to think exactly like them, and maybe it's just helping them understand how they can create more discourse in society.

Kathryn Peters:

I'd love to just underline that as being such a powerful thing to say, because I think a huge number of conversations are moving from bigger, more public platforms into more private ones. And so having people with the tools to speak up is both a way to promote trusted information and one of the primary ways we'll be countering harmful information in the cycle ahead. There won't be as much ability for third parties to monitor it as past academic studies have been able to do. It's not going to be happening in places as public, and some of the more organized forms of pushback might be hurt by that. And so this notion of really empowering messengers to be able to find their own voices is just, thank you.

Brandi Collins-Dexter:

Spencer, I'd love for you to talk a little bit more about this moment of panic around AI, because I feel like I see a couple of different schools of thought. One is, AI's been around for a while, it's been here, it looks a lot of different ways, this is all much ado about nothing. And then there's the kind of, this looks different because of X, Y, Z. I think for me, I feel that mirrored even in discussions around surveillance and the power of technology. When I think about the history of marginalized groups, often our power is an ability to organize out loud.

Many of us have not had some of the privacy rights that some people have seen as an expectation, and moreover, we've always had to organize in a culture of surveillance in all of these different ways, whether it's surveillance in our churches or schools or surveillance online. So I think there's a lot of folks, especially people I'm talking to like my mom or whoever, asking, what is it about this that feels like it's different or supposed to be scarier than what we deal with on a day-to-day basis? I'm curious if you feel like that, or what's your take on it?

Spencer Overton:

What's interesting is that, on one hand, this deepfake piece that people can see is stunning to them and is amazing in this particular moment. But I really agree with you, whether we're talking about surveillance of protesters and the use of mobile phone data and social media monitoring to do that, or, and I'm particularly concerned about this, psychometric manipulation. So we basically say we're going to collect this data and not just send you an ad that is targeted to 10,000 people, but one that's targeted to Brandi Collins-Dexter, because we know you and we know what you're interested in and we know what will move you.

We've had, whether it's Indian boarding schools or making Black women straighten their hair to keep a job or making some kids speak English at lunch at school, right? Assimilation has been a big part of this country's past, and it's completely counter to liberal principles of pluralism and autonomy and freedom. And to a certain extent, many of these technologies accelerate this cultural conquest. So in other words, even though we're becoming increasingly people of color, et cetera, et cetera, this fortifies, calcifies the past to a certain degree and does not reflect the needs and desires of an emerging America. It keeps us in the past. So I do think that to a certain extent we're automating homogenization here with many of these tools, and to me that is something that is different and we've got to acknowledge it.

Kathryn Peters:

Automating homogenization. I like that a lot. That captures something really interesting. A question for the whole group. We've been talking about tech platforms and social media as one set of actors here and we've been talking about community networks and trusted messengers. It feels like there's a third pillar around traditional media as well. How do you see them playing into the potential moral panic around AI, potentially being trusted messengers or amplifying them? What set of actions or changes do you think are relevant to consider their role in all of this right now?

Danielle Brown:

Certainly there are conversations about AI in journalism and traditional media that help get some of the conversations that we have in our spaces out there. I think they've been critical for at least putting it on people's radars. But quite honestly, I don't have the stat right in front of me, but it's not on a lot of people's radars. And some of the things Brandi talked about, though, people have experienced. These robocalls, right? People have been experiencing them, and they have gotten much smarter. Now they're even texting, with pictures: "Dani, your purse is here." And I'm like, "That's not my purse. This is so good of you."

What a tricky way to trick me into talking to you, because they call you out by name, they figure out your name through your public data, and they give you some information, any information, that will get you to text back or talk back. Journalism right now is paying attention to those cases of people who have been violated by some of these attempts to take advantage of people through these kinds of mechanisms and methods. And quite honestly, some of them seem absolutely ridiculous. People bringing their tax money to the IRS in a shoebox, I think, was one that was on Dateline recently. And there's a sort of panic that they're enabling by pushing out these people's experiences of being tricked by this particular kind of technology, but also a lack of the logic that would let you avoid some of the trickery, which is that AI is inevitably a robot.

It's a machine. It is not a human. It does not walk things back and put things forward. Or, it can gaslight you, but not as well as my ex-boyfriends, right? The mainstream media is really good at giving one side and another, but not that blurry place in between. You can educate your way out of being tricked by the texting robo-people who will tell you you can't vote. And again, I think that's where we need to invest: in the people who are willing to educate you out of giving your shoebox of money to a person in a black car.

Brandi Collins-Dexter:

Yeah, that's funny. I'm still thinking about the AI, but I think for me, a lot of my work, where I live, is at the intersection between media and tech, and who owns the infrastructure, the ideological and narrative and information infrastructure, to tell your story and how. And so when I think about the moment we're in right now, even thinking about Vice recently laying off hundreds of workers, I think in the month of January alone 400 or 500 journalists were laid off. And part of what we're seeing is that private equity and media consolidation are actually decimating a lot of media environments. And this has been true for community media for a while, particularly media that speaks to marginalized communities; Black papers have been decimated for a while. I live in Baltimore, and we're down to being outraged because the Baltimore Sun just got bought by Sinclair Broadcasting, and the Baltimore Sun has been trash for a smooth 100 years.

That's the thing. It's like, when people don't have trusted local information networks to go to, then what do they turn to, and who do they trust, and how do you know you can trust who you think are your people online when you introduce tech and AI and all of these things? And so I think for me, I find that these conversations around media, tech, and community engagement are so radically separate, in ways that don't make sense when we talk about how we are actually fighting for power and for the ability to move ideas and shift hearts and minds. And I think we have to be thinking about these things more in relationship.

Spencer Overton:

I'm going to say something that probably echoes something Dani or Brandi have written in the past, but just this notion of local media certainly being important, but sometimes coverage that may be skewed being parroted by large language models. So if we see, for example, coverage of Black Lives Matter that, as opposed to covering why all these people are out here, in terms of the injustice of the police, focuses on a couple of instances of violence or confrontation with the police, basically those kinds of conflicts being what traditional media is attracted to and what ends up on the web, and that being reflected in large language models and chatbots that are answering questions about Black folks. So obviously we need a vibrant ecosystem here, but there can sometimes be a feedback loop in terms of skewed reporting, like the Color of Change report on how Black folks are overrepresented in coverage of poor folks or folks on government benefits, et cetera, and an LLM kind of picking that up in terms of outputs here.

I do want to flag policy here. This podcast is the Tech Policy Press podcast. They focus on policy, and obviously we want to understand the problem, because we can't figure out how to move forward without understanding the problem. And too often these problems are ignored or pushed to the side. But we also don't want to just be in a spot where somebody might think that these are all overwhelming, we can't do anything about them, let's just go focus on something that we can fix. Are there some policy thoughts that any of you all have, maybe starting with Brandi or Katie, in terms of policies to address some of these issues?

Brandi Collins-Dexter:

Policy. I feel like I'm in two frames of mind about this, because on one hand, I was speaking to someone the other day who offered this provocation that, because of the way that tech companies and tech move super fast, we sometimes feel pressure to pass policy that can match that. And the reality is, when you move fast on policy, putting to the side whether we even have a government that is capable of moving fast, which is a real question, does the answer not necessarily do what we need it to do to protect us down the line? And are we benefited more from taking a couple of steps back on policy and not always feeling like we have to keep pace with the pressures of tech companies? I think that's an interesting provocation. And at the same time, I do think that there's a way that we can see certainly regulatory agencies step up. The Federal Election Commission, the FEC, has been woefully behind on political ads and AI for a while now, and I think that there are ways that they can step up.

I saw the FCC just moved to ban AI robocalls in direct response to what happened with the Biden calls in New Hampshire, which I think is interesting. But I do wonder sometimes when I see folks in Congress debating these issues, one, do they even know what they're debating or the full scale of the issue? And then what are the sort of private interests behind the scenes that may be driving certain decision making? So for me, I think there's some merit in starting with innovative policymaking at the local level and kind of scaling that up, even if that moves slower. But I think there are some interesting ways in which we could measure impact and what's happening in local ecosystems, and think about what is the role of AI or tech regulation, and even conversations around tech and media reparations that could lend themselves to more innovative visioning. But I'd love to hear what other folks think about this.

Kathryn Peters:

I would love to underline that. I think starting local, or starting small scale and then moving up, is a really strong direction. I think a lot of where there's policy hope might actually look, in the short term, more like organizing than rule-setting. Finding ways to exercise community oversight of tech platforms and how they're deployed is one important thread that we've talked about here. Auditing, understanding adoption, all of those. I think when we're talking about trusted messengers, it's not going to be government policy in the sense that we think of as federal rulemaking or agencies. I do think it's going to be local and varied, and it's going to come from possibly local government, possibly these other institutions.

This is where a group like the Rotarians forming a different policy, or really empowering a group like the League of Women Voters, which has been engaged on these questions, what they set as their policies and norms, might actually have a huge impact here in how these things play out, rather than just thinking about government as the only actor we can work through and influence. I'm excited by having more groups and more communities understand their own agency in this, and claim it and begin exercising it, rather than waiting exclusively to go through government and exclusively to go through Washington.

Spencer Overton:

So I agree with you, but I think that we can't take responsibility away from some of these companies to anticipate some of these racial harms, not putting their heads in the sand here and pretending that they're blind. There have been so many times in the past where they haven't anticipated racial harms and terrible things have happened, so they don't get a pass by missing things; they need to do that. Some of that is pre-deployment testing. Some of that is ongoing assessment in terms of what's out there. Some of that is building up a civil rights infrastructure internally, and some real expertise internally. A lot of these new companies, whether it's OpenAI or Anthropic, were founded not in a garage, but with millions of dollars. There's no reason they shouldn't have a civil rights infrastructure. And these big legacy tech companies that are getting into AI, they have a civil rights infrastructure, but it's not clear that it's being used for these new lines of business.

It's like, go deal with that Instagram and Facebook and this old stuff; we'll handle our open source AI product up here, using the infrastructure that's there. There's also this question about how we grapple with algorithmic bias. A lot of our current laws don't really, whether because they have an intent requirement or otherwise, adequately deal with algorithmic bias and the pluralism issues we talk about, or with mitigating racial disinformation and manipulation. I know there are some First Amendment issues, but there are no First Amendment issues with prohibiting lies about the time, place, and manner of elections. We can do that clearly here. I know watermarking isn't everything. Provenance standards are not everything. But they are something we could do.

I also think, and I didn't appreciate this, y'all are much more sophisticated than I am, data privacy is so tied to race and manipulation here. And so having real data privacy protections that are equitable and tailored to deal with some of these things that affect communities of color, I think, is incredibly key. And then just finally, some real accountability, whether it's whistleblower protections for folks who speak out or some private rights of action. So let's not just rely on government; when we do recognize a legal claim, let people be able to bring some lawsuits. There are a variety of things that are out there. Dani, let me turn it over to you.

Danielle Brown:

I love what you said, and I just wanted to add that there is no local organizing that can really counter the fact that people have access to all of your data and know what you're doing. There have to be data privacy laws and policies in place that protect and retroactively protect communities. And I think that when we think about things like algorithmic bias, part of it is what data do they have available? And quite honestly, people aren't capable of defending themselves in lots of spaces anymore. You can't even have your passport without having a cover on it now. And just the ability to have your data extracted is not something that's taught.

It's not part of our education system; it doesn't teach people how they're going to get an ad tailored exactly to them. And without that, it's going to be really hard to educate or empower anyone to do anything if everything else they do in their daily lives is targeted toward a different goal. So I think that the policies that are created have to quit thinking about capitalism and start thinking about the democratic thing we're trying to protect.

Spencer Overton:

All right. Dr. Brown.

Danielle Brown:

What a note to wrap with.

Spencer Overton:

Yep. I think we wrap it here with Professor Brown. And just thanks so much to the Tech Policy Press podcast for accommodating us and allowing us to have this conversation.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
