To Mitigate Tech’s Worst Harms, Diversity is Key
Maya Kornberg, Gowri Ramachandran, Ruby Edlin / Nov 7, 2022
Gowri Ramachandran serves as senior counsel, Dr. Maya Kornberg leads research, and Ruby Edlin is an advocacy campaign coordinator in the Brennan Center’s Elections & Government program.
This essay is part of a series on Race, Ethnicity, Technology and Elections supported by the Jones Family Foundation. Views expressed here are those of the authors.
In recent years, big tech has played an undeniable role in facilitating the proliferation of hateful, antidemocratic rhetoric and threats to people’s safety. These problems can particularly harm traditionally marginalized groups, including women and people of color. Platforms run by Meta, Google, and Twitter have hosted viciously misogynist, racist harassment of election officials, for instance. Proponents of racist vote suppression as a means to control election results have organized themselves on the platforms.
Harmful activity on these platforms is a global problem, and it is arguably exacerbated when a product designed in one setting is exported to a different political and cultural context. But even here in the United States, the consequences have ranged far beyond particular groups. In part due to online harassment and disinformation, veteran election administrators, more than 80 percent of whom are women, are leaving a now-dangerous field in droves and taking with them years of technical competence that all voters relied on. A new vote suppression law in Texas blocked a disproportionately high share of Black voters’ ballots from being counted in the 2022 primary, but it also blocked a large share of white voters’ ballots.
No one claims these effects are consistent with the tech companies’ long-term goals. But they are, in part, a byproduct of their business models. Over the last two months, we spoke with five current and former insiders at these and other tech companies, from early-stage startups to publicly traded corporations, to ask how the problem of discriminatory harm was able to grow in the first place. Though the universe of causes is no doubt complex, one common theme emerged: big tech invests too little in building the diversity of its leadership and its research and development talent, particularly at the formative stage. That is despite data showing that greater gender and ethnic diversity helps companies outperform their competitors and increase growth and profit.
Our own experiment tracking election-related narratives on social media shows that incorporating diverse perspectives and competencies from a project’s inception will produce better insights and results. We hope that tech companies, with their far greater resources, will increase their investments in diversity to improve the effects of their products on society.
Competing Priorities
The tech leaders we spoke with agreed that women and Black and Latino/a/x people are underrepresented among employees, particularly at senior levels in engineering. The problem is acute among investors: 93% of venture capital dollars are controlled by white men.
Interviewees described how industry incentives, particularly in the early stages of companies, push diversity and inclusion considerations to the side, as investors focus on metrics of rapid growth such as revenue and users. As one former senior engineer we spoke with stated, “Even if the CEO and executive team are not bad themselves, they have the pressure from the board and/or investors. If a head of a division is somewhat problematic . . . they say, ‘oh, they’re good at what they do, and they’d be too hard to replace,’ so they just ignore it. And that sends a message to the company: if you can drive growth, we’ll tolerate having you break the rules.”
Another interviewee who has served as chief technology officer of companies at various phases of growth remarked on the particular demands of the start-up stage, saying “We’re just focused on hiring the people to please venture capitalists’ [need for growth] and don’t even have time to think about [diversity]… My guess is all venture capitalists say they want diversity and mean it, but they care more about hitting your numbers.”
But failing to overcome these challenges has consequences.
“How to make something people want? Make something you want.”
Paul Graham, co-founder, Y Combinator
When it comes to choosing a product or business model and considering the social impacts of those choices, the insiders we spoke to described how the lack of diversity enables the dominant group’s perspective to take on the power of conventional wisdom. Early stage employees and board members described how they were taught one version or another of Paul Graham’s mantra, “Make something you want.”
Given the pervasiveness of this chestnut, it’s not hard to see how even well-meaning founders or early designers would leap to the optimistic vision that allowing and even encouraging users to contact others indiscriminately—without protective barriers in place—would be good for all, not potentially harmful to vulnerable groups. One early employee at a now-global social media platform said this dynamic often begins at a product’s conception: “There was a mantra that we build products for ourselves on the assumption that it then scales to everyone else.... [W]hen you only have a white male-rich team, if you build a product for yourself that is not the demographic of the rest of the world, and especially when a platform takes [off] so quickly, you can’t fix those issues or remodel it completely.” The employee said that women get harassed on this platform “all the time” and that “no one really thought about these things earlier.”
Busy executives and staff might not feel they have time to consider the potential negative effects of the business models they are implementing. But a model that relies on increasing the time and attention users devote to the platform motivates, at least in the short term, product designs that reward the sharing of provocative content. Likewise, it disincentivizes the development of features and processes that would help users engage in self-protective measures. This drive to increase engagement in the aggregate can lead to negative experiences of harassment and abuse, not to mention long-term social impacts as a platform grows in its reach: Users who are particularly targeted for attacks, such as women and people of color, can find themselves pushed out of what has now become an important mode of discourse. And actors like local news media that might have provided some check on this behavior – for instance by refusing to print abusive letters to the editor – can be crowded out.
The Challenges in Overcoming Inequity
But putting off efforts to understand and consider diverse perspectives and interests poses its own challenges. The largest tech companies eventually acknowledged their interest in “promot[ing] a welcoming and safe environment for everyone.” We heard from one insider how considering these interests later in the life of a company, in the middle of a public relations nightmare, is particularly challenging.
They noted how public attention to these concerns may ironically limit the solutions companies fully consider, explaining that “a lot of the research around this is so sensitive, so it has to be under lock and key.” Greater attention to impacts on diverse communities at the early stage could ameliorate the public relations problems that large tech companies are facing in their later stages. In part due to these PR concerns, they noted that even initiatives focused on understanding the platform’s impacts on marginalized communities “have to happen without full transparency because we’re worried about it being leaked and then spun in the wrong way.”
Even at a very wealthy company with many employees, a small team researching social impacts lacked easy access to the range of language skills and other competencies that would help it make well-informed research decisions.
The Midterm Monitor: Our Experience Prioritizing Diverse Perspectives
It doesn’t have to be this way. Our own research project, the Midterm Monitor, reveals that it is both possible and valuable to employ diverse perspectives and expertise from the start in order to better understand behavior on these platforms. The Midterm Monitor, a joint project of the Brennan Center and the Alliance for Securing Democracy at the German Marshall Fund, analyzes social media posts by more than 500 individuals, including candidates for office and top pundits, and 160 national and local media entities to identify trends in election narratives, including misinformation.
We knew from others’ research that platforms have been less successful at reducing misinformation in Spanish than in English. We also knew that newly registered voters are most likely to be Latino, and that new voters need access to accurate election information. We therefore decided it was important to design our project from the start to capture nuances that might be particular to Spanish-language speakers and audiences on the platforms.
Spanish-language experts informed the very building blocks of our project, evaluating and refining the “dictionary” of 103 mis- and disinformation terms and phrases on which our text analysis model relies. We first created the list in English and then translated it into Spanish. These experts flagged crucial distinctions, such as the lack of an easy Spanish equivalent for the term “gerrymandering.” They also pointed out that certain terms have multiple possible Spanish translations. For example, “election integrity caucus” could be rendered as “caucus,” “grupo,” or “comité de integridad electoral.”
In other cases, words had different translations in different states. For example, in the Rio Grande Valley of Texas, the term “politiqueras” refers to campaign workers. This term is unique to that region, and exemplifies the diversity of language within Spanish-speaking communities in the United States.
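To make the idea concrete, here is a minimal sketch of how a bilingual keyword dictionary of this kind might be represented in a text-analysis pipeline. The Midterm Monitor’s actual dictionary and code are not described in this piece, so the terms, structure, and helper function below are illustrative assumptions rather than the project’s implementation.

```python
# Illustrative sketch only: the real Midterm Monitor dictionary and matching code
# are not public here. Terms, structure, and helper names are assumptions.

MISINFO_TERMS = {
    # Each English term maps to one or more Spanish variants, since more than one
    # translation can circulate for the same concept.
    "election integrity caucus": [
        "caucus de integridad electoral",
        "grupo de integridad electoral",
        "comité de integridad electoral",
    ],
    # Some terms lack a compact Spanish equivalent and need a descriptive phrase
    # (the phrase below is an illustrative choice, not the project's).
    "gerrymandering": [
        "manipulación de distritos electorales",
    ],
}

# Regional vocabulary matters too: a term common in one Spanish-speaking community
# may be unknown elsewhere, e.g. "politiqueras" (campaign workers) in the Rio
# Grande Valley of Texas.
REGIONAL_TERMS = {
    "politiqueras": ["Rio Grande Valley, TX"],
}

def match_terms(text: str, language: str) -> list[str]:
    """Return the dictionary terms that appear in a post, for the given language."""
    text = text.lower()
    hits = []
    for english_term, spanish_variants in MISINFO_TERMS.items():
        variants = [english_term] if language == "en" else spanish_variants
        hits.extend(v for v in variants if v in text)
    return hits
```

A structure like this makes the translation choices explicit and reviewable, which is one reason early input from Spanish-language experts pays off.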
Our experience with the Midterm Monitor dictionary points to some of the reasons why Spanish-language misinformation might be more challenging for online media platforms to track. But it also points to the potential payoff of early investment in diverse language and cultural expertise. We are a non-profit organization, and we lack venture-capital-scale resources to invest in extensive experimentation. But working as a diverse team, including colleagues with expertise in Spanish-language media, enabled us to see from the outset that we should try to capture effects on this important constituency in American democracy.
The Midterm Monitor analysis has already unearthed some initial findings that demonstrate the value of including diverse perspectives. We found that Twitter media accounts that communicate in both Spanish and English are almost twice as likely to use words associated with misinformation when communicating in Spanish as when communicating in English. We found a similar trend on Facebook, where media accounts posting in both languages were also more likely to use misinformation-associated words in Spanish than in English. These findings provide a preliminary indication of discrepancies that could inform more effective interventions by the platforms and policymakers.
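As a rough illustration of the kind of per-language comparison behind that finding, one could compute the share of each language’s posts that contain dictionary terms, restricted to accounts that post in both languages. The snippet below assumes a simplified post format and reuses the hypothetical match_terms helper sketched earlier; it is not the Midterm Monitor’s actual methodology.

```python
# Illustrative sketch, assuming a simple post format and the match_terms helper above.
from collections import defaultdict

def misinfo_rate_by_language(posts):
    """Share of posts containing at least one dictionary term, broken out by language.

    `posts` is assumed to be an iterable of dicts such as
    {"account": "...", "language": "en" or "es", "text": "..."}.
    """
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for post in posts:
        lang = post["language"]
        totals[lang] += 1
        # match_terms is the hypothetical helper from the dictionary sketch above.
        if match_terms(post["text"], lang):
            flagged[lang] += 1
    return {lang: flagged[lang] / totals[lang] for lang in totals}

# Limiting `posts` to accounts that publish in both languages keeps the comparison
# within the same outlets, so a ratio like rates["es"] / rates["en"] approximates
# the kind of disparity described above.
```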
Of course, our project’s goal is not to make a profit, but rather to study and eventually mitigate digital discourse that harms democracy. But to the extent that enabling harm to democracy is hurting the platforms’ image and bottom line, our experiment underlines the potential payoff for big tech if it invests more of its own far greater resources in understanding and responding to diverse concerns and needs.
The Midterm Monitor unearthed the spread of dangerous myths about voting procedures in the lead-up to the 2022 midterms, showing that fissures in our democracy have spread via social media. It is more urgent than ever for the tech sector to more fully reflect the American people.