Interrogating Mainstream Reporting on Artificial Intelligence

Hanna Barakat / Aug 12, 2024

Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0

Academics, industry leaders, research institutes, companies, and civil society projects have all outlined how mainstream news perpetuates reductive narratives about artificial intelligence (AI). Headlines suggesting that “AI will replace creative talent,” “AI will cause mass job loss,” “AI is capable of thinking for itself,” and “AI will diagnose diseases” give rise to an array of polarizing narratives that simultaneously overhype the advantages of AI technologies and dramatize speculative risks, all while obscuring the myriad harms present in the AI technologies already in use. These narratives are crafted through a range of mechanisms, one of which is the explicit choice of which voices are cited.

Citations are not neutral. Who is quoted, cited, and featured in an article can reinforce existing hierarchies by amplifying some voices while marginalizing others. In the context of AI, a lack of diverse perspectives among sources, references, and featured figures creates a feedback loop, giving rise to dominant news narratives that directly inform the public’s understanding of what AI is, how it operates, and whose interests it will serve. This, in turn, shapes public perception, policymakers’ priorities, and the range of available policy options.

Case study: The New York Times

As large commercial technology organizations like Microsoft, Google, and OpenAI consolidate the resources and talent necessary to develop and maintain generative AI systems, the New Protagonist Network (NPN) is exploring how the industry's influence extends to mainstream media reporting on AI.

Focusing on the New York Times (NYT) as a case study, we investigated which voices are cited in its reporting on AI and, more specifically, whose voices are left out, misconstrued, or overlooked. We conducted a mixed-methods content analysis on a sample of articles published from January through March 2024 to analyze (1) the breakdown of all the people mentioned and quoted throughout the articles and (2) how non-industry voices were framed in the discourse.

We find that the NYT’s reporting is disproportionately influenced by the perspectives of individuals within the commercial technology industry. Of those quoted, 67% work in the commercial tech industry; the remaining 33% of sources come from all other sectors combined (government, academia, civil society, etc.). Similarly, 61% of the people mentioned work in the commercial tech industry, while only 3.5% come from civil society organizations. Beyond these statistics, a qualitative analysis of the quotes from voices outside the commercial tech industry revealed four narrative patterns.
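For readers curious about the quantitative step, below is a minimal Python sketch of the kind of tally behind these percentages. It assumes the coded dataset is a CSV with hypothetical columns named “name,” “role” (quoted or mentioned), and “sector”; the NPN’s actual coding scheme and file layout are not published in this piece.

```python
import csv
from collections import Counter

def sector_breakdown(path: str, role: str) -> dict[str, float]:
    """Return each sector's percentage share of people coded with `role`."""
    counts: Counter[str] = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["role"] == role:  # e.g., "quoted" or "mentioned"
                counts[row["sector"]] += 1
    total = sum(counts.values())
    return {sector: 100 * n / total for sector, n in counts.items()}

# Example: share of quoted sources per sector, largest first.
# "nyt_sources.csv" is a hypothetical file name for the coded sample.
for sector, pct in sorted(sector_breakdown("nyt_sources.csv", "quoted").items(),
                          key=lambda kv: -kv[1]):
    print(f"{sector}: {pct:.1f}%")
```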

1. Defining ‘Expertise’

While the NYT often refers to its sources as ‘experts,’ there is a subtle distinction in how it applies the term. ‘Experts’ quoted on a specific technology’s capabilities were typically insiders from the commercial tech industry, while ‘experts’ from outside the industry (civil society organizations, academics, nonprofit leaders) were featured less frequently and were typically cast as skeptics or contrarians, regardless of the substance of their claims.

For instance, various articles referred to individuals from the commercial technology industry as “AI experts,” while another article referred to a University of California, Berkeley professor as an ‘outside expert.’ The qualifier ‘outside’ – and the absence of any equivalent qualifier for industry insiders – subtly reinforces assumptions about who counts as an expert. In one article, the author mentions civil society groups and academics:

As tech leaders capitalize on anti-China fervor in Washington, civil society groups and academics warn that debates over competition for tech leadership could hurt efforts to regulate potential harms, such as the risks that some AI tools could kill jobs, spread disinformation, and disrupt elections.

In this example, industry ‘insiders’ are named directly as ‘AI experts,’ while the civil society voices and academics – whose names are left out – are collapsed into a generic list of AI harms. This pattern reflects a broader issue in AI reporting: civil society organizations are reduced to their concerns about technology, overlooking the substance of their comments and recommendations.

2. Industry Voices Eclipse Non-Industry Voices

When voices from outside the commercial tech industry are quoted, the narrative often shifts back to center the commercial technology organizations, leaving little opportunity for thoughtful and informative journalism. The article “X’s Lawsuit Against Anti-Hate Research Group Is Dismissed” updates readers on the outcome of a lawsuit by X (formerly Twitter) against the Center for Countering Digital Hate (CCDH), an anti-hate research group sued for “violating X’s terms of use” while studying antisemitic content on the platform. Imran Ahmed, the chief executive of the CCDH, is quoted affirming the CCDH’s right to conduct research. However, instead of discussing the encroaching restrictions on digital human rights researchers (including restricted access to APIs at Reddit, the NYT, and Meta, and the downstream effects on investigations of online harm), the journalist pivots toward Elon Musk’s other lawsuits. These ‘small’ examples illustrate bigger patterns in how NYT coverage of AI centers commercial technology industry players (and their ongoing ‘storylines’).

3. Use of Vague Language and Terminology

The NYT appears to lack clear and consistent definitions of specific AI technologies, relying instead on loose descriptions and, often, hyperlinks to vague or outdated articles. Within the dataset of articles we analyzed, the term ‘chatbot(s)’ occurred 180 times, while ‘large language models’ and ‘natural language processing’ – the technologies that drive chatbots – occurred fewer than 20 times. In one article, the authors hyperlink the words “large language models” and “chatbots” to two articles (published in 2020 and 2021) that fail to offer clear definitions and instead foster a narrative of uncertainty. A few hyperlinks direct readers to the “Artificial Intelligence Glossary: Neural Networks and Other Terms Explained,” whose subtitle reads, “The concepts and jargon you need to understand ChatGPT.” Despite the effort to offer clear definitions, the language in this glossary once again centers the commercial tech industry (i.e., OpenAI’s commercial LLM application, ChatGPT). Obscure language preserves the power structures that benefit technology developers and their organizations: not only is the language unclear for audiences, it also acts as an effective mechanism for maintaining corporate power.
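As a rough illustration of the term-frequency comparison above, the following Python sketch counts occurrences of each term across a folder of article texts. The directory name and one-file-per-article layout are assumptions, as are the matching rules (case-insensitive, with simple plural handling); this is not the NPN’s actual pipeline.

```python
import re
from pathlib import Path

TERMS = ["chatbot", "large language model", "natural language processing"]

def term_counts(corpus_dir: str) -> dict[str, int]:
    """Count case-insensitive occurrences of each term across all articles."""
    counts = {term: 0 for term in TERMS}
    for path in Path(corpus_dir).glob("*.txt"):  # one plain-text file per article
        text = path.read_text(encoding="utf-8").lower()
        for term in TERMS:
            # \b enforces word boundaries; the optional "s?" catches plurals
            counts[term] += len(re.findall(rf"\b{re.escape(term)}s?\b", text))
    return counts

# Hypothetical corpus directory for the January-March 2024 sample.
print(term_counts("nyt_articles_2024_q1"))
```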

4. The Hero vs. Villain Trope

Finally, we found that the NYT’s reporting frames some actors as benevolent protagonists and others as villains. This dynamic plays out between Anthropic and OpenAI, the United States and China, and humans and machines. A core manifestation of the last of these appears in stories contemplating the replacement of humans by AI-driven technologies. In “Will Chatbots Teach Your Children?,” the journalist writes, “And some tech executives envision that, over time, bot teachers will be able to respond to and inspire individual students just like beloved human teachers.” The image of a “beloved human teacher” being replaced by a sci-fi-esque chatbot is vivid, and it belongs to a larger narrative that journalists routinely and creatively leverage. The hero vs. villain trope is built from smaller, recurring framing choices that grant expertise and trustworthiness to some actors and withhold them from others.

Conclusion

The focus on individuals working for commercial tech companies, especially as quoted sources and not just covered subjects, suggests that the NYT’s reporting on the development of AI is influenced by the priorities and motivations of the commercial technology industry. While this case study focused on the NYT specifically, the findings are likely indicative of a broader paradigm across mainstream news outlets that future research should explore.

The commercial technology industry is not only building AI systems but also shaping public perception of AI, a phenomenon researchers have studied extensively in recent years. This paradigm is due, in part, to explicit efforts by companies to shape the public perception of their products and their role in society. But it is also reinforced when media companies and journalists decide who is featured in reporting on emerging technology. This prompts two questions: what responsibility do journalists have to critically research and interrogate the development of AI as a technology? And how can journalists surface the assumptions and ideological underpinnings that industry and civil society – alike – perpetuate?

We invite you to contact the Computer Says Maybe team if you are interested in talking more about diversifying the sources that are cited in mainstream media, and improving the quality of research and expertise that makes its way into the public consciousness.

Authors

Hanna Barakat
Hanna Barakat is a Human-Computer Interaction and Science & Technology Studies researcher at Computer Says Maybe, a public interest firm focused on the politics of technology. Her research focuses on digital information networks and systemic bias against marginalized communities. She has previously ...