
As the AI Bubble Deflates, the Ethics of Hype Are in the Spotlight

Tania Duarte / Sep 20, 2024

Tania Duarte is founder of We and AI, a UK non-profit focusing on better AI literacy for social inclusion.

There are increasing signs that the AI hype bubble might be starting to burst, or at least to leak. A report from the US Census Bureau's Center for Economic Studies gives some indications that adoption, the diversity of use cases, and the impact on employment are falling below expectations, and research on business applications shows AI does not seem to be yielding the return on investment many executives had hoped for. Shares of AI stocks have declined significantly in the last three months as investors appear to have underestimated the operating costs of generative AI systems and markets correct their estimates of AI's potential productivity gains. Indeed, Gartner has predicted that 30% of generative AI projects will be abandoned after proof of concept by the end of 2025, and users are even questioning whether the performance of models is getting worse.

As Dr. Kerry McInerney, Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and a Research Fellow at the AI Now Institute puts it:

One of the major problems we’re seeing right now in the AI industry is the overpromising of what AI tools can actually do. There’s a huge amount of excitement around AI’s observational capacities, or the notion that AI can see things that are otherwise unobservable to the human eye due to these tools’ ability to discern trends from huge amounts of data. However, these observational capacities are not only overstated, but also often completely misleading. They lead to AI being attributed almost magical powers, whereas in reality a large number of AI products grossly underperform compared to what they’re promised to do. I think to some extent we’re starting to see greater reticence on the part of buyers who’ve been promised AI tools that simply fail to live up to expectations.

So, the true believers caught up in the promises and excitement are likely to be disappointed. But throughout the hype cycle, many notable figures, including practitioners and researchers, have challenged narratives about the unconstrained transformational potential of AI. Some have expressed alarm at the mechanisms, techniques, and behavior that allowed such unbridled fervor to override the healthy caution necessary ahead of the deployment of any emerging technology, especially one with the potential for such massive societal and environmental upheaval. The warnings have been there, and they relate not only to the economic impact of the AI hype bubble bursting, but also to the many other ways in which misrepresenting the technology's current potential can be harmful, affecting many different interests. These include significant ethical considerations related to hype that have not been properly weighed, even as organizations and public bodies purport to advance safe, responsible, trustworthy, and ethical AI.

What do we mean by AI hype?

At We and AI, a non-profit organization working to encourage, enable, and empower critical thinking about AI, our vision is for the kind of AI literacy that equips publics to think critically about when, how, and under what circumstances AI should be used. Our aim is to enable a greater number and diversity of people to contribute to decision-making about AI and its applications, in line with their interests and values. It has become clear to us that such critical thinking is dramatically impeded by exposure to inaccurate information, especially when it is delivered confidently and compellingly by AI executives and other influential figures. We therefore wished to draw more attention to the motivations and mechanisms that impede public literacy about AI, and took the opportunity to work with the journal AI and Ethics to put out a call for papers seeking research and perspectives that highlight the scope, scale, and consequences of the phenomenon commonly referred to as AI hype. The resulting twenty submissions have now been published in the latest quarterly edition of AI and Ethics, as a Collection on the Ethical Implications of AI Hype.

Our first challenge in editing the collection was to differentiate between different usages of the word “hype” in order to be precise about the specific phenomenon that concerns us. The word exists in noun and verb form, and can refer both to the volume and energy of media discourse and coverage of AI and to the practice of generating interest for promotional purposes. Our concern is primarily with the second usage, and most significantly with cases where the excitement and enthusiasm are characterized by the “extravagant exaggeration” that defines hyperbole, resulting in the misrepresentation and overinflation of AI capabilities and performance: “the staunch overestimation, exaggeration, and plain misrepresentation of AI found in news stories, press releases, and interviews that go beyond excitement into a realm of fantasy, manipulation, and speculation as fact.” Or, in short, misdirection and fallacy, which we consider inherently harmful to publics and society because they interfere with the capacity for self-determination.

Having worked in tech startups, I understand the importance of excitement and enthusiasm for sparking innovation, and the energy required to build new solutions. But a line is crossed when traditional marketing, advertising, and communications activities result, at best, in the development of harmful mental models of AI (such as those explored in the collection's articles on anthropomorphism) and, at worst, in the deliberate misrepresentation of data and technical results. Some of these practices may be evident in other industries, but what makes them more harmful in relation to AI technologies is the current lack of AI literacy, the huge power imbalances between technology companies and impacted communities, and the rapid releases that have outpaced regulation. These combine to create an environment in which people are particularly vulnerable to manipulation and misrepresentation.

And because some media coverage has been sensationalist about the extreme risks of AI, some articles explore how narratives of existential risk from AI can be seen as the other side of the hype-narrative coin. This is a phenomenon that Dr. Nirit Weiss-Blatt, one of the guest editors of the collection and author of the book The Techlash, has been exploring for some time. As she put it:

There’s a lot of hyperbolic terminology in AI discourse (e.g., God-like AI, Superintelligence). This AI hype distorts media coverage and public knowledge, resulting in misguided political decisions. We need a better understanding of what AI can and cannot do if we want the proper guardrails. In Prof. Milton Mueller’s words, 'If our threat model is unrealistic, our policy responses are certain to be wrong.' This heated debate requires more voices and analyses, which is why the AI and Ethics topical collection is so vital.

Building a research evidence base

This is why the manner in which discourse about AI is conducted – whether in the media or the workplace – is an urgent ethical question. We hope the collection marks a moment in which it becomes clear why more attention needs to be paid to patterns of discourse and communication.

We were thankful, but not surprised, to receive a wide range of manuscripts looking not just at the risks of underperformance and misfitting now being documented in studies of live generative AI use cases, but also at how attempts at "Responsible AI" are undermined by irresponsible positioning of AI. Original research articles and commentaries from researchers across a variety of disciplines explore the extent to which hype influences public interest, business performance, investment decisions, and political agendas. Some address the complexities of accurately discussing AI in critical fields such as cybersecurity, healthcare, autonomous weapons, and justice, as well as the culture and incentives that encourage the leap from exuberance or concern to actual deception. There are many lessons to be learned from examples of the actual harmful impacts of hype on people and planet, and from recommendations on how to navigate and decode the ethical challenges of communicating about AI. Given the multiplicity of approaches to the topic, we grouped the contributions into three (overlapping) themes.

The section on ‘the landscape of AI hype’ offers an overview of the current state and consequences of hype, with case studies on self-driving cars, law, and worker displacement; an exploration of how Cold War narratives influenced the trajectory of AI development in the past; and a personal retrospective from a computer scientist who has watched the discourse evolve over the decades. Finally, there is an examination of AI hyping as a social practice.

The second group of contributions, on the ‘mechanisms of AI hype,’ includes a breakdown of the types of sonic narratives in soundtracks that serve to evoke false ideas of AI capabilities. There are two perspectives on anthropomorphism: one examines it as an intrinsic form of hype and fallacy, while the other delves into the human tendency to ascribe human characteristics to machines and the remedies this requires. The terminology used to hype AI is explored through an interrogation of the term “frontier AI,” and arguments for existential risk are examined from a critical discourse perspective. The next article looks at the mechanisms that lead to planetary and socioeconomic costs by cloaking the consumption of finite resources and the perpetuation of social inequalities. Then there is a look at the political relevance of representing AI accurately in images, and finally an analysis of AI hype through a marketing and promotions lens.

The final group of contributions concerns the ‘manifestations of AI hype.’ These submissions uncover real-life consequences, such as how AI hype impacts the LGBTQ+ community, the cybersecurity risks businesses are exposed to through overconfidence in generative AI, and the impact on regulation of misrepresented technologies, specifically autonomous weapon systems. The success of predictive AI used by law enforcement and judicial investigators, and how it is presented to the public, is examined, as is the effect of three different types of AI hype in healthcare. The practice of encouraging the public to misattribute human-like traits to a humanoid robot via poetry is explored, and finally the use of the term “mind-reading” in neurotechnology is critiqued.

Committing to more responsible discourse about AI

We hope that the pointers on how to recognize, avoid, and challenge the misrepresentation of AI, together with the effort to uncover the causes, manifestations, techniques, consequences, and impacts of AI hype, will spur action on finding better ways to discuss the technology accurately and responsibly. In 2021, we set up the Better Images of AI collaboration to address the ways in which stock images of AI reinforce myths based on science fiction, hide human agency, and reinforce sexist and racist tropes. Our creation of a free image library of more accurate and inclusive pictures was underpinned by the work of researchers such as Dr. Kanta Dihal and has inspired some better ways to visualize AI. We hope that this new collection of research, which looks at other forms of discourse about AI, can have a similar impact and prompt everyone involved in discourse about AI to consider how they create, receive, and interpret information.

It is crucial that organizations and individuals communicate about AI accurately. While accurate communication is not often spoken about in responsible AI discourse, we intend this collection to shed light on its importance, and to make communication about AI as central to conversations about AI ethics, safety, and responsibility as topics such as bias or privacy, which are often overshadowed by hype narratives.

We also believe that widespread commitments to more functionally accurate and contextually appropriate framing and communications are urgently needed. We will discuss how to make this more of a priority at a research webinar on September 23, featuring presentations, interviews, and panels with the authors of the articles. Join us.
