
The AI Safety Community Exists, But Its Impact Is Uncertain

Kevin Frazier / May 22, 2024

United Kingdom Foreign Secretary James Cleverly speaks at the UK AI Safety Summit at Bletchley Park. Picture by Ben Dance / FCDO / CC BY 2.0

The Bipartisan Senate AI Working Group's recently released roadmap for artificial intelligence policy earned headlines for its focus on innovation. A comprehensive read of the document, however, makes clear that the Senators acknowledge that the development and deployment of AI present both risks and opportunities. In fact, the roadmap includes a section titled “Safeguarding Against AI Risks.” That section, along with more general concerns about unintended consequences from advances in AI, reveals the political relevance and influence of the AI Safety community.

For every technological leap forward, there’s a group warning of its unintended consequences. Some warned that the printing press would be a net negative for society because of the deluge of information it promised to unleash. The radio was likewise the subject of fearful conjecture; more than a few speculated that the "compelling excitement" shared via the airwaves was undermining children’s ability to focus on their homework. The introduction of TV similarly prompted worst-case predictions, such as the “further vulgarization of American culture.”

In some cases, these so-called “techno-skeptics” have rightfully earned the admiration of future generations for accurately forecasting unintended negative consequences. In other cases, such skeptics have been discredited. The latter commonly occurs when the public realizes that their predictions turned not on reasoned judgment but on self-interest in preserving the status quo.

AI is no exception to this historical pattern. Individuals and institutions within the “AI Safety” community have vigorously contested the notion that AI’s pros outweigh its cons. Whether they will be remembered as brave forecasters of a gloomier future, perpetually branded enemies of progress, or consigned to the historical funny pages alongside the Y2K alarmists is still up in the air.

Akin to those who identify as Progressive or Conservative, membership in the AI Safety community is not contingent on completing specific procedural steps--there are no card-carrying members (yet). Instead, whether someone belongs to this crowd depends on shared beliefs or, more accurately, shared concerns. That’s why precisely defining the AI Safety community is not possible--like a lump of Play-Doh, it takes the shape of whoever held it last. That said, it is important to map out the contours of the community given that its members have proven more than capable of shaping the public debate around AI.

The AI Safety community is distinct from those who are merely uncertain about AI’s likely effects. In contrast to the plurality of Americans who anticipate that AI will have some amorphous negative impact on society, members of this specific, albeit fluid, community have more detailed and dire concerns. In particular, AI Safety subscribers fear that the technology will bring about the end of humanity, or at least exacerbate existing social, political, or environmental woes.

Some important aspects of the AI Safety community distinguish this collection of techno-skeptics from their predecessors and may justify regulators, industry players, and the public writ large lending weight to their dark forecasts. Unlike techno-skeptic groups of the past, the AI Safety community includes members from around the world, from a range of disciplines, and with varying degrees of financial and professional interest in the spread of AI models. This smorgasbord of stakeholders cuts against some arguments that might undermine the legitimacy and authority of the community. It has not, however, stopped all attacks on the community’s motives and reasoning.

The international set of actors involved in the AI Safety community reflects, in part, the group’s emergence from pre-existing organizations concerned with long-term risks to society. Many members of the AI Safety community, for instance, found their way to that perspective by way of their participation in the Effective Altruism (EA) movement.

Effective Altruists, or EAs for short, emerged in the 2010s at Oxford University. What started as an effort to apply rationalist philosophies to philanthropy eventually morphed into a concern about any and all threats to the long-term viability of humankind. That concern explains their focus on AI. “[T]o varying degrees and on disparate timelines, nearly all of them,” according to Brendan Bordelon of Politico, “believe AI poses an existential threat to the human race.” EA chapters exist around the world, lending the AI Safety movement an international base of support.

Whether influenced by EAs, existential risk theorists, longtermism adherents, or a different source, skepticism of AI’s long-term effects has spread among policymakers in a litany of jurisdictions. Official actors in the United States, United Kingdom, EU, and, albeit to a lesser extent, Japan have expressed concerns that originated with the AI Safety community. This geographic diversity militates against categorization of the community as one seeking to further nationalistic aims.

Diversity of professional perspectives also characterizes the community. From computer scientists actively involved in the development of AI tools to law professors, the AI Safety community has become quite interdisciplinary. Scholars in other fields may not have explicitly labeled themselves members of the community but have raised similar doubts and called for similar regulatory approaches. Philosopher Luciano Floridi, for instance, leads an effort at Yale to analyze the weighty ethical issues implicated by AI’s advances. Floridi’s project includes about a dozen students who may propagate a skeptical or, at minimum, nuanced understanding of AI. Journalists have similarly coordinated to discuss the risks posed by AI, and some biologists have echoed sentiments voiced by more outspoken members of the AI Safety community. This breadth shields the community from being easily dismissed as an insular and exclusive bunch.

Individuals with varying levels of personal investment in the acceleration or deceleration of AI progress also find themselves within the bounds of the AI Safety community. In fact, several individuals actively working on the development and deployment of AI likely qualify for membership. The inclusion of those with the greatest understanding of the technology, as well as the most to lose should it be subject to onerous regulations, should give pause to anyone looking to dismiss the community as merely a fringe perspective.

The constitutive parts of the AI Safety community indicate that it will be around for some time. Though individual members may not share motives or backgrounds, their common concerns about one or many significant AI risks suggest they will be willing to heave stones through the sails of those championing AI. Whether those attacks succeed in slowing or redirecting AI depends on several outstanding questions, including the extent to which the community formalizes and focuses on a specific agenda.

Early efforts by those concerned about AI’s worst-case scenarios have so far fallen flat. A letter from industry leaders calling for a moratorium on the technology’s development drew headlines and retweets but ultimately did nothing to slow AI’s momentum. Likewise, regulatory efforts to address AI do not appear to have substantially altered the long-term ambitions of AI labs. As of early April, the Financial Times reported that both “OpenAI and Meta are on the brink of releasing new artificial intelligence models that they say will be capable of reasoning and planning,” which marks “critical steps towards achieving superhuman cognition in machines.”

It follows that the next few months will provide a meaningful test: is the AI Safety community just a collection of individuals and institutions with a broad set of somewhat vague concerns, or is the community a movement that can mobilize popular support for its aims? The answer will go a long way toward shaping the future of AI.

The costs of unchecked technological “progress” are expansive and increasingly evident. Those costs have become clear in recent years as evidence pours in regarding the negative effects of social media--effects that could have been stymied had a more focused and well-resourced community pushed back against Facebook et al. Whether AI “progress” is the subject of meaningful societal discourse now or years later is, in part, up to those who count themselves members of the AI Safety community today.

What’s certain, though, is that the AI Safety community is here to stay, and will produce a significant amount of research, media, and communications intended to make the public more aware of AI’s risks. As pointed out by Shazeda Ahmed et al., “The dispersal of this epistemic community’s members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI.” The ultimate effects of its influence remain to be seen.

Authors

Kevin Frazier
Kevin Frazier is an Assistant Professor at St. Thomas University College of Law, a Director of the Center for Law and AI Risk, and a 2024 Tarbell Fellow. He joined the STU community following a clerkship on the Montana Supreme Court. A graduate of the Harvard Kennedy School and UC Berkeley School of...
