
AI Charges Conversations at International Journalism Festival

Gabby Miller / Apr 26, 2024

Photograph by Alessandro Migliardi, IJF 2023.

Last week, thousands of journalists, scholars, and others descended on Perugia, Italy, for the annual International Journalism Festival. The nearly week-long affair, sponsored by the Google News Initiative and Microsoft, among others, featured more than 200 panels.

Of those panels, eleven focused on artificial intelligence (AI), including my own. I moderated a panel titled “The rise of AI: journalism after platforms,” which included Emily Bell, founding director of Columbia University’s Tow Center for Digital Journalism, where I previously worked. Other panelists included Anya Schiffrin, director of the technology and media specialization at Columbia University’s School of International and Public Affairs (SIPA), and Charis Papaevangelou, a postdoctoral researcher at the Institute for Information Law at the University of Amsterdam.

Our session examined the relationship between publishers and technology platforms, focusing on the shift away from social media’s global dominance over journalism and toward AI developers’ growing role in the news media. This dichotomy is useful for considering what marks these eras as distinct, and for interrogating how some of tech’s largest players have adapted, using their outsized market power to take the lead in the AI race.

This transition is also taking place against a new global regulatory backdrop that affects tech’s relationship with the news media. It includes comprehensive regulatory regimes, like the European Union’s AI Act, centered on how the technology itself is developed and deployed. These laws emerged out of two decades of relative lawlessness that led, in part, to scandals such as the Cambridge Analytica data privacy incident and the kinds of failures described in the Facebook Files. Whether and how some of these newer regulations will remain applicable in the ‘AI era’ was central to both our panel and one titled “Regulating AI in (and for) the news.”

The “regulating AI” session was moderated by Felix Simon, a doctoral student at the Oxford Internet Institute, and included panelists Emily Bell; Natali Helberger, professor of law and digital technology at the University of Amsterdam; and Ritu Kapur, co-founder and managing director of Quint Digital Media Ltd. The panel touched on whether AI technology should be regulated at all, and if so, who those regulators should be.

For Helberger, the unresolved legal, ethical, and safety issues around AI are leading many publishers who are otherwise interested in the technology to opt out of implementing it at all. “The fact is that a lot of media organizations are not making their own LLMs, but they're fine-tuning existing models or using other models,” Helberger said. She believes the role of regulation should be to make these tools trustworthy and reliable so that newsrooms are willing to adopt AI models for innovative purposes – a direct challenge to the widely held belief that regulation stifles innovation.

The EU AI Act, which passed in March and will take effect in the coming months, was designed to act as “baseline regulation” that makes sure AI models “respect the rights of others, so rights to privacy, rights of children, copyrights,” Helberger explained. The Act also imposes responsibility on the media sector to “go further and self-regulate,” she said. “It's really a cooperative responsibility to make sure that these tools are reliable and trustworthy.”

However, Kapur, a media entrepreneur from India, expressed concern about government regulation in countries whose governments are already leveraging AI to implement their own agendas. Kapur cited an example from this February, when Google’s LLM, Gemini, responded to a journalist’s prompt by stating that Prime Minister Narendra Modi had been “accused of implementing policies some experts have characterized as fascist.” The episode had lasting impact, with India’s IT Minister coming down with “sledgehammer regulation” that would require any AI technology to receive the explicit approval of the government – regulation Kapur finds worrisome. “For somebody outside of a newsroom, a larger entity dictating what the regulation should be for a newsroom, that can completely crush press freedom,” Kapur said.

While Bell said the US currently lacks any meaningful regulatory tools beyond antitrust law, which is slow and can only be used after harms have occurred, she hopes big tech companies will adjust themselves to European standards. In the absence of new legislation, though, Bell warned against the narrative that it’s too soon to regulate AI, which she characterized as a “gaslighting mechanism.” “I'm not advocating for bad law, but the number of times in America I've heard lobbyists paid for by the tech companies saying this will make bad law,” Bell said. “And it's sort of, to Natali's point, that actually law is really how you get really great, it's how we got, I suppose, the internet in the first place. All of that was public innovation, not private innovation.”

Panelists also explored what governance looks like beyond laws. This includes journalism acting as its own regulatory mechanism, according to Bell. A key part of that self-regulation includes “responsible procurement,” or choosing the tools that journalists work with carefully, Helberger explained. Newsrooms should consider whether a model has been trained responsibly using lawful data, how transparent a provider is, and what technical support (if any) looks like for its users, among other questions. “I think we need to be much more critical in the kind of models that we decide to work with, and not simply go with the loudest and biggest just because, you know, everybody does,” said Helberger.

Other regulatory frameworks discussed during the “rise of AI: journalism after platforms” session included the news media bargaining codes passed in Australia and Canada, the EU’s Digital Services Act and Digital Markets Act, and the lesser-discussed European Media Freedom Act (EMFA). Charis Papaevangelou drew attention to an EMFA provision concerning how designated Very Large Online Platforms, like Google, moderate editorial content. “They [European regulators] are framing editorial content as distinctively different from user-generated content, so they inherently inscribe a different value in how platforms should be treating content,” he said. “I think this sort of signals potentially a paradigmatic shift.” Schiffrin, building on his point, noted that “as quality information becomes more scarce, it's going to become more valuable. Basic supply and demand.”

The festival’s organizers were far from the only ones attuned to AI’s potential and possible pitfalls for the news industry. Beyond the well-attended panel discussions, whose queues often took nearly an hour to get through, informal conversations reverberated throughout the conference with a sense of urgency. New gatherings also emerged around IJF, like the Open Society AI in Journalism Futures 2024 project, which flew around forty participants to a monastery in Northern Italy for discussions on how AI and large language models could “fundamentally reshape the entire information ecosystem.” Their preliminary findings, which will be turned into a formal report in the coming months, were presented at IJF during a workshop on how the news industry might prepare for this “uncertain AI-mediated future.” The shape of the discourse at the festival made clear that although AI’s long-term impact remains uncertain, the news industry is serious about innovating with, adopting, and regulating these technologies now.

Authors

Gabby Miller
Gabby Miller is a staff writer at Tech Policy Press. She was previously a senior reporting fellow at the Tow Center for Digital Journalism, where she used investigative techniques to uncover the ways Big Tech companies invested in the news industry to advance their own policy interests.
