Using AI to Combat Mis/Disinformation – An Evolving Story

Anya Schiffrin / Aug 4, 2022

Anya Schiffrin is the director of the Technology, Media and Communications specialization at Columbia University’s School of International and Public Affairs. A team of her master’s students (including Hiba Beg, Juan Carlos Eyzaguirre, Zachey Kliger, Tianyu Mao, Aditi Rukhaiyar, Kristen Saldarini and Ojani Walthrust) assisted with research and writing of the report referenced below, published by the German Marshall Fund, for their Spring 2022 capstone project.

AI Startups and the Fight Against Mis/Disinformation Online: An Update

In recent years, artificial intelligence (AI) has been touted as a promising tool to help combat the waves of mis/disinformation running wild online. As evidenced by recent events – the Russian invasion of Ukraine, the COVID-19 pandemic – the spread of mistruths continues to undermine public trust around the world and threaten democracy.

For our new report published by the German Marshall Fund, AI Startups and the Fight against Mis/Disinformation: An Update, my master’s students and I conducted interviews with 20 AI start-ups. Several of these firms had already been surveyed in our 2019 publication, which considered a similar set of questions. We wanted to get an update on the role of AI in the fight against mis/disinformation, explore the latest innovations, and look at the evolution of the market for tech-based solutions, many of which use some form of AI and machine/deep learning for content moderation, media integrity and verification.

Our report focuses on supply-side action: using AI to curb the flow of misinformation. Demand-side efforts that address consumers or audiences, such as media literacy training or the evaluation and rating of the quality and trustworthiness of journalistic content, are also important for mitigating the impact of false information.

How do start-ups use AI to identify mis/disinformation?

First, let’s look at the different ways startups deploy AI to screen for mis/disinformation. Some of the companies we interviewed use Natural Language Processing (NLP) in one of two ways: they either train an algorithm to classify assertions as true or false by exposing it to a large corpus of assertions manually labeled as true or false, or, using the more widespread method, they match text assertions against the information contained in a sizable fact-check database.
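To make the second, more widespread approach concrete, here is a minimal sketch in Python of matching an incoming assertion against a small fact-check database using TF-IDF similarity. The fact_checks list, the match_assertion helper and the similarity threshold are hypothetical illustrations; the start-ups we interviewed work with far larger databases and more sophisticated semantic matching.

```python
# Minimal, illustrative sketch of fact-check matching (not any vendor's system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for a large database of previously fact-checked claims.
fact_checks = [
    "5G towers do not spread COVID-19.",
    "Vaccines do not contain microchips.",
    "Drinking bleach does not cure viral infections.",
]

def match_assertion(assertion: str, threshold: float = 0.3):
    """Return the closest fact-check and its similarity score, or None if no match."""
    vectorizer = TfidfVectorizer(stop_words="english")
    db_vectors = vectorizer.fit_transform(fact_checks)   # vectorize the database
    query_vector = vectorizer.transform([assertion])     # vectorize the new claim
    scores = cosine_similarity(query_vector, db_vectors).flatten()
    best = scores.argmax()
    return (fact_checks[best], float(scores[best])) if scores[best] >= threshold else None

print(match_assertion("A new post claims 5G towers are spreading COVID-19"))
```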

Another approach, adopted by companies like New York-based Blackbird.AI, relies on pattern recognition with machine/deep learning, the broader family of techniques that NLP also draws on. Rather than attempting to classify information as true or false, this method trains algorithms to learn behavioral patterns, identify actor networks and analyze traffic flows, flagging accounts that behave as if they rely on a high level of automation or might be bots.
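As a rough illustration of the pattern-recognition idea, the sketch below scores how machine-like an account’s posting rhythm is, on the assumption that highly automated accounts tend to post at unusually regular intervals. The automation_score function and its interpretation are hypothetical and simplified; this is not Blackbird.AI’s method, which combines many more behavioral and network signals.

```python
# Illustrative heuristic: regular posting intervals as one possible automation signal.
import numpy as np

def automation_score(post_timestamps: list[float]) -> float:
    """Score in [0, 1]; values near 1 mean the posting rhythm is very regular."""
    gaps = np.diff(np.sort(np.asarray(post_timestamps, dtype=float)))
    if gaps.size < 2 or gaps.mean() == 0:
        return 0.0
    # Coefficient of variation: humans post irregularly (high CV),
    # scheduled automation posts on a near-constant clock (low CV).
    cv = gaps.std() / gaps.mean()
    return float(1.0 / (1.0 + cv))

# An account posting every 60 seconds scores near 1.0; an organic-looking
# account with irregular gaps scores much lower.
print(automation_score([0, 60, 120, 180, 240]))     # ~1.0
print(automation_score([0, 340, 415, 2200, 9000]))  # noticeably lower
```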

More sophisticated technology is improving efforts to identify false information, but the myriad actors spreading it also have new tools, making it harder to detect. A well-developed disinformation industry now sells services like bots and deepfake videos in specialized dark corners of the Internet.

How has the market landscape evolved in the past three years?

The tech companies we spoke to did not disclose their revenues, but our interviews suggest that the market for AI solutions has remained smaller than most of these entrepreneurs had anticipated. This is in large part because tech giants like Facebook, Google/YouTube, and Twitter, which were expected to embrace technology to stop the spread of mistruths on their platforms, still have little incentive to work with third-party technology providers to combat false news – in fact, their business models continue to rely largely on a laissez-faire approach.

Generally, Facebook and Google prefer to develop their own internal solutions. While Facebook’s parent company, Meta, does outsource most of its content moderation to third parties, it largely builds its own moderation tools and is not working with most of the AI start-ups we spoke to.

In the absence of major tech firms on their client lists, money flows to AI start-ups have been limited by Silicon Valley standards. In fact, our interviews, and information collected by Crunchbase, suggest that only four startups in this area (Truepic, Zignal Labs, Blackbird, and Logically) have received more than $10 million in investment since our 2019 report.

To sustain their businesses, start-ups have pivoted and developed new business strategies. Many set their sights on the business-to-business (B2B) market, selling their services – security and mapping for governments; combating online extremism; monitoring brand safety; automated fact-checking – to customers that include insurance companies, large public entities and governments. Constella Intelligence, for instance, provides security services, analyzing abnormal patterns to detect emerging risks. Graphika uses AI to help create detailed maps of social media networks to discover how online communities form and how information flows across large networks. So far, the market for business-to-consumer solutions remains very small.
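For a sense of what this kind of network mapping involves, here is a minimal sketch using the networkx library to pull communities out of a tiny, invented retweet graph. The edge list is made up for illustration; services like Graphika operate on graphs with millions of accounts and far richer signals.

```python
# Toy community detection on an invented retweet graph (illustration only).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each edge means "account X retweeted account Y".
G = nx.Graph([
    ("a", "b"), ("a", "c"), ("b", "c"),   # one tightly knit cluster
    ("d", "e"), ("d", "f"), ("e", "f"),   # a second cluster
    ("c", "d"),                           # a single bridge between them
])

for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```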

Other firms are starting to work with news outlets and provide automated fact-checking services or authenticate content provenance. Aside from private companies, nonprofits and academic institutions are also joining efforts to rein in mis/disinformation online.

A changing regulatory landscape

Artificial intelligence, and other technologies like content provenance verification and blockchain, can only offer part of the solution against mis/disinformation. Human intervention and moderation will remain critical in the fight against the spread of false information, not least because mis/disinformation is not primarily a technology issue.

The spread of disinformation with malicious intent is a by-product of social currents such as the political polarization that has intensified around the globe, including in the United States and the United Kingdom. In addition to undermining democracy, mis/disinformation also creates fertile ground for phishing schemes, credit-card fraud, fundraising for fake charities, identity theft and other nefarious activities. To those who thrive when public trust is shaken and confusion reigns, disinformation is a useful tool.

Powerful economic incentives currently support the status quo. Relying on market forces alone to solve the problem is not realistic, which is why there is growing recognition that regulations are needed to force tech giants to invest more in addressing misinformation.

On the regulatory front, Europeans are ahead of the United States. The European Union’s Digital Services Act, approved in April 2022, and the UK’s draft Online Safety Bill require platforms to conduct risk assessments and explain to regulators how they plan to mitigate the impact of harmful content. Germany’s NetzDG, introduced in 2017 and revised in 2021, imposes fines on tech companies that systematically fail to remove illegal content.

Encouraging tech companies to take action on the supply side remains the most powerful way to choke off the flow of misinformation. Regulations must strike the right balance between curbing harmful content and protecting freedom of expression, while AI companies must keep in mind that authoritarian regimes could use their technologies to limit free expression rather than to improve content safety.

To what extent the expansion of regulatory frameworks will boost the fortunes of AI start-ups remains to be seen. New rules about online harm may spur demand for their services and create opportunities for further innovation in a field that is in constant evolution.
