
FTC Issues Report On Using AI to Address Online Harms

Justin Hendrix / Jun 17, 2022

Combatting Online Harms Through Innovation, FTC, June 16, 2022

The Federal Trade Commission (FTC) today issued a report outlining a series of concerns and warnings about the use of artificial intelligence (AI) systems to address online harms, documenting why new technologies must be applied cautiously so as not to exacerbate problems that are themselves often a result of automated systems and the ways they interact with society.

"Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, in a statement. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands.”

The report was produced to satisfy language in the 2021 Appropriations Act, in which "Congress directed the Federal Trade Commission to study and report on whether and how artificial intelligence (AI) 'may be used to identify, remove, or take any other appropriate action necessary to address' a wide variety of specified 'online harms.'" In that legislation, lawmakers referred specifically to "content that is deceptive, fraudulent, manipulated, or illegal, and to particular examples such as scams, deepfakes, fake reviews, opioid sales, child sexual exploitation, revenge pornography, harassment, hate crimes, and the glorification or incitement of violence," as well as "misleading or exploitative interfaces, terrorist and violent extremist abuse of digital platforms, election-related disinformation, and counterfeit product sales."

Indeed, in the social media context, the central challenge of the Congressional question posed here should not be lost: the use of AI to address online harm is merely an attempt to mitigate problems that platform technology — itself reliant on AI — amplifies by design and for profit in accord with marketing incentives and commercial surveillance.

Combatting Online Harms Through Innovation, FTC, Page 2

The report addresses several areas where AI systems might be used to address online harms, including:

  • Deceptive and fraudulent content intended to scam or otherwise harm individuals
  • Manipulated content intended to mislead individuals, including deepfake videos and fake individual reviews
  • Website or mobile application interfaces designed to intentionally mislead or exploit individuals
  • Illegal content online, including the illegal sale of opioids, child sexual exploitation and abuse, revenge pornography, harassment, cyberstalking, hate crimes, the glorification of violence or gore, and incitement of violence
  • Terrorist and violent extremists’ abuse of digital platforms, including the use of such platforms to promote themselves, share propaganda, and glorify real-world acts of violence
  • Disinformation campaigns coordinated by inauthentic accounts or individuals to influence United States elections
  • Sale of counterfeit products

The report includes a range of recommendations, and argues that despite the "intense focus on the role and responsibility of social media platforms, it is often lost that other private actors — as well as government agencies — could use AI to address these harms," including "search engines, gaming platforms, messaging apps, marketplaces and app stores, but also those at other layers of the tech stack such as internet service providers, content distribution networks, domain registrars, cloud providers, and web browsers."

The first recommendation is to recognize that AI detection tools are "blunt instruments" with "built-in imprecision," and that there is a danger in over-reliance on such tools. The report also weighs the political ramifications of such systems, noting tradeoffs: "blocking more content that might incite extremist violence (e.g., via detection of certain terms or imagery) can result in also blocking members of victimized communities from discussing how to address such violence. This fact explains in part why each specified harm needs individual consideration; the trade-offs we may be willing to accept may differ for each one." It calls for attention to imprecision, to context and meaning, and to bias and discrimination.

The second recommendation revolves around 'humans in the loop,' or human oversight of AI systems. The FTC acknowledges that "[s]imply placing moderators, trust and safety professionals, and other people in AI oversight roles is insufficient," and that human oversight "also shouldn’t serve as a way to legitimize such systems or for their operators to avoid accountability."

The third recommendation addresses transparency and accountability, defined as "measures that provide more and meaningful information about these systems and that, ideally, enable accountability, which involves measures that make companies more responsible for outcomes and impact." A key plank of this recommendation is researcher access to platform data. The report also puts forward assessments and audits, as well as protections for auditors and employees.

The fourth recommendation revolves around responsible data science practices, urging that "[d]evelopers who fund, oversee, or direct scientific research in this area should appreciate that their work does not happen in a vacuum and address the fact that it could cause harm," and that researchers must take care to address unconscious bias.

The fifth recommendation addresses mitigation tools employed at the platform level, including suggestions from individuals such as Rutgers University Professor Ellen P. Goodman to use "so-called circuit breakers or virality disruptors," efforts to uncover coordinated networks and actors that produce online harms, and the amplification of trustworthy content to counter disinformation.
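To make the "circuit breaker" idea concrete, here is a minimal illustrative sketch, not drawn from the FTC report, of how a platform might pause algorithmic amplification of a post whose share rate spikes until it can be reviewed; the class name, threshold, and window are hypothetical choices for illustration.

```python
from dataclasses import dataclass, field
from collections import deque
import time


@dataclass
class ViralityCircuitBreaker:
    """Illustrative sketch: pause amplification of a post whose share rate spikes."""
    shares_per_window_limit: int = 500   # hypothetical threshold
    window_seconds: int = 60             # rolling window for counting shares
    _share_times: deque = field(default_factory=deque)
    tripped: bool = False                # True once amplification is paused

    def record_share(self, now=None):
        """Log a share event and trip the breaker if the rolling rate is too high."""
        now = time.time() if now is None else now
        self._share_times.append(now)
        # Drop share events that fall outside the rolling window.
        while self._share_times and now - self._share_times[0] > self.window_seconds:
            self._share_times.popleft()
        if len(self._share_times) > self.shares_per_window_limit:
            self.tripped = True          # hold for human review before re-amplifying

    def may_amplify(self) -> bool:
        """Recommendation systems would check this before boosting the post."""
        return not self.tripped
```

In this sketch, once the rolling share rate exceeds the threshold the post is simply no longer boosted until a reviewer resets the breaker; a real system would of course also weigh context, account history, and appeals, which is part of why the report pairs such tools with human oversight.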

Additional recommendations address user tools, the availability and scalability of certain interventions beyond the major tech firms, and opportunities to address content authenticity and governance.

Finally, the FTC report addresses potential legislation, looking to "the development of legal frameworks that would help ensure that such use of AI does not itself cause harm."

“Platforms dream of electric shepherds,” says Tarleton Gillespie, expressing skepticism that automation can replace humans in addressing harmful online content. Legislators and regulators with similar dreams should remain skeptical as well.

The FTC recommends that Congress "generally steer clear of laws that require, assume the use of, or pressure companies to deploy AI tools to detect harmful content," or that may not survive First Amendment scrutiny. But the FTC says there should be "three critical considerations" for any law that does address the use of AI to address online harm: "definitions, coverage, and offline effects." The FTC favors prioritizing legislation focused on "the transparency and accountability of platforms and others that build and use automated systems to address online harms," and notes a recent proposal from Stanford professor Nate Persily and researcher Deborah Raji that eventually informed the proposed Platform Accountability and Transparency Act, put forward last year in the Senate.

In general, the report represents a consensus of sorts on these matters from civil society and academic researchers, who put forward many of the ideas and proposals collected in the document and have backed them with empirical research. (Notably, Tech Policy Press is cited multiple times.) Whether Congress will act on these recommendations is an open question.
