Facebook is an ad tech company. That’s how we should regulate it.
Nathalie Maréchal / Aug 5, 2021

Another day, another Facebook accountability scandal.
On Tuesday, Facebook shut down the accounts of NYU researchers Laura Edelson and Damon McCoy on the grounds that their Ad Observer tool violated Facebook users’ privacy. At first blush, this seems absurd: the users in question voluntarily installed the Ad Observer plug-in in their browsers for the express purpose of sharing with the researchers the targeting information behind the political ads they see on Facebook.
Dig a little deeper, and it turns out that the “users” whose privacy Facebook is so concerned about are actually advertisers—that is to say, paying customers. It is reasonable to assume the company might not be enthusiastic about scrutiny of its customers’ ad-targeting practices, or about any research that might bring to light further discrepancies between Facebook’s stated ad-targeting rules and what the company allows in practice. And so Facebook did what it so often does: it attempted to disguise its self-serving behavior as pious concern for user privacy. In this case, it's blindingly obvious that Facebook is hiding behind its 2011 consent decree with the Federal Trade Commission to shut down research on the company's Achilles heel: the utter anarchy of its targeted ads business.
Those of us who are concerned about democracy, civil and human rights, and corporate accountability in the tech sector should take this as evidence that the NYU Cybersecurity for Democracy group is on the right track. We need more scrutiny of the online advertising sector. In fact, we need to reframe the "social media governance" conversation as one about regulating ad tech. Facebook, Twitter, YouTube, TikTok and the rest exist for one purpose: to generate ad revenue. Everything else is a means for producing ad inventory.
Facebook launched in 2004 and started selling ads that same year. Since then, the company has gradually evolved complex systems to try to govern (and, just as importantly, be seen to govern) its user generated content. To be clear, this is necessary, important, and difficult. (The study of content moderation and content curation is also necessary, important, and difficult.) Unfortunately, Facebook and its competitors have not demonstrated anywhere near the same level of effort in their governance of advertising, even though this is how these companies make money.
How do we know this? Since 2015, Ranking Digital Rights has regularly evaluated major digital platforms’ transparency reports (among other disclosures). We expect the platforms to publish data about the number of pieces of content and of accounts that they restrict either in response to government demands or in order to enforce their own rules. As of this year, we also expect companies to include data about the enforcement of ad rules (related to both content and targeting) as well as data about the number and outcomes of appeals. The numbers themselves can be hard to interpret without further context, but their existence signals that the company has some kind of system in place for tracking government demands, user flags, content restriction actions, and appeals. Facebook does not include data about ad moderation in its transparency report (in fact, to the best of my knowledge, TikTok is the only major platform that does).
Reporting from Consumer Reports, The Markup, and others suggests that Facebook persistently allows ads that blatantly violate its rules. Conversely, I hear from ad buyers that the company’s automated systems are so rudimentary that they rejected one advertiser’s ad for violating the Fair Housing Act, when in fact the ad called for expanding housing affordability programs (the ad was eventually allowed after an appeal). Whether your concern is false negatives or false positives, the ad rules enforcement system seems broken.
This is particularly galling because it should be much easier to govern ads than user content. First, there are far fewer ads. Second, advertisers don’t have an expectation of immediate publication, allowing more time for review. Third, paid speech enjoys less protection under free expression standards, as reflected in the fact that every country in the world regulates offline advertising in some way. And fourth, companies can set their own rules to ensure that they are actually enforceable. Last week, Instagram announced that it would limit how ads could be targeted at users under age 18, presumably because it would cost more to actually enforce its stricter rules on advertising to minors than it would earn from allowing such ads.
Moreover, ads are explicitly excluded from the scope of Facebook’s (deceptively named) Oversight Board: this signals that the company thinks ads are somehow irrelevant to the Great Content Moderation Debate. Or more to the point, that it wants us to think that.
My colleagues and I have consistently argued that the focus on the moderation of user content has come at the expense of appropriate scrutiny of these companies’ actual business practices. While policymakers, civil society, and the general public are caught up in the drama of the latest suspension, takedown, or deplatforming, Facebook and its competitors are raking in the ad dollars with scarcely any oversight or accountability. They will continue to do so as long as ad revenues exceed compliance costs.
To be clear, I am under no illusion that Facebook will submit to external scrutiny of its ad business—which is its entire business—voluntarily. Nor will its competitors. This is where lawmakers and regulators should step in, and I am hopeful that they will. Under Chair Lina Khan’s leadership, the Federal Trade Commission seems poised to finally regulate the ad tech industry. Congresswoman Lori Trahan’s (D-MA) Social Media DATA Transparency bill would force digital platforms to enable precisely the type of research that Facebook stopped the NYU team from pursuing. Senators Klobuchar, Warner, and Wyden have all expressed outrage at Facebook’s actions this week, and researcher Laura Edelson’s jaw-dropping claim that her account was disabled “hours after she had informed the platform that she and McCoy were studying the spread of disinformation about January 6 on the social media platform” is sure to draw scrutiny from the House Select Committee to investigate the attack on the U.S. Capitol.
These signals from policymakers give me hope, but hope means little without actual change. Civil society must keep up the pressure on Washington to regulate the ad tech industry, including through comprehensive privacy reform. Until such interventions take effect, tomorrow will be no different than today.