Facebook, NYU, and the “risks” of public interest research
Rebekah Tromble / Aug 10, 2021

Facebook’s move last week to suspend the accounts of NYU researchers shows us how Facebook thinks about the potential risks posed by independent public interest research, and why the only resolution is legislation that requires and enables transparency.
On August 3rd, Facebook announced that it had “disabled the accounts, apps, Pages and platform access” for the researchers running New York University’s Ad Observatory project. Facebook claimed that one of the tools associated with the project, the Ad Observer browser extension, was “scraping” information about Facebook users who had not consented to having their data collected or analyzed. The company suggested that the NYU tool was in violation of Facebook’s privacy policy and had to be shut down per the company’s consent decree with the FTC.
Each of Facebook’s claims about the project has since been disputed, both by the NYU researchers and in extensive reporting by journalists such as Protocol’s Issie Lapowsky and Wired’s Gilad Edelman. Mozilla has also challenged Facebook’s assertion that the Ad Observer presents privacy concerns, noting that Mozilla itself extensively tested the tool before recommending it to its users. And even the FTC has chastised Facebook, calling out the company for misleadingly claiming that its consent decree required it to take action against NYU. More than 250 scientists, technologists, and private citizens have also signed an open letter, which I co-authored, calling on the company to reinstate the researchers’ accounts.
Some do see Facebook’s logic and have explained why the company would be so risk-averse when it comes to academic research projects like this. For instance, Neil Chilson, former Acting Chief Technologist at the FTC and now a research fellow at Stand Together and the Charles Koch Institute, wrote a compelling thread on Twitter arguing that because of potential data leaks, “the Ad Observatory research project increased FB's legal risk even without a FTC settlement, and the FTC settlement heightens that risk dramatically.” He argues that something can always go wrong with third-party tools, even ones designed and tested as carefully as the Ad Observer.
So I've been stewing on the swirl around the @Facebook / NYU's Ad Observatory / @FTC issue for a few days, and it just keeps getting further under my skin. This latest news triggers me. A THREAD. https://t.co/Ehj0TdxefA
— Neil Chilson (@neil_chilson) August 6, 2021
I have been talking with people at Facebook about this issue for years, and I have heard the same argument from company employees many times over. I understand it. But, ultimately, I’m not convinced. This is not a case of simply avoiding any and all conceivable data leaks and privacy risks. Instead, the company clearly also looks at the potential impact of transparency-minded, public interest research on its reputation and finances.
To understand this more clearly, let’s dig into the NYU case a bit more.
To begin, Chilson is absolutely right to note that the mere existence of NYU's Ad Observer does increase the company's risk, especially under the FTC consent decree.
But so does every other third-party use of Facebook data.
That includes everyone who has access to Facebook’s APIs. (Note that while businesses and developers can get approved for API access, academics cannot.) And it includes the countless actors continuously scraping the platform, most of them for private, commercial gain.
So why enforce in this case? Why draw the line here? Given the privacy protections NYU had in place and the specific use case for this tool, this seems like one of the least risky applications Facebook might assess when it comes to third-party data leaks.
What's more, there was a straightforward way for Facebook to gain at least some clarity about the risk at hand: ask the FTC.
No, the FTC was never, and is never, going to say, "Approve away, Facebook! We commit to not coming after you should something go wrong with the Ad Observer." That's not the way these things work. But Facebook knew that its public claims implying the consent decree dictated this decision rested on shaky ground, and it certainly could have asked for an evaluation of use cases like the NYU Ad Observer.
To be fair, the FTC may very well have ignored such a request. But that actually would have made Facebook's claims and its decision more defensible, allowing the company to argue that without FTC clarification, it simply could not take the risk.
Facebook’s executives knew all of this. The company's smart lawyers and its policy and communications teams certainly knew that they could approach the FTC. But they chose not to seek such clarity, because the greater risk was that the FTC would say exactly what it since has: that the consent decree not only does not prohibit NYU's work, but that it "does not bar Facebook from creating exceptions for good-faith research in the public interest..."
This makes it absolutely clear that Facebook has leeway to decide for itself where to draw the line. Yet, ironically, that leeway is a significant problem for the platform, because now, even if Facebook wanted to do the right thing and create an exception for good-faith, public-interest research, life gets harder for the company in at least two ways:
First, research like this will always mean more headaches for communications and other relevant teams (in NYU’s case, the teams working on ads). Independent scrutiny is a pain, and those who have to deal with it are usually going to fight it. (It's the wrong choice, but you can see why those involved would want to avoid having the FTC tell them that there is a choice in the first place.)
Second, someone has to decide what the updated policy should be: where and how to draw this new line between “good faith” and “bad faith” uses of Facebook data. It is much more convenient to be able to claim that your hands are tied. Now the company has to come out and articulate more clearly how much risk is too much risk and why one actor might be more trusted than another.
But that brings me back to the argument above. Yes, the Ad Observer and similar good-faith research projects present privacy risks. And, just as Chilson explains, those risks are in fact heightened by the mere existence of the FTC consent decree. But that is true for all third-party data uses. And it has always been true. Just as in other data access cases, Facebook can take measures to mitigate the risk inherent in independent research (for example, by requiring regular independent audits and "pen tests" of research tools).
There are lots of options to make this workable. There always have been. But because independent research does not help the bottom line, and in fact might ultimately hurt it, it was, and remains, all too easy for Facebook to say, "We'd rather not."
Ultimately, the NYU case points to the need for greater regulatory clarity regarding third-party data access. Because as long as Facebook, and other big tech companies for that matter, get to decide where to draw these lines, public interest research, transparency, and accountability are likely to lose.