A Better Approach to Privacy for Third-Party Social Media Tools
Chand Rajendra-Nicolucci, Ethan Zuckerman / Aug 31, 2023

A robust ecosystem of third-party tools that complement social media platforms will require fresh thinking about privacy, say Chand Rajendra-Nicolucci and Ethan Zuckerman.
Kylie Jenner complains about Instagram’s new algorithmically-driven, video-heavy feed, sharing a post that says, “stop trying to be tiktok i just want to see cute photos of my friends.”
Ted Cruz berates then-Twitter CEO Jack Dorsey for his control over the platform’s algorithms: “Who the hell elected you and put you in charge of what the media are allowed to report and what the American people are allowed to hear?”
Scientists facing harassment on Twitter leave the platform.
What do these three stories about social media have in common?
They are all examples of complaints and problems with the digital public sphere that could be addressed with third-party social media tools.
Central to the Initiative for Digital Public Infrastructure’s vision of a healthier digital public sphere are third-party tools which enable users to control their experiences on social media platforms. These third-party tools fill in the gaps between the experience a platform provides and what a user actually wants to experience. We think third-party tools are critical to moving past many of the hard problems and sticky debates of the digital public sphere. By moving some power from platforms into the hands of users and third-party tools they choose and trust—giving users more control over where and how they participate and what they see—we can satisfy conflicting visions for social media. It’s unrealistic to expect platforms like Facebook or Twitter to satisfy every user and stakeholder. Instead, entrepreneurs of all shapes and sizes can build tools that fill in gaps without stepping on toes.
A good example is Block Party, a company that provided anti-harassment tools for Twitter, giving users more control over their experience both day to day and during moments of crisis. Founded by Tracy Chou, who designed the tool partly in response to her experiences as a target of organized harassment on Twitter, Block Party had tens of thousands of users and millions of dollars in funding, demonstrating that it was solving real and important problems for Twitter users. (Block Party is currently on indefinite hiatus because Twitter raised its API price to an unsustainable level.)
This vision for a robust ecosystem of third-party tools that complement social media platforms is often met with a common refrain: what about privacy? An individual user may choose to use a third-party tool and thus consent to it processing their data—but what about the people who appear in that user's feed or interact with them? They didn't consent to their data being processed by a third-party tool. Isn't that a privacy violation? And what if the third party mishandles data, maliciously or unintentionally, or has weak security?
There’s an elephant in the room when this objection is raised: Cambridge Analytica. In 2013, data scientist Aleksandr Kogan released a third-party app on Facebook called “This is Your Digital Life.” Kogan’s app took advantage of Facebook’s Open Graph platform to collect personal information on 87 million Facebook users and then transferred that data to political strategy firm Cambridge Analytica, which used it to craft marketing campaigns targeting US and UK elections. Facebook was subsequently fined $5 billion by the US Federal Trade Commission for privacy violations. As a result, platforms, activists, and policymakers often reference Cambridge Analytica when citing objections to sharing data with third parties.
We acknowledge that Cambridge Analytica’s actions were a clear abuse of privacy and that there are legitimate concerns whenever data passes from a platform to a third party. The easy response is to say “Trust Facebook/Twitter/TikTok, but no one else,” and demand data never be shared beyond the platform a user has a direct relationship with. Or, if data is shared, to demand user consent at every step, making it practically impossible to pass third-party tools any data that isn’t the consenting user’s own (e.g., content from people they follow). But there are downsides to this approach. Most relevantly, it gives the platform essentially sole responsibility for users’ experiences, cutting off the possibility of alternative services users might choose to act on their behalf. That’s an increasingly unacceptable outcome, as the risks of leaving control of the digital public sphere in the hands of a few corporations (or capricious billionaires) become clearer and clearer.
A contextual approach to privacy
We think there’s a better way of engaging in this conversation, starting with an idea called "contextual privacy." Cornell information science professor Helen Nissenbaum was the first to theorize it, defining privacy as the appropriate flow of information in a given context. Specifically, Nissenbaum ties privacy to contextual norms about what information is revealed and to whom. This definition contrasts with one-dimensional theories of privacy that focus on universal ideas of control, secrecy, and sensitivity. Nissenbaum shows that such one-dimensional theories often result in rigid and myopic analysis and application. Contextual privacy recognizes that who, what, where, when, and why are all important questions to answer when assessing whether privacy has been violated.
For example, is it a privacy violation if Walmart collects data without your consent about your online shopping habits and uses it for marketing and optimizing its product offerings?
If we take a one-dimensional approach to privacy, we might say yes: Walmart didn't ask for your consent to use that information, violating your right to control information about yourself. (More likely, you “consented” as part of a giant terms of service agreement you clicked through.) In contrast, if we take a contextual approach to privacy, we might say no: it's reasonable to expect stores we shop at to collect information about our habits for the purposes of marketing and optimizing their product offerings, whether or not we consent to the practice. Stores collected information about our shopping habits before the internet—it's a longstanding norm in this context. Your local grocer has long tracked purchases to figure out what to restock, where to place items, who to send coupons to, etc.
Now, if Walmart starts selling the information it collects about your shopping habits to data brokers or combines that information with information from pharmacies it operates, contextual privacy suggests that we can reasonably assert a privacy violation: those information flows violate norms and expectations for the distribution of your data in this context.
It’s worth noting that contextual privacy does not only apply to the internet and is not simply argument by analogy. We’re focused on privacy in online spaces and analogies are useful for analyzing information flows in new contexts such as the internet. However, Nissenbaum’s original paper applies contextual privacy to many different kinds of information and contexts. For example, she applies it to HIV status, arguing that a sexual partner may be entitled to information about your HIV status, though the same demand by a friend is probably not warranted. And though your sexual partner may be entitled to that information, if they spread it arbitrarily, you may feel justifiably betrayed.
Let’s apply contextual privacy to third-party tools for social media. Do tools like Block Party violate people’s privacy? We can’t immediately reject data flowing from a platform to a user-chosen third party as inappropriate: your follower sharing your social media data with a third-party tool is not like your accountant sharing your financial information with his drinking buddies. Instead, we must ask whether the third-party tool respects the informational norms of the context—i.e., the platform—data is flowing from. If it does, then contextual privacy suggests we can reasonably assert that your privacy has been respected.
Consider the informational norms of a platform like Twitter. When you create an account, share content, and interact with people on Twitter, you expect the people who follow you and the people you interact with to have access to your relevant information—e.g., your posts and profile data. If your account is public, you also expect that strangers and search engines can access some of your information. Additionally, you expect that Twitter will store your information and use it to recommend content, target advertising, and generally optimize its operations. (Not every user may share these assumptions, and there are edge cases, e.g., when people posting for small audiences involuntarily become public figures, but we think these are broadly reasonable expectations.)
If a third-party tool processes your data, stores it, shares it with people who follow you and people you interact with, and uses it to recommend content and optimize its operations, the third-party tool would match the informational norms of Twitter. Therefore, if you (or someone you follow or interact with) choose to use this tool, and if it behaves as expected, we would assert that the third-party tool does not violate your privacy.
Now, if the third-party tool shares your content with people who are not supposed to be able to view it, for example, people you've blocked, the third-party tool would no longer match the informational norms of Twitter, and thus would violate your privacy. Similarly, if the third-party tool fails to implement standard security measures, putting data at risk of being leaked or stolen, the third-party tool would no longer match the informational norms of Twitter and thus would violate your privacy.
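To make the norm-matching idea concrete, here is a minimal sketch, in Python, of how a third-party tool might check a proposed data flow against a platform’s informational norms before surfacing content. The classes, fields, and rules are our own illustrative assumptions, not any real platform’s API; a production tool would work from the platform’s actual visibility and block data.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: model a platform's informational norms as an
# audience rule, then check whether a third-party tool's proposed
# data flow stays within that audience. All names are illustrative.

@dataclass
class Author:
    handle: str
    is_public: bool = True
    followers: set[str] = field(default_factory=set)
    blocked: set[str] = field(default_factory=set)

@dataclass
class Post:
    author: Author
    text: str

def allowed_audience(post: Post) -> set[str] | None:
    """Who may see this post under the platform's norms.

    Returns None to mean "anyone except blocked accounts" for public
    profiles; otherwise returns the author's follower set.
    """
    if post.author.is_public:
        return None
    return post.author.followers - post.author.blocked

def flow_is_appropriate(post: Post, recipient: str) -> bool:
    """A third-party tool should only surface the post to recipients
    the platform's own norms would allow."""
    if recipient in post.author.blocked:
        return False  # e.g., showing content to blocked users violates norms
    audience = allowed_audience(post)
    return audience is None or recipient in audience

# Example: a tool that shows a post to someone the author has blocked
# fails the check, mirroring the violation described above.
alice = Author("alice", is_public=True, blocked={"troll123"})
post = Post(alice, "cute photos of my friends")
assert flow_is_appropriate(post, "bob")
assert not flow_is_appropriate(post, "troll123")
```

The point of the sketch is simply that “match the platform’s informational norms” translates into checkable rules: if a flow would widen a post’s audience beyond what the platform itself allows, the tool refuses it.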
It’s important to acknowledge that contextual privacy has a status quo bias. It does not consider whether widely accepted practices are justified or desirable. We actually think that many social media platforms have deleterious effects on people’s privacy that should be addressed through comprehensive privacy regulation. However, our point is that under the current, less than ideal scheme, third-party tools do not automatically increase users’ privacy risk over what they face when using platforms. Further, third-party tools offer the opportunity to renegotiate a user’s relationship with a platform, perhaps reducing privacy risks. Users have essentially no control over a platform’s privacy policies other than a decision to use or not use the platform. But a competitive marketplace for third-party tools could allow users to choose a tool that minimizes collection of their data while protecting them from cryptocurrency spam, for example.
Implementing a contextual approach to privacy
What would a contextual approach to privacy for third-party tools look like in practice? How do you ensure a third-party tool matches the informational norms of a platform? On an individual level, there are incentives to choose privacy-respecting third-party tools, since the tool will handle your personal data and likely data from people you care about (e.g., colleagues, friends, family). However, it’s difficult for most people to accurately assess a third-party tool on their own, as they lack the necessary expertise and access. Instead, platforms should implement a verification process for third-party tools as a condition of granting them access to their APIs. The verification process would review a third-party tool to ensure it matches the informational norms of the platform. Most platforms that grant third-party tools access to their API have a similar process already. However, there are no guarantees about the quality and fairness of the process, nor are platforms required to grant API access to qualified third-party tools. For example, a platform could deny API access to a qualified third-party tool, citing spurious privacy concerns, in order to avoid competition or reduce costs. Regulation is likely required to ensure the quality of verification processes and reliable, fair access for qualified third-party tools.
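As a rough illustration of what such a verification gate might look like, here is a hypothetical sketch in which a tool declares its data practices and the platform approves API access only if the declaration stays within its informational norms. Every field name and approval rule here is an assumption for illustration; real verification would involve audits, testing, and ongoing monitoring rather than a self-declaration.

```python
from dataclasses import dataclass

# Hypothetical platform-side verification gate: a third-party tool
# declares its data practices, and the platform only issues API
# credentials if the declaration matches the platform's norms.
# Nothing here reflects any real platform's API.

@dataclass
class ToolDeclaration:
    name: str
    stores_data: bool
    shares_outside_user_audience: bool   # e.g., selling data to brokers
    respects_blocks: bool
    passes_security_audit: bool

def verify(tool: ToolDeclaration) -> bool:
    """Approve API access only if the tool's declared practices
    stay within the platform's informational norms."""
    return (
        not tool.shares_outside_user_audience
        and tool.respects_blocks
        and tool.passes_security_audit
    )

anti_harassment_helper = ToolDeclaration(
    name="anti-harassment helper",
    stores_data=True,                     # storing data matches platform norms
    shares_outside_user_audience=False,
    respects_blocks=True,
    passes_security_audit=True,
)
print(verify(anti_harassment_helper))    # True: grant a scoped API credential
```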
Some skeptics may agree that such an approach could work for a public-by-default platform like Twitter, but question whether it would be appropriate for a platform like Facebook, where content and interactions are typically shared with a more limited set of people, such as your friends and family. We agree that more private platforms present additional difficulties, but we think our approach works for them as well. A third-party tool meant for Facebook would likely have to implement stricter policies to match Facebook’s informational norms, such as ensuring posts aren’t shared with people outside of the appropriate social graph and respecting the more granular privacy controls that Facebook offers. However, these aren’t impossible tasks—they are fairly standard software engineering problems.
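As a sketch of what “respecting more granular privacy controls” might mean in practice, the hypothetical snippet below resolves a post’s audience from a per-post visibility setting so a third-party tool never shows it more widely than the platform would. The setting names and logic are illustrative assumptions, not Facebook’s actual controls.

```python
from enum import Enum, auto

# Hypothetical sketch of enforcing granular, per-post visibility settings
# in a third-party tool for a friends-first platform.

class Visibility(Enum):
    PUBLIC = auto()
    FRIENDS = auto()
    CUSTOM_LIST = auto()

def may_show(viewer: str, author_friends: set[str],
             custom_list: set[str], setting: Visibility) -> bool:
    """Resolve a post's audience the same way the platform would,
    so the tool never widens it."""
    if setting is Visibility.PUBLIC:
        return True
    if setting is Visibility.FRIENDS:
        return viewer in author_friends
    return viewer in custom_list

# A friends-only post must not be surfaced to a stranger.
assert may_show("carol", {"carol", "dave"}, set(), Visibility.FRIENDS)
assert not may_show("stranger", {"carol", "dave"}, set(), Visibility.FRIENDS)
```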
In fact, we think our argument holds up at the extreme—end-to-end encrypted messaging—where one person is sharing content with a single person or a handful of people, under the most stringent privacy and security requirements. As evidence, an approach similar to the one we propose here has emerged as the leading candidate for implementing European interoperability requirements for messaging services. Messaging services would expose APIs for third parties to use, with verification processes to ensure third parties’ compliance with security and privacy standards and to address challenges such as spam. This strongly supports the idea that a similar approach could be made to work in a privacy-preserving way on social media more broadly.
Conclusion
There is growing interest in third-party tools as a win-win solution to the problems of the digital public sphere. Platforms, activists, and policymakers have all pushed back in the name of privacy, arguing that letting information flow to third-party tools represents an unacceptable privacy risk to users. Their arguments rely on rigid and myopic notions of privacy that we show are ill-suited to this context. Helen Nissenbaum’s theory of “contextual privacy” provides a different frame for analyzing the privacy impacts of third-party tools and supports our assertion that third-party tools can be implemented in a privacy-respecting way. We think orienting privacy discussions around the question—Does a third-party tool match the informational norms of the platform data is flowing from?—provides a productive path forward for this promising approach to making social media work better for everyone.