
'Consent or Pay' and the Future of Privacy

Richard Mackenzie-Gray Scott / Jul 18, 2024

A few months back, the European Data Protection Board (EDPB) adopted an opinion that concerns the future of individual privacy, specifically whether it continues to exist as a right in light of practices symptomatic of it becoming a privilege. In response to legal challenges concerning the processing of user data for behavioral advertising, Meta adopted a subscription model that demands users either pay for ad-free access to Facebook and Instagram or continue to be exposed to data processing for behavioral advertising. In scrutinizing this approach, the EDPB provides insight into how online platforms can respect user consent. Despite the potential leeway it grants providers of large platforms to continue skirting data protection laws, the opinion has broader implications for privacy. These relate to two prominent components of the interactions between individuals and the providers of online platforms: the responsibilization of users when navigating online platforms and the real-time exercise of privacy online. Both bear on whether privacy remains a right or erodes into a privilege.

Following a request under Article 64(2) of the General Data Protection Regulation (GDPR), the EDPB adopted an opinion on the behavioral advertising that occurs on some online platforms and the options users are given to opt out of it. The opinion provides numerous insights on user consent (see paras. 66-178), highlighting the problematic nature of online platforms adopting models that deny individuals access unless they either pay a fee or agree to their personal data being processed.

This stance accounts for the power asymmetries between individual users and the providers of online platforms, particularly with respect to how this relationship influences individual choice. As the EDPB emphasizes, data subjects should “have a real freedom of choice when asked to consent to processing of their personal data,” meaning online platform providers should not limit this autonomy “by making it harder to refuse rather than to consent” (para. 68). In order to freely do something or refrain from doing it, a genuine choice must exist for an individual to exercise their discretion as they see fit. The EDPB opinion aligns with this understanding: “consent can only be valid if the data subject is able to exercise a real choice, and there is no risk of deception, intimidation, coercion or significant negative consequences if the data subject does not consent” (para. 69). If online platforms provide users with features or models that in effect steer them towards having their personal data processed, then consent is diminished below the point of adequacy because no viable alternative choices exist.


Freedom of choice depends, in part, on different options existing for people to choose or discard. The EDPB indicates that such freedom is lost if online platform users are compelled or pressured into making one decision over others (paras. 70-73). The Court of Justice of the European Union in the Bundeskartellamt judgment made a related point that consent can be invalidated should users suffer a loss for not consenting or for withdrawing that consent (see paras. 140-154). In building on this judgment, the EDPB was critical of the fee-paying subscription model as a means of providing users an “equivalent alternative” to being subject to data processing for behavioral advertising, particularly because "in order to ensure genuine choice and to avoid presenting users with a binary choice between paying a fee and consenting to processing for behavioural advertising purposes, controllers should consider also offering a further alternative free of charge" (para. 74). This position is significant, not least because of its potential to lessen the burden placed on individuals to protect their privacy when interacting with online platforms.

Moving away from responsibilizing users

A characteristic of legal frameworks applicable to privacy is how much onus they currently place on individuals to know about, understand, and exercise their rights. Although people value privacy, many individuals may not be able to meaningfully safeguard it when they use online products or services, in part due to regulatory shortcomings that place individuals in a continuous state of having to make uninformed (and apparently ineffectual) choices about their data. Privacy laws can end up not protecting individuals while they navigate online platforms, an outcome counterproductive to the stated aims of regulation. Whether it is being placed under a “duty to read the unreadable” (i.e., terms and conditions or privacy policies), opting in or out of website settings, or changing cookie preferences, individuals face the challenge of “privacy self-management” when they traverse many online spaces.

These circumstances can be understood in terms of responsibilization, which involves transferring responsibility onto individuals for solving a problem that actors with more power are in a stronger position to solve. As Elena Abrusci and I note in some of our research, this approach to data governance requires “arguably absurd levels of awareness from data subjects” because “[i]n order to exercise a right, people first need to be aware of its existence […] Then they need to understand whatever right they think they have if they are going to be in a position to make any use of it.” These factors are in addition to numerous others, ranging from users knowing that they are subject to potentially unlawful data practices, to having the time, energy, and finances to pursue related inquiries, marshal evidence, and then embark on legal challenges regarding their data.

This divergence between the principles and practice of data governance has arguably contributed to undermining individual consent. Many users of online platforms do not know what they are consenting to during interactions with providers. The power asymmetries and bifurcation of knowledge involved in these relationships, combined with the actual or perceived need to use particular platforms, mean individuals are not in a strong position to withhold consent. The disproportionate burden placed on individuals has helped reduce consent to “ticking a box or clicking a link.” Reinvigorating consent depends on corporations, governments, and intergovernmental organizations taking responsibility for making it a meaningful feature of the regulatory framework for online platforms, not shirking this responsibility and assuming individuals can make use of the concept without adequate assistance.

A promising contribution of the EDPB opinion is that it calls on providers of online platforms to supply such assistance, in this case by offering a non-fee-paying alternative for users to access online platforms without being subject to the data processing that underlies behavioral advertising. This approach, as the Board notes, enhances users’ freedom of choice (para. 76). It places users in a better position to exercise their right to privacy because no price tag accompanies the decision, which may serve to “remove, reduce or mitigate the detriment that may arise for non-consenting users from either having to pay a fee to access the service or not being able to access it” (para. 78).

Whether users’ ability to exercise their right to privacy online is bolstered now depends on whether and how the alternative approach proposed by the EDPB is implemented. Given the imbalance of power that allows online platforms to influence user decision-making, the ways in which options are presented to individuals online are decisive in determining what choice they ultimately make. Reading the EDPB opinion as a whole, it is evident that this matter was approached carefully, as the importance of digital design is underscored numerous times (paras. 36, 60-63, 79-81, 111, 154-155, and 165-169). Users cannot choose better options for protecting privacy if they are unaware of them or have been led to options that undermine privacy.

Deceptive digital design can create user interfaces that keep individuals in the dark about what data practices are occurring at their expense, including by default, and about whether users themselves may be able to change them. Such practices hinder the ability of individuals to make informed choices. The EDPB engages with this matter, stating, “The clarity of the different options to choose from should also be reflected in the design of the interface, as deceptive or manipulative design should be avoided in line with the principle of fairness” (para. 81). It is up to providers to change user interface design so that individuals can easily navigate online platforms without being manipulated into making selections that undermine their privacy. To start with, this might look like platform settings that secure user privacy by default, with less box-ticking accompanied by complicated explanatory text, fewer color and placement patterns that make one link more likely to be selected than another, and fewer interstitial webpages linking to policies that even the keenest lawyers may balk at reading. Progress on this front concerns ensuring that privacy remains a right and does not erode into a privilege.

Privacy in the future: right or privilege?

A further noteworthy aspect of the EDPB opinion is that it does not accept the consent or pay model outright. This result bears on whether privacy solidifies into a privilege when people interact with online platforms. A conflation that can occur when referring to specific human rights is to assume that the formal existence of a right in a legal instrument means individuals can exercise and enjoy it. Yet whether, to what extent, and in what ways the right at issue is enforced and implemented depends on the individual, especially the social relations between them and the actors owing the correlative obligations (see here for an example).

The day-to-day functioning of human rights intimately connects to personal privilege. Mary Anne Franks shows how a constitution may well stipulate that citizens have a right to free speech, but in practice, only privileged citizens can exercise it effectively without fear, sometimes to the extent that members of less privileged social groups end up being silenced. Another example concerns the human right to freedom of movement, which depends on individuals having the means to move, whether money, access to transport, or the “correct” passport. The point is that choice connects to privilege, which in turn determines whether and to what extent an individual has the actual option to make one decision or another, including those concerning the exercise in practice of their rights on paper.

An undertone of consent or pay models is that privacy should only extend to those who are able and willing to pay for it. This approach to governing online platforms punishes users who lack sufficient funds to protect their privacy while forcing people who may have the funds to make (potentially difficult) choices about expenditure priorities. Many people need to access certain platforms because their livelihood depends on them. Attaching financial or privacy costs to such needs has the potential to exacerbate socioeconomic inequalities.

Consider, for example, a self-employed woodworker who uses one online platform where they have established a solid customer base, and who is experiencing financial struggles. The provider of that platform, upon which the woodworker has become dependent, then starts demanding either a fee for access to this network of customers – partly built by the user’s efforts – or continued exposure to data processing that aims to ensure the next advert they see is from a predatory loans company offering “the best rates”, right at a time when they are financially vulnerable. Situations of this sort risk undermining privacy at scale, to the extent that the right becomes a privilege in practice, potentially helping “maintain occupancy on the lower rungs of societies’ socioeconomic ladders.”

A related matter needs to be mentioned here, one that goes to the very heart of debates about protecting privacy online. As asserted above, some people need to use certain platforms. These individuals are not really choosing to be used by the providers of those platforms that undermine their privacy, as other feasible options do not exist for them. Then there are people who have no such needs. Instead, they use certain platforms for different reasons: to continue chasing the addiction of attention, to feel part of some particular “in-group,” or to revel in the remoteness of online communication.

A bigger question, therefore, is whether, even when accounting for the switching costs involved, which vary across users, the consent or pay model actually presents as a binary for every user, as the EDPB suggests, considering that some users can quit certain platforms or switch to others. While it looks like progress to have another option embedded in online platforms, one that grants users access without a fee and does not subject them to data processing for behavioral advertising, the fact that it took a series of legal challenges over the years to get to this point indicates where the priorities of some providers lie.

Users need not continue putting these priorities over their own. They can harness what power they do have, namely the power to reject inferior online products and services. Consent or pay – or leave. The latter was always an option for some. Making this choice practicable is thus crucial, especially for underprivileged users who may not have the luxury of leaving. A key question is how to incentivize people to leave online platforms that undermine privacy, perhaps to join alternatives that secure privacy from surveillance capitalism, or to use that regained time offline, whether for work, rest, or sharing space among the birds, bees, and trees.

This article is based on research that received funding from the British Academy (grant no. BAR00550-BA00.01). The author thanks Daniele Nunes and Katie Pentney for their helpful feedback on the original draft.
