Understanding the Modern Web and the Privacy Riddle

Aklovya Panwar / Nov 15, 2024

In recent years, countries have begun discussing, and in some cases implementing, online user verification requirements on social media to combat the anonymity perpetrators use to hide their identities. Such verification processes can involve collecting and storing sensitive personal information, which may be exploited for purposes beyond its original intent. Even where such verification is required by law, it must be balanced against the right to freedom of speech and, more broadly, the right to privacy. This shift toward greater scrutiny and control of user identity could set a dangerous precedent, enabling further erosion of privacy by normalizing extensive data collection and monitoring practices. And there is no assurance that government agencies or platforms will not misuse verification data for profiling or other purposes that betray users’ trust.

Standardizing online user verification on social media may exacerbate this risk. These developments come at a time when the economic and technical realities of the modern internet already pose significant privacy risks. These dynamics have created an environment where users are compelled to disclose personal information just to use the internet.

The web has undergone significant transformations since its inception. Tim Berners-Lee, the father of the World Wide Web (WWW), introduced the concept in 1989. He identified three developmental stages: Web 1.0 (the Web of Documents), consisting of static pages; Web 2.0 (the Web of People), featuring dynamic data and user-generated content; and Web 3.0 (the Semantic Web or Web of Data), which is still in development and aims to connect data more intelligently.

More recently, however, a less-discussed stage has emerged: Web 2.5, or the Symbiotic Web (also referred to as the modern web). This concept, introduced by Dr. Paul Bernal, highlights the vulnerabilities of the current web, where users are constantly monitored by both private companies and government agencies. The modern web has turned user data into a valuable commodity, fueling the rise of “big data.” This digitized environment, filled with texts, emails, digital photos, videos, and other data, has become a stage where users perform and share content to gain attention and validation. Sharing has evolved from a neutral exchange of information into a performative act, with each piece of content tagged to an identity marker.

In this iteration of the web, there is an unavoidable interdependence between individuals and commercial enterprises. Users receive free services like email, social networking, and search engines; in exchange, they provide personal data. Companies monetize this data through targeted advertising, profile building, and direct data sales. While this exchange has led to positive developments, it carries significant risks. One is the introduction of new forms of surveillance, enabling omnipresent data mining and automated profiling by private or government agencies. To put this in context, the revelations from whistleblower Edward Snowden exposed that the US National Security Agency had virtually unrestricted access to vast amounts of data through its PRISM program, launched in 2007. The agency collected data from major tech companies like Google, Facebook, Microsoft, and Apple. According to the Washington Post, government employees with PRISM clearance could directly access data from these companies without needing further interaction with their staff.

Years after the Snowden revelations, governments around the world acquire more data on people than ever. The current environment amounts to a virtual panopticon, after the all-seeing prison envisaged by Jeremy Bentham. Every user takes on the role of both guard and prisoner, depending on whether they are sharing or receiving content. This in effect puts them in a self-imposed equilibrium, in which users, knowingly or unknowingly, regulate their behavior to fit this new digital order.

The main question is why users willingly surrender their data without questioning how it is used. This could be attributed to the effect of the virtual panopticon, in which users believe they are cooperating with agencies (government or private) that claim to respect their privacy in exchange for services. The Universal ID project (the Aadhaar project) in India, for instance, began as a means of providing identity to the poor in order to deliver social services, but it has gradually expanded in scope, leading to significant function creep. Originally intended for de-duplication and preventing ‘leakages,’ it later became essential for enabling private businesses, fostering a cashless economy, and tracking digital footprints. This creep has made the UID a mandatory requirement for accessing state assistance and paying taxes, and in many cases for engaging in commerce, raising concerns over privacy violations.

To understand this, consider the theory of Scott R. Peppet in “Unraveling Privacy.” He argues that the world is facing a new threat to privacy due to the shift from a “Sorting Economy” to a “Signaling Economy.” In a sorting economy, an uninformed party filters out counterparties based on observable characteristics. In contrast, a signaling economy sees economic actors using signals to differentiate themselves for economic benefits. Peppet describes the phenomenon of “unraveling,” where users, driven by self-interest, disclose personal information for economic gains. Those who refuse to disclose their information may be stigmatized. This creates a situation where everyone is compelled to share their data, leading to a self-imposed equilibrium in this virtual panopticon.

Peppet introduced the metaphor of the “personal prospectus” to describe the signaling economy. This prospectus contains an individual’s verified personal information, compiled from sources like bank accounts, educational records, tax history, and health records. It highlights the potential of the signaling economy and the shift in the meaning of privacy, where personal data is treated as an asset. Consider, for example, the Aadhaar project, which has collected the personal details and biometric information of more than 1.3 billion people. Although the Supreme Court of India ruled that Aadhaar is not mandatory, the process of unraveling was already well underway, with many citizens having linked their Aadhaar identification to access essential services.

This phenomenon can occur under any government that requires user verification online. In such a scenario, the attached risk is the potential loss of individual autonomy to government agencies: once the unraveling starts, users will be compelled to accept verification. When governments have access to verified user identities, they can more easily monitor and track individuals’ online activities, potentially infringing on personal freedoms and privacy rights. This can leave users feeling less free to express opinions or engage in discussions critical of the government, out of fear of surveillance or retribution.

For example, in countries with stringent internet controls, such as China, the government monitors and controls online behavior through real-name registration systems. Chinese social media platforms compel users to link their accounts to their national ID, making it easier for the government to track users and restrict content deemed unsuitable or threatening to national security. This approach has been criticized for suppressing free speech and limiting citizens’ ability to engage in open and anonymous discussion, thereby undermining fundamental human rights.

In the modern web, users occupy multiple roles (service provider, user, and visitor) while adopting multiple personas. This shift demands greater information disclosure, as users benefit from the web’s capabilities and treat their own data as currency. The unraveling of privacy has become the new norm; withholding information is no longer an option because secrecy itself is stigmatized. Over the past few years, there has been a significant shift in how consumers and websites view privacy. Users have developed a heightened sensitivity to the use of their personal information and now recognize a basic right to internet privacy. Yet this awareness may be in vain: the signaling economy stigmatizes silence, compelling users to disclose their data and thereby reinforcing the virtual panopticon. As signaling becomes more widespread, disclosure could become the norm across the economy, making the act of keeping personal information private seem suspicious. This unraveling threat to privacy increases the fluidity of data on the internet, leaving vast amounts of it in unknown territories.

In such a state, formalizing the disclosure of personal data for online user verification on social media poses a real threat to anonymity. There have been earlier attempts to mandate the real identity of users, for instance, Facebook’s “real name” policy, which required users to go by their real identities on the platform. The policy faced backlash, and Facebook relaxed it in the wake of criticism that it took away individuals’ right to use pseudonyms and maintain multiple digital identities.

The modern web, with its inherent vulnerabilities, already compels users to share much of their information online. But allowing online user verification without any uniform policy can be fatal to the privacy and anonymity of users. In such a scenario, what could a solution look like? One that comes to mind is the model of “Contextual Integrity” proposed by philosopher Helen Nissenbaum. According to Nissenbaum, privacy is maintained when the flow of personal information adheres to established, context-relative informational norms. These norms are governed by three key parameters: actors (the subject, sender, and recipient of the information), attributes (the types of information), and transmission principles (the constraints under which information flows).

On social media, the key actors include the user (subject), the social media platform (sender), and potentially other users or third parties (recipients). The context of social media involves social interaction, public sharing, and personal expression, and verification processes need to align with the norms expected in that context. The attributes at stake are the personal data required for verification, such as identity documents, phone numbers, or biometric data. Transmission principles dictate how the verification data is collected, used, and shared. Nissenbaum’s theory offers a more substantive account of privacy by focusing on norms of appropriateness and norms of distribution within specific contexts. Norms of appropriateness determine what types of personal information are acceptable to share in a given context: on social media, demanding extensive personal information for basic verification might be seen as excessive and intrusive. Norms of distribution, in turn, govern how that information can be shared or transferred to others.

Thus, verification data should not be used for purposes beyond verification or shared with third parties without user consent. Users should be informed about how their verification data will be used, stored, and protected, maintaining transparency about data practices. And where possible, new verification requirements, including laws, should be avoided.

Authors

Aklovya Panwar
Aklovya Panwar is a lawyer by training. He serves as a legal researcher at the High Court of Uttarakhand, assisting the Hon'ble Justice with analytical briefs and legal queries. Specializing in TMT Law, Intellectual Property Rights, and Constitutional Law, he focuses on the intersection of law, tech...