How a Competition Commission of India Order on Meta Can Drive Big Tech Toward Privacy-Centric Solutions
Nayan Chandra Mishra / Jan 24, 2025

On November 18, 2024, the Competition Commission of India (CCI) passed an order imposing a penalty of Rs 213 crores (~$25 million) on Meta for abusing its dominant position through WhatsApp’s 2021 Privacy Policy, in violation of Section 4(2)(a)(i) of India’s Competition Act. The 2021 policy expanded the scope of data collection to allow Meta to share and process non-personal data (aggregated and anonymized) with its group companies (Facebook, Instagram, Messenger, etc.) on a ‘take-it-or-leave-it’ basis. Because network effects left users without an effective option to shift to a different platform, WhatsApp undermined their autonomy by forcing them to comply with the policy. This affected not only competition, by creating high entry barriers for rivals, but also users’ rights over their data.
The penalty marks the first time the CCI has connected data protection and competition in an order, taking a cue from the integrative approach adopted in the European Union, where data protection law (the GDPR) and competition law have been read together to establish violations. The case also exposes a conundrum for regulators over the fine line dividing personal and non-personal data, given that India does not yet have a fully functioning data protection law covering either type. That line blurs further when a platform forces users to accept a privacy policy written in broad and vague terms, leading to mixed data sets in which personal and non-personal data are intertwined. The Commission, as a result, took a broader view, hinting that the expanded scope of data collection could enable the use of personal data, not only non-personal (competition-sensitive) data (see Paragraphs 182.1-182.11).
This article will look at the data protection aspect of the order and analyze the scope of strengthening due diligence by companies through a techno-legal solution to achieve a balance between user rights and business considerations.
How did the Commission explain the data protection issues?
The Commission observed the pivotal role of data in allowing ‘data-driven enterprises’ like Meta to improve their services and offer seemingly zero-priced products in exchange for users’ personal and non-personal data. The competitive strength and market power of such enterprises are now determined by the amount, diversity, and quality of data they control in their extensive repositories, which allows them to deliver more personalized, targeted advertisements and to partner with third-party advertisers. This privilege is not available to competitors operating in isolated product silos. At the same time, a lack of transparency and low standards for data collection, sharing, and processing undermine users’ ability to give informed consent, create high entry barriers for competitors, and leave users with fewer alternatives. Similar conclusions were reached by the Kris Gopalakrishnan Committee, established by the government in 2020 to prepare India’s Non-Personal Data Governance Framework.
The CCI therefore chose to strike at the root of the issue by giving users more control over their data: it ordered Meta not to share WhatsApp user data with other Meta companies for advertising purposes for five years. For sharing for other purposes, the CCI ordered that:
- WhatsApp’s Privacy Policy shall include a detailed explanation of data sharing and its purpose, linking each data type with its corresponding purpose and Meta’s subsidiaries.
- Sharing of data with subsidiaries for providing services other than WhatsApp services shall not be made a condition precedent for users to access the platform. Users should have the choice to opt out through an in-app notification and to review and modify their choice through a prominent tab in WhatsApp settings.
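The opt-out model the order prescribes can be pictured in code. The sketch below is purely illustrative (the class names, data types, purposes, and recipients are invented for this example, not taken from the order): each data type is linked to an explicit purpose and recipient, and sharing for non-service purposes is gated on the user's opt-out choice rather than being a condition for using the platform.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SharingPurpose:
    data_type: str               # e.g. "usage_metadata" (hypothetical)
    purpose: str                 # e.g. "ads_personalization" (hypothetical)
    recipient: str               # group company receiving the data
    required_for_service: bool   # True only if essential to the service itself

@dataclass
class UserConsent:
    user_id: str
    opted_out: set = field(default_factory=set)  # (data_type, purpose) pairs declined

    def may_share(self, p: SharingPurpose) -> bool:
        # Data essential to providing the service may flow; anything else
        # flows only if the user has not opted out.
        if p.required_for_service:
            return True
        return (p.data_type, p.purpose) not in self.opted_out

    def opt_out(self, p: SharingPurpose) -> None:
        # Opting out must never block access to the platform itself,
        # so it applies only to non-essential sharing.
        if not p.required_for_service:
            self.opted_out.add((p.data_type, p.purpose))
```

In this sketch, the opt-out is recorded per data-type-and-purpose pair, mirroring the order's requirement that each data type be linked to its corresponding purpose.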
While the order tries to foster competition by empowering users, implementing it in a way that truly engages users in providing informed consent remains elusive, given the difficulty of tracking data and the complexity of data sets and privacy policies. Several research studies confirm that most users fail to read or understand the detailed explanations of data-sharing practices in privacy policies. A further challenge arises in India, which has neither implemented the Digital Personal Data Protection Act 2023 (“DPDPA 2023”) nor enacted a law on non-personal data to ensure big tech corporations effectively comply with data protection principles.
Even if one argues that India has at least developed a framework for personal data protection, in practice data collection, processing, and sharing transcend the tenuous, anonymization-based definitions of personal and non-personal data, making it difficult to govern mixed data sets through a single law. A balanced approach is therefore preferable: one that gives users standardized control over both types of data while allowing corporations sufficient flexibility to innovate and avoiding excessive state coercion later on.
What Could Be a Potential Techno-Legal Solution?
Since big tech firms hold both data types on their servers, the most effective way forward is to let them take the lead in building a techno-legal solution. The goal is to ensure users can continuously map out how and for what purposes their data is being utilized so they can give informed consent. This can be achieved through a transparent technological system that embeds data protection principles (the rights to track, delete, erase, rectify, and update data) in its design and tracks personal and non-personal data across platforms by categorizing each type. It differs from the concept of consent managers under the DPDPA 2023 in that it covers both data types and is operated by the data fiduciary itself rather than a third party. The evolved concept of consent managers might, of course, handle non-personal data in the long run. Even so, this solution can supplement any existing data protection model, as suggested by the Srikrishna Committee, albeit in the context of personal data.
To illustrate, companies could build a dashboard application where users register to track how their data flows through the ecosystem of a parent company’s subsidiaries or a data harvester’s clients. Because the companies themselves run the process, integrating such systems into their computing infrastructure and tracking the intersection of personal and non-personal data will be easier than it would be for third-party service providers. So if, as a consequence of a broad privacy policy, mixed data sets are collected and processed for dual purposes (as in Meta’s case), the dashboard can categorize and sub-categorize those data sets (e.g., community, service, health, location, financial) so that users can make informed decisions. It could also notify users in real time and seek their explicit consent before transferring their data to another platform. Moreover, an opt-out option should be available at all times, exercisable in the dashboard itself; this will be especially useful for users who do not use other platforms owned by the parent company. The dashboard is not a novel idea: several big tech companies, such as Spotify, Netflix, Microsoft, and Google, already offer privacy dashboards. However, most neither fully implement data protection principles nor are simple enough for ordinary users to truly engage with.
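The categorization and real-time consent steps described above can be sketched as follows. This is a rough illustration only: the category lists, field names, and the `ask_user` callback are all hypothetical stand-ins for whatever inventory and notification channel a real fiduciary would use.

```python
from collections import defaultdict

# Hypothetical mapping from raw field names to user-facing categories;
# a real deployment would derive this from the fiduciary's data inventory.
CATEGORIES = {
    "location": {"gps", "ip_address"},
    "financial": {"payment_method", "transaction_id"},
    "health": {"step_count", "heart_rate"},
    "service": {"device_model", "app_version"},
}

def categorize(record: dict) -> dict:
    """Group a raw record's fields into the dashboard's categories."""
    grouped = defaultdict(dict)
    for key, value in record.items():
        category = next(
            (c for c, fields in CATEGORIES.items() if key in fields),
            "uncategorized",
        )
        grouped[category][key] = value
    return dict(grouped)

def request_transfer(record: dict, destination: str, ask_user) -> bool:
    """Seek explicit consent before data leaves the originating platform.

    `ask_user` stands in for the dashboard's real-time notification
    (in-app prompt, push message, etc.) and returns the user's answer.
    """
    summary = categorize(record)
    return ask_user(f"Share {sorted(summary)} data with {destination}?")

record = {"gps": "12.97,77.59", "payment_method": "upi", "app_version": "2.24"}
print(categorize(record))
```

The point of the sketch is the ordering: data is grouped into legible categories first, and the transfer happens only after the user answers the prompt.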
A vital purpose of such technological solutions is to track complex data sets and data flows in a user-friendly manner with the help of data-tracking tools, consent visualization, statistics, and flow charts. A user-friendly interface will inform users of their rights and allow them to easily navigate data usage across platforms, categories, and timelines instead of wading through huge swathes of complex, unstructured data and privacy policies. This is especially true for data processed by machine learning algorithms. Such an interface would also promote data democratization, allowing people from different strata of society to engage with data and assert control over it. The proposition is significant from India’s perspective, where many people may not be technologically literate enough to understand their data rights, the broad legal language of privacy policies, or unstructured data. Put simply, if a user previously had X amount of control over their data, such an interface would raise it to X + Y, where Y reflects the added control that shifts the user into a stronger position.
Interestingly, in 2022, US Senators Mark Warner (D-VA) and Josh Hawley (R-MO) introduced draft legislation titled the Designing Accounting Safeguards to Help Broaden Oversight And Regulations on Data (DASHBOARD) Act, which would impose a legal obligation on commercial data operators such as Meta, Google, and Twitter to disclose the types of data collected, how the data is used, and its economic value. The bill obligated operators to give users the right to monitor and delete their data through a “clear and conspicuous mechanism”; failure to do so would constitute an unfair trade practice under the Federal Trade Commission Act (roughly analogous to India’s Competition Act). Although this essay does not argue for similar legislation, the bill can give regulators direction on how limited state coercion could be employed to implement this blueprint at a broader scale.
Simultaneously, such a system would allow companies to continue processing data without regulatory burdens that might hamper innovation and the ease of doing business in the country. Other stakeholders, particularly the government, can of course impose conditions through legislation and regulation. But instead of directing the private sector at every step, a broad framework of expectations can be shared with corporations through advisories, admonitions, and regular communications, which they can incorporate to meet data protection compliance without coercive measures. While there are endless possibilities for how such a technological system could be used in the long run, the basic idea must remain intact: allowing users to exercise their rights simply.
What are the Bottlenecks?
However, building such a system would be costly for a small data fiduciary and would require significant internal capacity to monitor such a complex process regularly. The system could therefore initially be narrowed to companies and data harvesters that export user data either to their own group platforms (such as Alphabet or Meta) or to third-party non-state buyers. Moreover, there will always be loopholes in a self-regulatory arrangement where a data fiduciary has the leeway to design such an application. Why would a company invest in a platform that might harm its own commercial interests? Even if this solution is adopted, data fiduciaries might dodge their privacy and competition commitments by limiting the information shared, avoiding legible explanations of the purposes for which data is taken, or designing a flawed interface that overwhelms users with information dumps.
A proliferation of unique user interfaces with differing functionalities across companies (as already seen in privacy dashboards) may also cause user fatigue. To address this, limited state coercion can be employed to standardize the basic items of such a system (scrutability, risk factors, data flows, an opt-out option, etc.), which might also enable interoperability in the long run. In any case, a solution in which data fiduciaries have the flexibility to integrate a techno-legal design into their existing infrastructure is preferable to engaging third-party consent managers or the excessive regulation seen in the EU.
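One way to picture the standardization suggested above is a minimal, common disclosure schema that every fiduciary's dashboard could export regardless of its interface, making the basic items comparable across companies. The field names and example values below are assumptions for illustration, not a proposed standard.

```python
import json

def disclosure_entry(data_type: str, purpose: str, recipient: str,
                     risk: str, opt_out_available: bool) -> dict:
    """Build one standardized disclosure row (hypothetical schema)."""
    return {
        "data_type": data_type,            # what is collected
        "purpose": purpose,                # why it is collected
        "recipient": recipient,            # where it flows
        "risk_level": risk,                # e.g. "low" / "medium" / "high"
        "opt_out_available": opt_out_available,  # scrutability of the choice
    }

# Example export a dashboard might publish in a machine-readable form,
# so third-party tools could compare disclosures across fiduciaries.
export = [
    disclosure_entry("location", "ads_targeting", "GroupCompanyA", "high", True),
    disclosure_entry("device_model", "crash_reporting", "platform", "low", False),
]
print(json.dumps(export, indent=2))
```

Because the schema is the same everywhere, the interoperability mentioned above follows almost for free: any consumer that parses one fiduciary's export can parse them all.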
The Way Ahead
Although Meta has challenged the landmark ruling before India’s National Company Law Appellate Tribunal, the order signals a renewed approach by regulators to establish violations through the prism of data protection principles. It has opened up scope for greater scrutiny of big tech companies in India that might abuse their dominant position by circumventing their obligations. The implementation of India’s data protection standards under the DPDPA 2023 and the DPDP Rules 2025 in the coming months will further push data fiduciaries to revisit their existing efforts and press for self-regulation so as to stay off the regulator’s radar. The techno-legal solution proposed in this essay therefore offers a pragmatic path forward: encouraging companies to adopt transparent, user-centric systems that empower users, ensure regulatory compliance, and sustain innovation without overreach.