Perspective

Tech Companies Must Rethink Public Data Sharing in the DOGE Era

Ben Neumeyer / Sep 18, 2025

Elon Musk speaking at the 2025 Conservative Political Action Conference (CPAC) in National Harbor, Maryland. (Gage Skidmore)

The consolidation of sensitive federal datasets under the Department of Government Efficiency (DOGE) has transformed the privacy threat landscape. By centralizing data once siloed across agencies, seemingly in disregard of legal requirements, and combining it with powerful analytics tools, the Trump administration has created an unprecedented capability for surveillance and political targeting. Monitoring of people’s movements and online speech has already produced chilling effects for individuals and communities, and news coverage shows the administration is exploring yet more use cases that may serve its political goals.

While public debate has focused largely on the role of government data, a critical blind spot remains: datasets that tech platforms voluntarily make public about user activity or trends on their services.

These data-sharing initiatives mix altruistic and strategic aims: they further the business and policy goals of tech platforms, which seek to cultivate goodwill or respond to political and other external pressure. However, the changed threat environment casts a new light on their potential for misuse.

Current initiatives predominantly focus on a small number of established use cases and privacy protections, including aggregated location and mobility patterns and user activity data, like trends in search or posting. However, their history is dotted with hard-learned privacy lessons, like the exposure of genetic data, individual search histories and sensitive military locations. Platforms also continue to pilot new initiatives, which bring new risks of data misuse. For example, Amazon-owned Ring is reportedly reviving a program to facilitate sharing live video data from its customers’ doorbells with law enforcement.

In this radically changed threat environment, the potential for misuse of data shared by platforms is much higher. As methods to match anonymized data back to individuals advance, the threshold for durable privacy protection keeps moving, and data shared by platforms is at risk of being ingested, combined with DOGE’s holdings and weaponized, enabling uses far beyond its original intent. Data does not even have to be re-identified to be put to use: trends related to movement patterns or disfavored speech could be used to prioritize areas or groups to target.

For users, these well-intentioned data sharing programs risk exacerbating the privacy threats felt under the second Trump administration, along with the resulting chilling effects. The risks extend to platforms themselves: privacy scandals can trigger waves of investigations, legislative activity and regulator inquiries, which will only intensify if the party in power changes in future elections. The resulting news cycles and loss of user trust can impact key consumer metrics and product roadmaps.

Given these risks, platforms that want to share data responsibly should re-evaluate their approach in response to this new environment. That includes revisiting their public data sharing practices and taking clear, public positions, whether by declining to continue them or by adopting stronger governance measures. Recent precedents offer a playbook for how they may do so.

How should tech platforms respond?

Platforms are constantly balancing privacy protections with considerations like efficacy or crisis response, and have defended those protections publicly even when external pressure calls for increased data sharing. Examples in this playbook fall into two main buckets.

First, platforms may stop sharing data or decline when requested, particularly when it poses heightened privacy risks or lacks sufficient utility. During the COVID-19 pandemic, both Meta and Google made it their policy to reject government requests for user-identifiable location data. Meta noted in a 2021 whitepaper that it had declined multiple government requests seeking real-time location data to support quarantine enforcement, citing both privacy concerns and the limited usefulness of the available data. Google similarly stated that it would not disclose individualized location, contact, or movement data.

Even prior to the pandemic, platforms made similar calls under pressure. For example, in 2016, Uber resisted a proposed New York City requirement to share granular pickup and drop-off data, warning that government-held mobility data could be hacked, misused or improperly disclosed. Clearly, platforms have precedents for drawing bright lines in response to data requests that pose significant privacy risks, especially when the value of the data is uncertain or when downstream misuse is plausible.

Platforms have also curtailed downstream sharing of data via contract terms. In 2016, under its previous leadership, Twitter restricted access to activity data available through its public and commercial APIs after civil-liberties groups reported that developers were marketing surveillance tools to law enforcement. In response, Twitter updated its developer terms to prohibit the use of its data for surveillance by law enforcement or any other entity.

While Twitter’s actions reflect another tool for platforms, it is one with limited efficacy. Twitter’s response was reactive, occurring only after it knew of misuse, and the safeguards it announced depend on ongoing investment in monitoring and enforcement. To its credit, Twitter did both at the time and publicly explained its actions in a blog post. While these restrictions remain in X’s developer terms, multiple subsequent complaints alleged that Twitter data was being accessed via developer tools for surveillance uses, raising questions about the consistency of enforcement – and these restrictions are even less likely to be meaningfully enforced under Elon Musk’s leadership after his 2022 takeover.

Second, platforms can share data that has been adequately protected against the broadest scope of possible privacy threats. During both normal operations and crisis response, platforms have offered aggregated or anonymized datasets designed to balance utility with privacy. Recognizing that no single technique guarantees protection, these initiatives have relied on Privacy-Enhancing Technologies (PETs), such as differential privacy, which can provide measurable assurances that data protections are suited to the threat environment. For example, a dataset can be anonymized via k-anonymity, which ensures that any one person’s record is indistinguishable from those of at least k−1 others, making it harder to pick individuals or small groups out of the dataset.
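To make the mechanics concrete, the sketch below shows one simple way a k-anonymity-style threshold can be enforced before an aggregated dataset is released: any cell describing fewer than k people is suppressed. The column names, threshold and use of pandas are illustrative assumptions, not a description of any platform’s actual pipeline.

```python
# Minimal sketch, not any platform's actual pipeline: enforce a k-anonymity-style
# minimum group size on an aggregated mobility table before release.
# The column names and the threshold K are illustrative assumptions.
import pandas as pd

K = 50  # hypothetical minimum group size

def aggregate_with_k_threshold(trips: pd.DataFrame, k: int = K) -> pd.DataFrame:
    """Count distinct users per (origin_region, destination_region) cell and
    suppress any cell that describes fewer than k people."""
    counts = (
        trips.groupby(["origin_region", "destination_region"])["user_id"]
        .nunique()
        .reset_index(name="n_users")
    )
    # Dropping small cells means no released row can single out an individual
    # or a very small group.
    return counts[counts["n_users"] >= k]
```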

COVID-era examples highlight this approach in action. Facebook released mobility datasets protected by differential privacy along with a minimum group-size (k-anonymity) threshold of 300, to minimize the risk of re-identifying individuals or groups like households, even if the datasets were to be combined with outside data. Google applied similar techniques to its Community Mobility Reports, explicitly noting that their intention was to protect the privacy of individuals as well as communities.
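The differential privacy component can be illustrated in a similarly simplified way. The sketch below adds Laplace noise, calibrated to a privacy parameter epsilon, to aggregated counts and then withholds small cells; the parameters and data layout are assumptions for illustration, not the actual methods Facebook or Google used.

```python
# Conceptual sketch of differentially private release of aggregated counts:
# add Laplace noise scaled to sensitivity/epsilon, then drop small cells.
# Epsilon, sensitivity, and min_count are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng()

def dp_release(counts: dict[str, int], epsilon: float = 1.0,
               sensitivity: float = 1.0, min_count: int = 300) -> dict[str, int]:
    """Return noisy counts; cells whose noisy value falls below min_count
    are withheld entirely."""
    released = {}
    for cell, true_count in counts.items():
        noisy = true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
        if noisy >= min_count:
            released[cell] = int(round(noisy))
    return released
```

Lowering epsilon adds more noise and strengthens the guarantee at the cost of accuracy; raising the minimum-count threshold offers similar protection for small groups.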

Separately, to address requests for contact-tracing support, Google and Apple jointly developed the Exposure Notification API, which enabled smartphones to relay information about potential exposures via Bluetooth without collecting GPS data or storing identities. Such design choices prevented sensitive user information from being accessible to governments or app developers. These examples show that platforms can be responsive to public stakeholders’ needs while incorporating robust, expansive privacy safeguards.
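The design choice can be illustrated with a deliberately simplified sketch. The real Exposure Notification system uses a specific cryptographic scheme (rolling identifiers derived from temporary exposure keys); the code below only captures the underlying idea that devices broadcast short-lived random identifiers and that matching happens on the device, so no location or identity ever needs to leave the phone.

```python
# Simplified illustration of decentralized exposure notification. This is NOT
# the actual Google/Apple cryptography; it only shows that matching can happen
# entirely on-device using rotating random identifiers, with no location data.
import secrets

def new_rolling_id() -> bytes:
    """Generate a short-lived random identifier to broadcast over Bluetooth."""
    return secrets.token_bytes(16)

def exposed(heard_ids: set[bytes], published_positive_ids: set[bytes]) -> bool:
    """On-device check: compare identifiers heard nearby against identifiers
    voluntarily published by users who reported a positive test."""
    return bool(heard_ids & published_positive_ids)
```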

How to deal with the current threat environment

While these precedents show that platforms have a playbook for ordinary risks, that playbook may no longer be sufficient given the unchecked recklessness of the threat actors in the executive branch. In this context, platforms should reassess how they use privacy frameworks to ensure they are protecting privacy in their public datasets.

When tech platforms share data externally, they should update their threat models to explicitly consider downstream pooling with sensitive government data. This may require revisiting even long-standing programs through fresh privacy impact assessments, red-teaming exercises or other forms of stress testing. For example: Could data about people’s movements be overlaid with data about government service usage for spurious investigations? Could activity and trend data be used to enhance social media monitoring in support of bad-faith deportation cases or crackdowns? A simplified red-team exercise of this kind is sketched below.
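As a hypothetical illustration of what such a stress test might look for, the sketch below joins a released, “anonymized” dataset with an outside dataset on shared quasi-identifiers and flags cells that become narrow enough to point at specific people. All datasets, column names and thresholds here are invented for the exercise.

```python
# Hypothetical red-team check for pooling risk: join released cells with an
# outside dataset on shared quasi-identifiers and flag cells that could
# narrow down specific people. Every name and threshold here is invented.
import pandas as pd

def flag_linkage_risk(released_cells: pd.DataFrame,
                      outside_records: pd.DataFrame,
                      max_matches: int = 5) -> pd.DataFrame:
    """Flag (region, time_window) cells that match very few outside records,
    and so could effectively identify who was present."""
    joined = released_cells.merge(outside_records, on=["region", "time_window"])
    per_cell = (
        joined.groupby(["region", "time_window"])
        .size()
        .reset_index(name="matches")
    )
    return per_cell[per_cell["matches"] < max_matches]
```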

With an updated understanding of these risks, leaders at tech platforms can then reassess the risk-benefit tradeoffs of data sharing programs, weighing civic or policy objectives against the potential for surveillance and harm. Some programs may no longer be justifiable and may warrant suspension or reduction. For programs governed by agreements like developer terms, platforms can consider updating their restrictions or implementing stronger enforcement as part of a comprehensive approach.

For initiatives that continue, platforms should incorporate insights from updated risk assessments into their technical privacy protections. This may include applying stronger anonymization, such as tighter differential privacy parameters or higher k-anonymity thresholds, or restricting access to certain types of data.

Organizational and political risks

Public pressure and internal leadership dynamics can limit platforms’ ability to adopt privacy-protective stances. Declining to share data during crises often provokes criticism, even when decisions are made to protect users. During COVID-19, for example, platforms’ refusal to release individualized location data drew backlash from civil society voices who saw them as usurping government authority.

Additionally, regulators are increasingly willing to use their authority for political ends, which could lead to unwanted scrutiny into changes to data-sharing initiatives. The heads of the Federal Trade Commission and Federal Communications Commission have not been shy about leveraging their agencies’ powers to achieve the Trump administration’s political agenda, even when far outside their traditional remit. In such an environment, policies designed to protect vulnerable communities could be portrayed as partisan, discriminatory or unlawful, creating added risk for platforms that attempt to act on privacy or human rights grounds.

Mitigating both of these risks requires not only technical justification, but also leadership that is committed to defending its positions. The prospects here are unclear: as executives who helped shape earlier approaches depart, new leaders may respond differently to pressure, preferring not to antagonize the current administration.

Imperfect solutions

Platforms will continue to balance competing priorities when they decide whether and how to share data publicly: they will have to navigate changing tensions between public benefit, risk of misuse, and short-term and long-term risks and goals. Precedents from the last decade show that platforms have the means to elevate privacy among these other considerations, even if they are imperfect in the face of the current threat environment.

Even with a thoughtful and well-calibrated approach, tech platforms may not be able to fully prevent data about their users from contributing to government surveillance and overreach: actors like the Department of Homeland Security can bypass platforms by purchasing data from less accountable third parties such as data brokers. But this fact does not absolve the major platforms of responsibility: the decisions they make still matter, and taking a maximally privacy-protective approach can reduce the risk of harm, even if it will not eliminate it.

Adapting to the current threat environment by reassessing sharing practices, strengthening privacy protections and resisting political pressure is essential. Doing so protects users and their communities, upholds civil liberties and ultimately serves the long-term interests of the platforms themselves.


Authors

Ben Neumeyer
Ben Neumeyer is an attorney and policy professional whose work focuses on privacy and data governance. Ben previously worked at Meta, where he led policy strategy in product areas including location data, wrist wearables, and privacy infrastructure. He holds a JD from William & Mary Law School and a...
