How Europe's Digital Omnibus Could Gut Privacy Protections
Itxaso Domínguez de Olazábal, Chiara Casati / Apr 1, 2026
A “move fast and break things” mindset is entering EU digital regulation.
For years, Big Tech companies have followed a simple idea: “move fast and break things.” Build quickly, fix problems later. This can sometimes work in software development, but it becomes dangerous when applied to people’s rights.
Today, personal data, often very sensitive, flows across many systems and is used to make decisions about jobs, credit, or access to services. When something goes wrong, the damage may already be done and hard to undo.
This is why the European Union created rules designed to prevent harm before it happens. Laws like the General Data Protection Regulation (GDPR), the ePrivacy rules, and the Artificial Intelligence Act require companies to build safeguards into their systems from the start. These rules focus on transparency, accountability, and risk management because fixing harm afterward is often impossible.
Proposals such as the Digital Omnibus risk moving the EU’s digital rules in the wrong direction. Framed as “simplification,” they could weaken key protections while increasing reliance on companies to police themselves. The proposals are advancing rapidly, with limited supporting evidence and little democratic public consultation, raising questions about accountability and oversight.
The Council is now scrutinizing the so-called Data Omnibus and has put forward a first compromise text, while Parliament has approved its position on the AI Omnibus, launching the trilogue process that will determine the law’s final form. Taken together, these parallel tracks illustrate a broader trend: rules can be rolled back quickly, yet addressing the consequences of weakened safeguards can take years, if it is possible at all. Now more than ever, it is important to reiterate that the safest way to safeguard the EU's digital rulebook is to reject the Digital Omnibus in its entirety.
When harm becomes invisible
These changes may sound abstract, but they affect everyday life.
One major issue is how “personal data” is defined. Under the current GDPR rules, data that does not directly identify you by name, such as browsing behavior, can still count as personal information and be protected if it can be used to identify you or relate to you within a wider data environment.
For example, imagine a company replaces your name with a random ID and records that “User123 looked at a pair of shoes.” On its own, this may seem anonymous. But when combined with other datasets across the online ecosystem and its various value chains, like data from advertisers or analytics data, it may still be possible to identify you.
Today, this data is protected because of that risk. Under the proposed Omnibus changes, the company might claim the data is not personal simply because it cannot identify you, even if others can.
This means the same data could be protected in one situation but not in another, despite the risks being the same. The impact is significant for online tracking: websites routinely monitor users via cookies and device identifiers. If these are no longer consistently classified as personal data, much of this tracking could fall outside regulatory safeguards, despite still observing user behavior. In practice, individuals could be tracked across sites and apps, profiled in detail, and targeted or influenced based on those profiles, without the protections intended to limit such practices.
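The linkage risk described above is straightforward to demonstrate. Below is a minimal sketch, using entirely invented data and hypothetical dataset names, of how a pseudonymous record ("User123 looked at a pair of shoes") becomes identifying the moment it is joined with a second dataset held elsewhere in the advertising value chain:

```python
# Sketch of a linkage attack. All data and names here are invented
# for illustration; no real service or dataset is implied.

# A retailer's "anonymous" clickstream: pseudonym -> product viewed.
clickstream = {
    "User123": "running shoes",
    "User456": "winter coat",
}

# A hypothetical ad partner's dataset mapping the same pseudonymous
# IDs to contact details (e.g. collected at login or checkout).
ad_partner = {
    "User123": "jane.doe@example.com",
}

def reidentify(clicks, partner):
    """Join the two datasets on the shared pseudonym, turning
    'anonymous' browsing records into records about a named person."""
    return {
        partner[uid]: product
        for uid, product in clicks.items()
        if uid in partner
    }

print(reidentify(clickstream, ad_partner))
# {'jane.doe@example.com': 'running shoes'}
```

Neither dataset identifies anyone on its own; the join does. This is why the GDPR's current test asks whether identification is possible across the wider data environment, not just within one company's hands.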
More automated decisions, fewer safeguards
The proposals also weaken protections around automated decision-making.
Today, decisions made by algorithms, such as whether you get a loan or access to benefits, are treated as high-risk and are allowed only under strict conditions. The new approach would make it easier to justify these systems as “necessary,” even when human alternatives exist. In practice, this could make automated decisions the default.
For example, a bank could rely on an algorithm to decide whether to approve a loan. Even if a human review exists, it might become a formality rather than a real safeguard. For individuals, challenging such decisions becomes harder precisely because the system is built to rely on automation from the start.
A recent example shows how this shift already works in practice. In 2025, Meta announced it would use user data from its platforms to train AI systems based on “legitimate interest” instead of asking for explicit consent. Most people were not aware that this was happening. Opting out required navigating complex steps, and civil society groups and, in some cases, regulators had to step in to help people exercise their rights. In practice, this meant that many users ended up contributing their data without ever making a clear or informed choice.
The proposed Omnibus rules would make this approach more common. By explicitly linking AI training and operation to “legitimate interest,” they risk creating a presumption that such uses are acceptable, even at a very large scale.
In everyday terms, it means that your data could be used continuously, behind the scenes, in ways you do not fully understand. And while you may still have the right to object, exercising it could be so difficult that, for most people, it barely works in practice.
Less for the worse?
The proposals also weaken protections for sensitive data, such as health information or political views. Companies could argue that removing such data from AI systems is too difficult or costly.
In practice, this means sensitive data might continue to be used, not because its use is justified, but because it has already been embedded in complex systems and is difficult to remove.
The same logic applies to the AI framework. One proposal would delay obligations for high-risk AI systems.
In practice, companies are still allowed to build and launch AI systems, even in high-risk areas like hiring, credit scoring, or public services. But if the legal obligations are delayed, they don’t yet have to comply with key safeguards designed to assess risks, document system performance, and enable accountability.
With this gap, companies can roll out systems without fully complying. The result is that AI tools affecting important life decisions — like whether you get a job interview or a loan — could already be in use before safeguards kick in. By the time the rules finally apply, these systems may be deeply embedded, widely used, and much harder to fix or remove.
A shift toward self-regulation
Taken together, these changes shift responsibility away from clear legal safeguards toward company self-assessment.
This creates a fundamental problem: most people, and even regulators, have little to no visibility into how data is collected or used. Without that information, harmful practices are difficult to detect and even harder to challenge. Yet individuals are expected to do both after the fact, often without the knowledge or resources required.
In this environment, those with the most resources benefit the most. Large tech companies have the legal expertise, technical knowledge, and capacity to navigate uncertainty. Individuals, smaller organizations, and public interest groups are left trying to keep up.
Simplification, in itself, is not the issue: clearer rules and less bureaucracy could help everyone. But here, simplification risks becoming a cover for removing protections. And when that happens, the main beneficiaries are the very companies that built their success on a “move fast and break things” approach.
So the question is no longer abstract: do we want our fundamental rights to be treated the same way?