Why Civil Society Is Sounding the Alarm on the EU’s Omnibus Rollback
Joshua Franco / Nov 19, 2025
European Commission President Ursula von der Leyen.
For years, the EU has taken a leading role in creating standards that protect our rights online. But the winds have shifted, and under the guise of “simplification,” a corporate-backed push to weaken digital rules is underway that threatens all of our rights, both on and offline.
Digital and human rights advocates, including Amnesty International, have documented the human impacts of new technologies, and it is clear from this work that stronger rights protections are needed more than ever. Yet the simplification agenda aims to roll these very protections back.
It is becoming increasingly clear that this process is leading towards the weakening of the AI Act, of data protection rules, and perhaps of much more. The Commission has also proposed a “Digital Fitness Check.” While we have not been told what this will mean in practice, it is most likely an exercise to identify further laws to be “simplified.” All of this is being undertaken under expedited procedures, without prior impact assessments asking how individuals and communities experience or are harmed by high-risk and emerging technologies, on the preposterous basis that laws protecting our rights can be pared back without affecting those rights.
GDPR – what's at stake?
A brief overview of the human rights at stake shows that the EU is moving in the wrong direction. One of the regulations in the crosshairs is the General Data Protection Regulation (GDPR). If you think GDPR is about cookie banners – think again. This landmark legislation is one of the key ways in which not only Europeans, but people around the world, are protected against abuses of their personal data by Big Tech as well as states. Though enforcement has been lacking, the potential for this law to serve as a bulwark against the voracious appetite of Silicon Valley’s unlawful surveillance-based business model is vitally important.
Without proper data protection, our data can be harvested at will, used to profile us in discriminatory or unfair ways, repackaged, combined, analyzed and sold and resold onwards by a massive and complex web of data brokers, and online advertising companies. It can also be shared or sold to state authorities, who can use it to profile us, put us under unlawful surveillance, deny us our rights, such as to social benefits, or even decide whether to arrest or detain us. This not only puts all of our human rights at risk, but also threatens national security, as location data and other sensitive personal data about government and security officials – as well as everyone else – is put up for sale around the world, opening us up to blackmail and mass surveillance.
Implications for the AI Act
Another key regulation being targeted is the AI Act, which provides guardrails for the development and use of artificial intelligence. The harms we all face from AI systems, which this law could help protect against, are massive. In Denmark, the authorities rolled out a new AI-powered system to detect fraud in their social benefits system. Instead of the anticipated benefits, Amnesty International’s research found that – as is so often the case with such systems – human rights ended up being undermined.
The system used variables such as links to foreign countries, or “unusual” housing compositions to flag potential cases of fraud, and this ended up disproportionately targeting people of migrant backgrounds and anyone whose way of life deviated from what was deemed as the ‘norm’ in Danish society. These people, as well as other members of marginalized groups, wound up being subjected not only to digital surveillance using their personal data, but also invasive analogue forms of surveillance such as so-called “duvet-lifting” aimed at determining whether a person might be cohabitating with a partner.
These sorts of harms are precisely what the EU’s new AI Act should help prevent. In fact, we at Amnesty International believe such systems should be defined and prohibited as “social scoring” systems under the AI Act. If the Omnibus continues in its current direction and the AI Act is weakened even before it is fully operational, we may have even less protection against such systems: proposed amendments would further weaken the already weak transparency requirements for high-risk systems, effectively allowing companies to self-certify whether an AI system should be deemed safe.
Nor is Denmark an isolated case. Our research on the “digital welfare state” shows that human rights harms, especially to the rights to social security and non-discrimination, are inherent in nearly all of these increasingly ubiquitous systems, in countries including the Netherlands, Serbia, France, Sweden, and the UK.
And the human rights threats from AI don’t stop there. In Hungary, legislative changes paved the way for the use of facial recognition technology by law enforcement in a range of new contexts, enabling blanket surveillance on peaceful assemblies, notably Pride Marches in Budapest and Pécs.
To protect against all this, what’s needed is stronger regulation and stronger enforcement. Even a forceful implementation of the AI Act would leave massive gaps that need to be addressed. Despite a concerted push by civil society, the final text of the AI Act fails to protect people around the world from the export of dangerous tech whose use is prohibited in Europe, and fails to protect the rights of people on the move. But the Omnibus clearly signals that the Commission is more interested in smoothing the way for corporate profits than in doing what’s needed to close these gaps and protect our rights.
What it would mean for the DSA and DMA
EU regulations also shape how large digital platforms affect our rights, and if the Digital Services Act and Digital Markets Act are brought into scope as expected – for example through the so-called “Digital Fitness Check” – this could mean a significant rollback as well. The risks from the profit-driven, surveillance-based, algorithmic curation of our online content cannot be overstated. Amnesty International research has demonstrated how this technology contributed to ethnic cleansing against Rohingya Muslims in Myanmar and to grave human rights abuses against Tigrayan people in Ethiopia, with Meta failing to moderate and, in some instances, actively amplifying harmful, discriminatory content on Facebook.
Amnesty has also repeatedly found that this technology – specifically TikTok’s ‘For You’ feed – can push children and young people into a cycle of depression, self-harm and suicide content. Young people in France interviewed for Amnesty research shared that, after they engaged with mental health-related content, TikTok served them an increasing stream of videos that normalized and even encouraged self-harm and suicide. Parents of children who had died by suicide described the horror of discovering the content TikTok had been pushing to their children.
Rationale for stronger regulation
Digital rights regulations in the EU offer crucial – but inadequate – protections against these sorts of harms. They need to be strengthened and enforced, not rolled back.
The whole simplification process is based on a flawed premise. The Commission seems to believe that rights are an obstacle to competitiveness and innovation, but real innovation means finding ways to make new technologies work for everyone’s benefit, without trampling on our human rights. The new wave of laws in Europe has started to make it possible to imagine a world where the power of big tech can be meaningfully constrained, where our rights to be free from endless profiling and discrimination can be a tool to rein in the abuses from states and corporate monopolies.
But instead of building on this progress, the Commission seems to be racing to appease corporate interests and build an “AI Continent,” tearing out the guardrails that protect our data – and therefore us – from being swallowed up by Big Tech’s voracious appetite for profits, at the cost of our environment, and our rights. We need to oppose this attempt to roll back protections in the name of simplification and tell the Commission that our human rights are not for sale.