One Year On, EU AI Act Collides with New Political Reality
Caterina Rodelli, Sarah Chander / Aug 7, 2025
Sarah Chander, Equinox Initiative for Racial Justice, and Caterina Rodelli, Access Now, are members of the #ProtectNotSurveil coalition.

Image: Power/Profit by Clarote & AI4Media / Better Images of AI / CC BY 4.0
In August of last year, the European Union’s landmark Artificial Intelligence Act entered into force, a world first in regulating the technology.
The law promised — if imperfectly and incompletely — to protect people from the most dangerous and discriminatory AI systems, while championing the “EU values” of trust, innovation, and fundamental rights.
A year later, the world in which this legislation was written is largely gone.
Since then, we have witnessed a dramatic shift in global and European politics driven by a transatlantic race for AI supremacy, a deregulatory agenda in Brussels, and a wave of militarization. These shifts aren’t background noise — they upend the assumptions that shaped the AI Act and force us to ask uncomfortable questions: Can we still talk about how “AI governance” can balance rights and innovation when those rights are no longer even part of the discussion?
The AI Act in the age of militarized tech
The AI Act was born out of a contradiction between two irreconcilable goals: regulating harmful uses of AI — especially in policing, migration, and surveillance — while aspiring to become a global AI superpower. By 2024, that contradiction could no longer hold.
After the publication of the Draghi report, which criticized Europe’s stagnating innovation and regulatory approach, the European Commission unveiled a sweeping deregulatory agenda that “simplified” the AI Act in the name of “competitiveness.” By June 2025, Commissioner Henna Virkkunen had confirmed that the AI Act’s few crucial safeguards could be diluted ahead of their 2026 implementation.
Meanwhile, the return of US President Donald Trump to the White House began with a pledge to invest $500 billion in private-led AI infrastructure and dramatically weaken US regulations. His administration also moved to purge so-called “woke AI” and accelerated the use of AI in surveillance, policing, and military operations.
In the EU, the same priorities are taking hold. Faced with pressure to compete globally, the EU is increasingly choosing revenue over rights. In the EU’s new Multi-Annual Financial Framework for 2028 to 2034, the Commission proposed massive increases in military and border budgets, while social programs face sweeping cuts. This means more public money for the technology, security, and military industries.
This redirection of public funds amounts to a taxpayer-sponsored blank check to the very industry that the AI Act was meant to regulate. Billions are being funnelled by the EU and member states into biometric surveillance at borders, predictive policing software, military-grade drone systems, and AI-powered crowd monitoring tools — all with minimal scrutiny and even less accountability.
The reality of AI governance
AI is not neutral. Its owners run a nearly trillion-dollar industry in which the largest government application is defense. From Gaza to the Evros border between Greece and Turkey, European funds have been leveraged by companies to support the development of AI technologies being used to control, target, and punish people. This is automated repression, and it’s booming under the EU’s watch.
What we are witnessing is not temporary tension — it's a revelation of what AI governance means in a militarized world.
Austria’s recent use of facial recognition to track climate activists and Hungary’s decision to legalize facial recognition at Pride marches are not one-off abuses. They’re previews of the future we are hurtling towards, where AI policy is dictated by military demand and private profit, not civil rights.
These abuses were ratified by European legislators. Under the AI Act’s current loopholes, law enforcement and migration control authorities benefit from vast derogations, while member states can invoke national security to bypass core protections. Predictive policing, risk-scoring in migration procedures, and biometric categorization based on proxies for race or ethnicity all remain alarmingly possible. Emotion recognition also remains permitted for use by law enforcement and migration officials. Meanwhile, European states have continued to expand surveillance frameworks — particularly those that target migrants and racialized and marginalized communities.
Towards a tech policy for people, not the security industry
In moments like this, it becomes clear that EU policy mirrors the interests of those in power, and those interests are not ours.
We need to stop pretending that rights can be balanced against profit, or that expansive deregulation can coexist with dignity. This is not a fight for the best version of the AI Act. It’s a fight against a political agenda where surveillance, control, and extraction are sold as innovation.
That means rejecting the idea that competitiveness justifies cutting protections. It means pushing back, strengthening bans on mass surveillance, challenging the vast digital border systems Europe deploys to prevent migration, and holding governments accountable when they fund private surveillance with public money.
And it means something deeper too: we need visions of how we spend public resources that respond to the needs of everyday people, not the corporations shaping our world. Tech policy should be rooted not in military logic or market efficiency, but in care, equity, and justice.
The AI Act will only become fully applicable in August 2026. The next 12 months are pivotal. Civil society, journalists, researchers, and activists must treat this not as a moment of celebration, but as a critical window to resist the erosion of hard-won protections.
We cannot afford to sleepwalk into a future where “AI governance” is just a euphemism for automated repression. Tech legislation needs to work for people, not for profit.