Orbán’s Defeat in Hungary Shows Disinformation Is Not a Political Magic Trick
Zsófia Fülöp, Szilárd Teczár / Apr 20, 2026
Hungary's Prime Minister Viktor Orbán reacts after receiving the results of a parliamentary election in Budapest, Hungary, Sunday, April 12, 2026. (AP Photo/Petr David Josek)
When a politician seen as spreading more disinformation than their opponent wins an election, it's tempting to conclude that the falsehoods made the difference, and that fact-checking failed. When an election produces the opposite outcome, the temptation runs the other way.
We should resist both reflexes. The goal of counter-disinformation work is not to determine electoral outcomes. It is to give voters reliable information and the tools to evaluate suspicious claims — whatever they then decide to do with that information. Hungary's April 2026 parliamentary election illustrates the point.
On April 12, the state-backed disinformation machine of Viktor Orbán and his Fidesz party competed against the political newcomer Péter Magyar and his party, Tisza.
As fact-checkers, we check statements from both sides against the same standards, but in our analysis we should avoid false equivalences. Fidesz falsely accused its opponents of intending to reintroduce military conscription and send young Hungarians to fight in Ukraine, using manipulatively edited videos as “evidence.” Tisza, in its program, misleadingly presented the price increases of some commodities. These are not the same kinds of disinformation, and not only because Fidesz’s claims were echoed by dozens of government-controlled propaganda outlets and government-aligned influencers.
Still, Magyar and Tisza won resoundingly.
Should we pat ourselves on the back and take comfort in the knowledge that we battled disinformation effectively? It’s not so easy.
Assessing the effectiveness of fact-checking and other forms of counter-disinformation work against specific political outcomes misses their goal entirely, which is not to persuade people whom to vote for, but to provide them with reliable information and help them verify suspicious claims they might encounter in the future.
More fundamentally, attributing an election outcome to the effectiveness or futility of disinformation is empirically unprovable. People form their voting intentions in far more complex ways than by hearing and believing false claims. Their own economic situation and that of the country play a crucial part, as do the personalities of the candidates. It also matters how relevant a specific piece of disinformation is: people who voted for Fidesz in 2022 might still buy some of its lies about Ukraine, but with the war in its fourth year, they might find other issues, like the state of the health care system, more pressing when they make their voting decisions.
The most important lesson for policymakers from Orbán’s defeat is that they should not treat disinformation as a political sleight of hand which, if not confronted by an equally powerful counterforce, will invariably sway voters in the intended direction. Perhaps it’s time to tone down the securitized or even militarized rhetoric around the issue (think of hybrid warfare) and the excessive attention to electoral periods. Counter-disinformation work should be supported continually, not to prevent certain election outcomes but to give voters the chance to make an informed choice, whatever that might be.
On top of this basic lesson, we offer three further takeaways connected to three phenomena we observed during this election campaign: Russian interference, the use of generative AI and paid political advertisement.
When the press reported on Russian involvement in the Hungarian election campaign, it caused major concern. But when we encountered disinformation operations that were linked to known Russian groups, we were surprised by how weak the feared foreign interference looked next to the domestic disinformation network around Fidesz.
We analyzed several disinformation campaigns that researchers at the Gnida Project attributed to the Storm-1516 group. The recipe was always the same: create a fake news site and publish articles targeting leading figures of the Tisza Party. One such target was Ágnes Forsthoffer, vice-president of Tisza, who was falsely connected to Jeffrey Epstein and his human trafficking circle on the strength of a clumsily forged email. The fake articles were so far-fetched that not even the pro-government media picked them up, and when suspicious Facebook pages ran them as ads, they reached 100,000 users at most.
If that’s all the Russians could do for their most important ally in the EU, we must question whether their influence was really so decisive elsewhere. Aren’t we actually helping Russian disinformation by exaggerating it and portraying it as all-powerful, as leaked emails from the notorious Social Design Agency suggested as early as 2024? Wouldn’t it be more effective to point out the limits, and sometimes the ridiculousness, of Russia’s efforts?
AI-generated images and videos were omnipresent in the 2026 campaign – both the governing party and its opponents used them to spread their messages. The AI video of a Hungarian soldier shot in the head on the Ukrainian front while his crying daughter waits for him to come home became a world-famous example of how Fidesz tried to manipulate voters emotionally. Because that’s what these videos did: they were easily recognizable as AI and less likely to be taken for reality, but the fearmongering images still stayed with viewers for quite some time.
An AI video campaign by a media outlet with ties to the opposition Democratic Coalition party created more realistic imagery of postal voters claiming to vote for Fidesz. While some viewers might have believed these AI-generated interviews really happened, the aim of the campaign was rather to incite hatred toward postal voters (mostly dual citizens living outside Hungary), even among those who recognized the videos as fakes.
These examples show that although AI-generated content and disinformation can go hand in hand, it’s not useful to conflate the two. AI content can contain false or misleading information; it can itself be misleading in the sense that people take it as reality, but more often, it is an illustration of a political message aiming to influence voters emotionally. Policymakers should bear this in mind when they think about regulating the use of AI in political campaigns.
When, in October 2025, as a reaction to new EU transparency rules, Meta and Google banned political ads on their platforms in Europe, some analysts feared that the ad-free environment would favor extremist content and disinformation, while smaller political parties and more nuanced messages would fail to reach their audience. This election demonstrated that in Hungary, the policy change had the opposite effect.
During previous campaigns, Fidesz and its proxies poured staggering amounts of money into online advertising, carpet-bombing users with propaganda messages that were impossible to avoid. You literally couldn’t watch a YouTube video without also sitting through a pro-government political ad. Not this time. Although Fidesz allies and even Fidesz politicians managed to circumvent Meta’s ad ban, the overall volume of ads decreased considerably, and every analysis concluded that Tisza generated more engagement online than Fidesz. The campaign disproved the conventional wisdom that extremist content and disinformation always win the organic race for attention.
With the end of Orbán’s 16-year rule, we hope the normalization of the Hungarian information ecosystem will begin. But disinformation will certainly not disappear. We should be clear about what it is and what it’s not, assess its potential effects and limitations sensibly, and help citizens outsmart it, irrespective of political winds or electoral cycles.