Over the past year, European debate has been filled with warnings that generative AI will “break” democracy and turn the 2024–2029 electoral cycle into an information crisis. Yet the recent evidence paints a far less dramatic picture. In the 2024 European Parliament elections, AI-generated content accounted for only 1.1% of verified disinformation (Casero-Ripollés et al., 2025), and a field experiment on digital campaigning in Europe found that 17 million online impressions translated into less than a one percentage point change in vote share (Hager, 2019). The numbers do not match the mood.
This is not an argument for complacency about AI, nor a denial of the governance challenges it poses. It is a call to be precise about where the real democratic risks lie. The main threat to European democracy does not come from a new content‑generation tool, but from the slow erosion of the institutions that are supposed to protect elections: independent media, impartial electoral administration, and credible checks on executive power. If policymakers treat AI as the central danger, they risk fighting the most visible problem rather than the most consequential one.
A closer look at the 2024 European Parliament elections reveals just how wide the gap between narrative and data has become. A detailed audit of verified misinformation found that only about 1.1% of the cases involved AI‑generated content (Casero-Ripollés et al., 2025). Most false or misleading posts were still old‑fashioned fabrications or de‑contextualised material pushed by political actors around emotionally charged themes like migration and electoral integrity. In other words, the architecture of the problem hasn’t changed nearly as much as the headlines suggest.
The same pattern appears when we look at persuasion. One of the largest field experiments on digital campaigning in Europe, run during the 2016 Berlin state election, tested what happens when a party pours resources into online ads on Facebook and Google. The campaign generated roughly 17 million impressions. The result: a vote‑share increase of around 0.7 percentage points, at a cost of several euros per additional vote. That is not nothing, but it is a long way from the idea that a clever algorithm can decisively swing an election.
Taken together, these findings don’t absolve AI of risk, but they do help us scale it correctly. They suggest that generative tools are, for now, layered on top of a much older set of vulnerabilities: polarised public spheres, partisan media ecosystems, and parties willing to stretch the truth.
These deeper vulnerabilities aren't abstract — they're the concrete institutions that keep democracies stable. When political scientists examine why democracies decline today, they don't find sudden technological shocks. Instead, they see something much slower and more dangerous: constitutional retrogression — elected leaders gradually hollowing out the guardrails that should check their power (Huq & Ginsburg, 2018).
Think of Hungary and Poland over the past decade. Governments there didn't need AI deepfakes to weaken democracy. They stacked courts with loyalists, turned public media into government mouthpieces, used state resources to tilt elections, and harassed independent journalists and NGOs through endless lawsuits. These moves were all technically legal, individually defensible, but cumulatively devastating. The true pathology of democratic decline is this internal decay of the “immune system”—the hollowing out of the very courts, media, and legal guardrails designed to protect the body politic.
This is why treating AI as Patient Zero misses the diagnosis. Weak courts can't enforce electoral rules. Captured media can't provide balanced information. Polarised echo chambers make every lie land harder. Generative AI might make some of those lies prettier or cheaper to produce, but it doesn't create the underlying vulnerabilities. Those come from years of underinvesting: in judicial independence, media pluralism, transparent campaign finance, and basic civic education.
The pattern is clear: disinformation matters most where institutions are already fragile. Fix those institutions first, and no technology, however sophisticated, is likely to have the system-level impact that keeps policymakers awake at night.
If institutions are the real battlefield, then the current AI obsession becomes a dangerous distraction. Imagine pouring millions into new AI detection taskforces and emergency legislation while courts remain under pressure, public broadcasters turn into government bulletins, and political ad money flows opaquely and unchecked. That's not strategy — that's misallocation.
This AI‑centric path also carries hidden costs. Alarmist rhetoric about "unstoppable deepfakes" doesn't just shape policy — it erodes public trust itself. Research links exposure to misinformation with lower trust in media and greater political cynicism (Ognyanova et al., 2020), and constant warnings risk reinforcing that dynamic.
Worse, new AI‑specific rules risk overreach. Regulating "synthetic media" sounds reasonable until you realise it could capture satire, memes, or even legitimate political montages. The technology moves too fast for perfect regulation anyway — by the next election cycle, bad actors will have new tools that slipped through yesterday's net.
Europe doesn't need another round of AI-specific laws. It needs to use the powerful tools already on the books — especially the Digital Services Act for online platforms and the AI Act — to target the vulnerabilities that actually matter. First, enforce platform accountability through the DSA and the AI Act, which already contain the core obligations on supervision, risk management, and platform oversight.
Second, invest in democratic infrastructure. Fund networks of independent fact-checkers across languages and regions. Launch EU-wide programs teaching people how to navigate digital information environments — not just "spot deepfakes", but understand who's funding political ads and why certain stories spread. Support public-interest journalism that isn't beholden to governments or oligarchs.
Third, clean up political money online. Every political ad should live in a public library showing who paid, how much, and who was targeted. End micro-targeting loopholes that let campaigns exploit personal data for manipulation. Again, these are Digital Services Act deliverables, not new inventions.
This approach scales across technologies. The same measures that neutralise AI-generated fakes today will contain whatever comes tomorrow — because they target the institutional foundations that make manipulation matter: credible information from free media, transparent competition enforced by independent courts, informed citizens educated for the digital age. Democracies have survived technologies such as radio, television, and social media not by perfectly regulating each new platform, but by maintaining strong courts, free press, and public trust.
The choice is simple: chase technological shadows, or build enduring resilience. Europe knows how to do the latter. Now it just needs to choose it.
The evidence is now clear. Generative AI introduces real governance challenges, but it is currently not the existential threat to democracy that dominates headlines. The 2024 European Parliament elections showed AI-generated content to be a minor player in the disinformation landscape. The Berlin experiment showed that digital persuasion has measurable but modest effects. Meanwhile, the slow erosion of courts, media independence, and electoral transparency continues across member states — as seen in Hungary, where government-aligned entities control 78% of political media revenues (Urbán 2019).
This is Europe's strategic moment. The Digital Services Act and AI Act already provide powerful instruments. The task is not to write new laws for every emerging tool, but to enforce existing ones with focus and urgency: hold platforms accountable under the DSA, fund independent fact-checking networks and civic education, and make political advertising transparent under the EU's new regulation on the transparency and targeting of political advertising. These are not futuristic measures — they are practical steps that work against all forms of manipulation, AI-powered or not.
Democracies don't fall to algorithms alone. They erode when institutions weaken, and citizens lose trust. Europe knows this history. Strong courts, free media, and informed publics have weathered every previous media revolution. By prioritising institutional resilience over technological prohibition, the EU can build a democracy that adapts to whatever comes next — not just surviving AI, but demonstrating that liberal governance remains superior to manipulation in any form.
The path forward isn't complicated. But if we treat AI as the primary threat, we risk overlooking the underlying structures that enable any kind of disinformation to become politically consequential.
The debate doesn't end here. History cycles through the same challenges — new technologies, old institutional weaknesses. As Gal Costa sang in Caetano Veloso and Gilberto Gil's "Divino Maravilhoso": "É preciso estar atento e forte." ("It is necessary to be attentive and strong.") In today's information wars, vigilance and resilience remain democracy's eternal refrain.
~ The views represented in this blog post do not necessarily represent those of the Brandt School. ~
Casero-Ripollés, A., Alonso-Muñoz, L. & Moret-Soler, D. (2025) ‘Spreading false content in political campaigns: Disinformation in the 2024 European Parliament elections’, Media and Communication, 13, pp. 1–12. https://www.cogitatiopress.com/mediaandcommunication/article/view/9525
Hager, A. (2019) ‘Do online ads influence vote choice? Evidence from a large-scale field experiment’, Political Communication, 36(5), pp. 375–397. https://www.tandfonline.com/doi/full/10.1080/10584609.2018.1548529
Ognyanova, K., Lazer, D., Robertson, R. E. & Wilson, C. (2020) ‘Misinformation in action: Fake news exposure is linked to lower trust in media, higher trust in government when your side is in power’, Harvard Kennedy School Misinformation Review. https://misinforeview.hks.harvard.edu/article/misinformation-in-action-fake-news-exposure-is-linked-to-lower-trust-in-media-higher-trust-in-government-when-your-side-is-in-power/
Higor Michael Dias Bopp holds a Law degree from the Pontifical Catholic University of São Paulo and is a Master’s candidate at the Willy Brandt School of Public Policy. As a scholarship holder in the Konrad-Adenauer-Stiftung’s Nachwuchsförderung programme, he brings a Global South perspective to European policy debates, including governance and the energy transition.