Perspective

AI Impact Summit Commitments Must Counter ‘AI Unreality’ in Politics

Lindsay Gorman, Sharinee Jagtiani / Feb 27, 2026

India's Prime Minister Narendra Modi, seventh left, poses for photographs with chief executive officers of various AI groups during the AI Summit in New Delhi, India, Thursday, Feb. 19, 2026. (Indian Prime Minister's Office via AP)

The first AI Impact Summit held in the Global South last week shifted the focus from lofty conversations on catastrophic risk to scaled deployment. Humanoid robots, smart glasses with AI, “AI that understands your skin,” applications of AI in the classroom and intelligent farming filled Summit expo booths. Technology leaders collectively committed over $200 billion to strengthen India’s AI infrastructure, and the Summit’s declaration highlighted that wide-scale adoption of AI and AI‑driven applications holds unprecedented potential to accelerate economic and social development.

But for an aggressive AI build-out to succeed in realizing an equitable future, public interest technology that advances the core democratic values underpinning free societies must also be part of the deployment agenda. This includes the technologies that define the information and media environment.

In recent months, the rapid advancement of AI has been met with an equally swift surge of AI-generated images and videos tracking major news and political developments. After the United States operation to capture Venezuelan President Nicolás Maduro, AI-generated images circulated online purporting to show him being escorted off a plane by two US law enforcement agents. From Iran, AI-generated videos depicting both pro- and anti-government rallies spread across Instagram. One was viewed 60 million times. When President Donald Trump last month threatened to “take” Greenland “the easy way or the hard way,” he shared an AI-generated image of the US flag flying over Greenland to illustrate the point. And in US domestic politics, AI videos depicted NYPD arresting ICE agents in the New York subway.

These instances are no longer a fringe phenomenon; AI slop is now a fact of political life. New tools such as OpenAI’s Sora and xAI’s Grok allow anyone with an internet connection to create and spread increasingly realistic fake depictions of current affairs.

The good news is that many policymakers see the threat and are developing common-sense regulatory approaches. When Grok-generated images depicting women and children undressed by AI proliferated across X earlier this year, nations from the UK to India opened investigations and pressed X for accountability. In the US, the Take It Down Act criminalizes the nonconsensual publication of intimate images, including "digital forgeries" such as AI-generated deepfakes, and requires online platforms to take action. In Europe, the AI Act imposes transparency obligations on certain platforms and AI deployers to label synthetic media.

But alone, these measures are fundamentally reactive. Content moderation, takedowns, and penalties occur only after deceptive material has already spread. In an information space designed for virality, that is often too late. This challenge is particularly acute for democracies that rely on trustworthy information to inform political participation.

To protect access to trustworthy information, democracies need to employ a wider toolkit — harnessing innovation itself for the public interest. A new generation of technical solutions focused on demonstrating authenticity, provenance, and trust, for example, can shift the response from reactive content moderation toward more resilient information spaces by design. Technology developers, in turn, can become essential partners by embedding democratic values such as transparency directly into product design—making authenticity easier to verify and deception harder to scale. To do so, they need a dedicated effort from high-level industry, political, and civic leaders to adopt, red-team, and scale these efforts.

One technical movement provides a strong starting point. Years ago, technologists cautioned about risks to democratic discourse if seeing were no longer believing. They formed the Coalition for Content Provenance and Authenticity (C2PA) and created a global standard to embed images and videos with metadata that tracks where and by whom that content was generated and how it may have been edited — from the time an image is captured by a camera or generated by AI to when it appears in a social media feed. The result is a tamper-resistant edit history tied to the content itself — transparency by design.

Today, many of these efforts are being operationalized. Major camera manufacturers as well as the Google Pixel 10 smartphone are embedding this authenticity technology into their cameras so that images and videos taken contain this metadata by default. Software platforms like Adobe’s Photoshop now allow creators to include provenance history in their content — increasingly valuable for recording ownership in an age of copyright confusion. And if an image with provenance metadata is posted to LinkedIn, the platform now displays it, clickable from the photo itself. Transparency is possible at each stage of the content production and consumption cycle.

Because these technologies seek to prove what is real rather than detect what is fake, their advances only matter if they are scaled and become the norm. News organizations should rapidly expand initial deployments of video authenticity and certified broadcasts that protect the integrity of their visual reporting. Today’s unauthenticated content should stick out like a sore thumb.

No one tool alone is a panacea, but the content authenticity movement shows that next generation technologies can embed democratic values directly into their design.

For democratic societies, the path forward lies in pairing public interest guardrails with a proactive innovation offensive, centered on design and deployment. Corporations, philanthropies, governments, and universities should fund competitions, grants, and research scholarships to support infrastructure, entrepreneurs, and next-generation engineers to build technologies with democratic values embedded by design; policymakers can advance clear guidelines and codes of conduct for product development; and governments can partner with the hacker community to ensure that solutions undergo rigorous red teaming to identify vulnerabilities, and are deployed with sensitivity to local contexts.

A pilot project we conducted at the German Marshall Fund demonstrated that technical tools require multistakeholder partnerships and sustained efforts to drive adoption. The India AI Impact Summit declaration echoes this focus, noting that secure, trustworthy, and robust AI is key to unlocking societal and economic benefits and that industry‑driven measures and technical safeguards are central in achieving this goal. With a handful of sessions on deepfakes and one focused specifically on content provenance technologies and the C2PA movement, India’s AI Summit provided initial grounding for this approach. These efforts need to be joined up directly with the innovation and deployment agenda coming out of the Summit. Put simply, coordinated international infrastructure needs to translate responsible AI statements into concrete, interoperable products and standards.

In an age of AI content everywhere all at once, defending democracy demands more than regulation. It requires embedding democratic values directly into technology products. Just as democracies have invested in a free and open internet, they must now invest in the systems and institutions that help citizens discern fact from fiction.

Authors

Lindsay Gorman
Lindsay Gorman is managing director and senior fellow of GMF’s Technology Program where she leads work on the US-China emerging technology competition, AI and democracy, and transatlantic innovation. She is a former senior adviser at the White House Office of Science and Technology Policy and a quan...
Sharinee Jagtiani
Sharinee Jagtiani is a Berlin-based senior officer with GMF Technology. Her work focuses on the geopolitics of technology, particularly the impact of the US-China strategic rivalry on global tech ecosystems, the role of middle powers in this evolving landscape, and the potential of technology to adv...
