Military AI: Lessons from Ukraine

Gulsanna Mamediieva / Mar 20, 2025

Elise Racine / Better Images of AI / Moon over Fields / CC-BY 4.0

The rapid integration of artificial intelligence (AI) into military operations has sparked debates about the role of autonomy in modern warfare. Ukraine’s experience offers a unique case study—while often portrayed as an AI-driven conflict, the reality is far more complex. AI plays an increasing role in Ukraine’s defense strategy, but genuine autonomy—where machines make independent battlefield decisions—remains out of reach. This article explores how AI is currently being used in Ukraine’s military operations, the challenges of achieving full autonomy, and the broader implications for defense innovation, ethical considerations, and global security.

The war in Ukraine is frequently mischaracterized as a showcase for AI-powered warfare, but this portrayal oversimplifies reality. While Ukraine has rapidly integrated AI-enabled technologies into its defense sector, warfare itself is far from AI-driven. The primary objective is to minimize human exposure to direct combat through the deployment of unmanned systems. Necessity drives this approach—Ukraine must conserve personnel, manage cognitive overload, and enhance operational effectiveness.

Despite recent advancements in AI, fully autonomous systems that can operate in unpredictable environments with minimal human intervention have yet to materialize on the battlefield. AI technology is not mature enough for such applications, and in many cases, there is no clear legal or policy distinction between “autonomous” and “unmanned” systems. As a result, the term “autonomous” is often applied to platforms that, in reality, rely on pre-programmed automation rather than independent decision-making.

That said, ignoring AI’s growing role in warfare would be a mistake. AI increasingly enhances specific military functions, such as analyzing drone footage, recognizing targets, and improving last-mile navigation. This evolution is already reshaping military operations — not yet by replacing human decision-making, but by augmenting it, increasing speed, accuracy, and efficiency. These advancements should already prompt global policymakers to take action. While the war in Ukraine has not yet become an AI war, it is laying the groundwork for future military applications of AI. Understanding this transition is critical to anticipating the ethical, strategic, and technological challenges ahead.

The development of military AI in Ukraine

Ukraine’s military AI development did not start with the full-scale invasion in 2022 but can be traced back to grassroots initiatives following Russia’s annexation of Crimea in 2014. During this period, volunteer groups and private-sector innovators filled gaps left by an underfunded military, developing key technologies such as situational awareness systems and drones for intelligence, surveillance, and reconnaissance (ISR).

By 2022, the Ukrainian defense sector faced an urgent need for advanced technology. The existential threat from Russia forced the rapid adoption of AI-driven systems on the battlefield. The government shifted from a passive observer to an active facilitator, promoting the deployment of commercial AI solutions and fostering partnerships with private companies. However, despite these advancements, Ukraine still lacks a cohesive, long-term strategy for fully integrating AI into its defense forces. To achieve this, Ukraine needs to create a comprehensive roadmap in collaboration with strategic partners and allies.

AI is already transforming multiple aspects of Ukraine’s military operations:

  • Unmanned systems – Aerial and ground-based drones are evolving from reconnaissance tools to strike platforms, with increasing levels of autonomy.
  • Autonomous navigation – Advances in GPS-denied navigation and drone swarming techniques are enhancing operational capabilities.
  • Situational awareness, command, and control – AI-powered platforms provide real-time intelligence and decision-making support, improving battlefield strategy.
  • Damage analysis and assessment – AI assists in post-strike damage analysis, aiding recovery planning and estimating costs.
  • Demining – AI-powered demining technologies are increasing the efficiency of landmine clearance operations, reducing risks to soldiers and civilians.
  • Training and simulation – AI-driven simulations create realistic combat scenarios tailored to soldiers’ learning needs, providing adaptable military training.

Institutional adaptation and policy framework

Initially, AI applications in Ukraine’s military were spearheaded by private companies and volunteer groups. However, as AI’s potential became clear, the Ukrainian government established specialized divisions to institutionalize military innovation. The Armed Forces of Ukraine, the Ministry of Digital Transformation, and the Ministry of Defense have launched key initiatives like the Center for Innovation and Defense Technologies (CIDT) and the Unmanned Systems Forces, aiming to integrate AI into the national defense strategy.

To accelerate AI adoption, Ukraine implemented policy frameworks that streamline procurement, fast-track AI product approvals, and promote public-private partnerships. These policies align military objectives with technological advancements while fostering a business-friendly regulatory environment that encourages innovation.

Programs like Brave1 Defense Tech Cluster and Army of Drones serve as incubators for AI-driven military solutions, fostering collaboration between startups, defense firms, and military users. This innovation model enables rapid testing and deployment of AI applications in real combat scenarios.

However, Ukraine faces challenges in sustaining AI research and development over the long term. Current initiatives are largely reactive, focused on addressing immediate battlefield needs rather than long-term strategic planning, which should also consider the implications for international humanitarian law and fundamental human rights. To build a resilient AI ecosystem, Ukraine must strike a balance between short-term wartime priorities and a broader vision for defense technology.

Ethical considerations and AI governance

Ukraine has embraced AI to address battlefield challenges while maintaining human oversight over lethal decision-making. As AI becomes more integrated into military decision-making, addressing questions concerning autonomy, human control, and accountability is becoming more pressing.

Ukraine prioritizes a human-in-the-loop approach, ensuring that AI systems support, rather than replace, human decision-making in military operations. However, rapid developments in military AI suggest that full autonomy is closer than ever, raising urgent global legal and regulatory challenges. Key concerns include:

  • Accountability – If an AI-driven system causes civilian harm, who is responsible? The operator, the developer, or the government?
  • Bias and targeting errors – AI relies on training data that may be incomplete or biased, potentially leading to misidentifications.
  • Escalation risks – AI-enabled weapons could react faster than human operators, increasing the risk of unintended conflict escalation.

As AI governance evolves, countries must align with international law to establish clearer rules for AI in warfare, particularly since collaborative research, data-sharing frameworks, and joint military technology initiatives are already underway. At a minimum, ensuring human oversight in AI warfare requires three critical safeguards:

  • Human overrides – AI-generated outputs must be reviewable and reversible if they violate ethical or legal principles.
  • Clear responsibility lines – establishing accountability for AI-driven decisions to prevent legal ambiguity.
  • Human judgment in critical operations – preserving human oversight at pivotal moments, particularly in lethal decision-making.
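The first safeguard, a human override gating AI-generated outputs, can be illustrated with a minimal sketch. This is not any fielded system; the names (`Recommendation`, `human_in_the_loop`) and the confidence field are purely illustrative assumptions, and the "human" here is a stand-in callable.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI-generated action proposal awaiting human review."""
    target_id: str
    confidence: float

def human_in_the_loop(rec, approve):
    """Gate an AI recommendation behind an explicit human decision.

    `approve` stands in for the human operator; no action proceeds
    unless it returns True, and every decision is logged to support
    later accountability review (the second safeguard above).
    """
    decision = approve(rec)
    log_entry = {
        "target_id": rec.target_id,
        "confidence": rec.confidence,
        "approved": decision,
    }
    return decision, log_entry

# The stand-in operator rejects any recommendation below a set threshold.
rec = Recommendation(target_id="T-042", confidence=0.61)
approved, entry = human_in_the_loop(rec, lambda r: r.confidence >= 0.95)
```

The point of the pattern is structural: the AI system can only propose, never act, and the approval step leaves an auditable record tying each outcome to a reviewable decision.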

Ukraine has partnered with Western tech firms like Palantir and Microsoft to accelerate AI integration into defense operations. However, relying on foreign AI solutions raises concerns about long-term sovereignty and security.

To ensure ethical and responsible AI development, Ukraine and its allies must:

  • Develop joint AI research and development initiatives that balance innovation with ethical safeguards.
  • Strengthen AI safety and compliance frameworks to prevent irresponsible use and escalation risks.
  • Establish international legal norms governing autonomous weapons and AI in warfare.

Conclusion

The war in Ukraine is not yet AI-driven, but it is shaping the future of AI in warfare. Military forces are increasingly integrating AI into operations to enhance efficiency, intelligence, and strategy. However, achieving full autonomy remains a distant goal, constrained by technological, legal, and ethical considerations. Ukraine’s experience provides a crucial case study in the rapid integration of AI in defense. The lessons learned today will influence global discussions on military AI, impacting battlefield strategies, ethical debates, and international security policies for years to come.

Authors

Gulsanna Mamediieva
Gulsanna Mamediieva is a Tech and Public Policy Fellow, Better Governance Lab Fellow, and Adjunct Professor at the McCourt School of Public Policy, Georgetown University. She is a recognized leader in digital transformation and tech policy and has over a decade of experience shaping digital governme...
