In the Ukrainian theatre, AI-enabled drones are reshaping how targets are found and struck. Target engagement rates have risen from 10–20% to roughly 70–80%; the figure comes from Bondar (2025), a CSIS field study drawn from Ukrainian datasets and operator interviews. Kyiv now intends for half of the drones procured in 2025 to carry AI guidance, up from less than 0.5%. Humans still designate targets; AI takes the last 100 to 1,000 metres. War is being rebuilt around machine-speed perception, while the institutions meant to govern it remain calibrated to slower decision cycles.
The shift is global. Mishra et al. (2025), in a Belfer Center white paper, note that worldwide military AI spending doubled from USD 4.6 billion in 2022 to 9.2 billion in 2023 and is projected to reach 38.8 billion by 2028. On lethal autonomous weapons, 129 states advocate a binding framework, 12 oppose it, and 54 have not declared a position. The US, UK, China and India each issue principles for responsible military AI, but the convergence is nominal. Beneath the shared vocabulary sit divergent doctrines.
For European readers, that divergence is concrete. Genini (2025), in the Maastricht Journal of European and Comparative Law, argues that the war in Ukraine and the second Trump administration have catalysed unprecedented integration of the EU defence industrial base, channelled through SAFE, EDIS, and a 65% buy-European clause. The architecture is framed as strategic autonomy, but it forecloses formal cooperation with NATO on a wider single market, and supranational momentum continues to clash with member-state reluctance. The ethics remain unsettled. Miller (2025), in Ethics and Information Technology, argues that human-out-of-the-loop systems should be prohibited, while some on-the-loop systems may be acceptable under restrictive conditions; he acknowledges the responsibility gap and holds that it is best addressed only through collective moral responsibility.
What ties these accounts together is a critique of speed itself. Baggiarini (2024), in the Australian Journal of International Affairs, argues that AI-enabled resort-to-force decisions undermine democratic legitimacy not because algorithms are opaque, but because algorithmic reason conceals through invisibility, anonymity, and fragmentation. The form of knowing AI imposes on war is structurally indifferent to the deliberative tempo democracies require. Kellner (1999), reading Virilio in Theory, Culture and Society, framed this lineage a quarter of a century ago: military technology becomes the matrix in which human practice unfolds, and the speed of the war machine erodes the political time within which the question of war is asked.