Generative AI Goes to War
And Iran is discovering what it’s like to fight without it
The Iranian pilot never saw the missile that killed him.
His Russian-built Yakovlev Yak-130, a jet trainer first flown in 1996 and introduced into service in 2010, was flying over Tehran when the aircraft simply disintegrated. The explosion appeared sudden, inexplicable. There was no visible attacker. Just a flash, then debris falling through the sky.
The same thing happened hundreds of kilometers away to the sailors aboard an Iranian naval frigate returning through international waters. One moment the ship was underway. The next it was torn open by a torpedo launched from a submarine no one ever saw.
The weapons themselves were not extraordinary. A missile from an F-35I “Adir”, Israel’s version of the Lockheed Martin F-35 Lightning II. A torpedo fired from a submerged American submarine.
What mattered was the system behind them.
The F-35 is often described as a fighter jet. In reality it is a flying sensor network — a data center with wings and bombs. It collects data from its radar, infrared sensors, satellites, and other aircraft, fuses the information, identifies targets, and presents the pilot with recommended actions. The human still authorizes the shot, but the machine increasingly decides everything that leads up to it.
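
To make that concrete, here is a minimal sketch of what such a fusion loop might look like in code. Everything in it is hypothetical: the class names, the correlation radius, the confidence arithmetic, and the thresholds are invented for illustration, not drawn from any real avionics software.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a sensor-fusion loop: several sensors report
# contacts, nearby contacts are correlated into tracks, and a firing
# solution is only recommended; a human still authorizes the shot.
# Nothing here reflects real avionics software.

@dataclass
class Contact:
    sensor: str          # "radar", "irst", "datalink", ...
    position: tuple      # (lat, lon) of the detection
    confidence: float    # the sensor's own confidence, 0..1

@dataclass
class Track:
    position: tuple
    contacts: list = field(default_factory=list)

    def confidence(self) -> float:
        # Independent sensors agreeing raises overall confidence.
        p_miss = 1.0
        for c in self.contacts:
            p_miss *= (1.0 - c.confidence)
        return 1.0 - p_miss

def fuse(contacts: list[Contact], radius: float = 0.05) -> list[Track]:
    """Greedy correlation: contacts close together become one track."""
    tracks: list[Track] = []
    for c in contacts:
        for t in tracks:
            if (abs(t.position[0] - c.position[0]) < radius and
                    abs(t.position[1] - c.position[1]) < radius):
                t.contacts.append(c)
                break
        else:
            tracks.append(Track(position=c.position, contacts=[c]))
    return tracks

def recommend(tracks: list[Track], threshold: float = 0.9) -> list[Track]:
    """The machine proposes; the pilot disposes."""
    return [t for t in tracks if t.confidence() >= threshold]

contacts = [
    Contact("radar",    (35.70, 51.40), 0.70),
    Contact("irst",     (35.71, 51.41), 0.60),
    Contact("datalink", (35.70, 51.42), 0.80),
]
for track in recommend(fuse(contacts)):
    print(f"Recommend engagement, confidence {track.confidence():.2f}")
    # A human still has to say yes before anything launches.
```

The point is the shape of the pipeline: many sensors, one fused track, a machine recommendation, and a single human yes or no at the end.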
For the Israeli pilot, the kill was incidental. The aircraft’s systems flagged the target, calculated a firing solution, and suggested engagement. The pilot approved. The missile launched and guided itself. The pilot then moved on to his main target over Iran.
Iran is discovering what it looks like to fight a war against militaries that are quietly integrating generative AI into the machinery of combat.
Despite a public spat between Anthropic and the Pentagon, the United States military is integrating large language models such as Claude into its Project Maven intelligence platform. Israel operates its own AI-assisted targeting systems, developed during years of conflict with Hamas and Hezbollah.
The objective is identical in both militaries: compress the time between information and action.
Modern military systems now ingest enormous flows of information: drone feeds, satellite imagery, intercepted communications, logistics movements, radar tracks. AI systems process that data continuously, identify patterns, and recommend targets.
Human commanders still approve strikes, for now. Against an adversary that also fields AI in war, such as Russia or China, that “human in the loop” requirement could erode. We are entering the age of autonomous weapons.
Even small product features in commercial AI hint at the direction of travel. Anthropic recently announced an experimental “auto mode” for its coding tools that allows Claude to run extended tasks without constantly requesting permission from developers. In software development, that eliminates interruptions. In warfare, the same pattern would shrink the pauses between analysis and action.
But the battlefield is fighting back
The problem is that the battlefield itself is becoming hostile to data.
Electronic warfare now saturates modern conflicts. Jamming, spoofing and signal denial are constant features of the environment. Systems dependent on GPS navigation or external communications often fail once those signals disappear.
The war in Ukraine shows that drones that function perfectly during testing frequently fail once exposed to sustained electronic warfare. GPS signals vanish. Command links collapse. Guidance systems malfunction.
Autonomous systems therefore need to navigate and operate independently — using onboard sensors, cameras and AI models rather than external signals.
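
What “operating independently” means in practice is easiest to see in code. The sketch below is a deliberately simplified navigation loop, with invented names and thresholds, that keeps a dead-reckoning estimate from onboard sensors and distrusts GPS fixes that vanish or jump implausibly. It illustrates the fallback logic, nothing more; it is not any real autopilot.

```python
import math
from typing import Optional

# Hypothetical sketch of GPS-denied navigation: the vehicle maintains a
# dead-reckoning estimate from onboard measurements (IMU, compass,
# odometry) and falls back to it whenever the GPS fix disappears or
# looks spoofed. Names and thresholds are illustrative only.

class Navigator:
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y            # best current position estimate

    def dead_reckon(self, heading_rad: float, speed: float, dt: float):
        """Advance the estimate using only onboard measurements."""
        self.x += speed * dt * math.cos(heading_rad)
        self.y += speed * dt * math.sin(heading_rad)

    def update(self, gps_fix: Optional[tuple], max_jump: float = 50.0):
        """Accept a GPS fix only if it roughly agrees with dead reckoning.

        A missing fix means jamming; a fix that jumps implausibly far
        from the onboard estimate is treated as possible spoofing.
        """
        if gps_fix is None:
            return "jammed: holding dead-reckoning estimate"
        dist = math.hypot(gps_fix[0] - self.x, gps_fix[1] - self.y)
        if dist > max_jump:
            return f"suspect fix ({dist:.0f} m off): possible spoofing, ignored"
        self.x, self.y = gps_fix          # trust the fix, correct drift
        return "gps fix accepted"

nav = Navigator(0.0, 0.0)
nav.dead_reckon(heading_rad=0.0, speed=30.0, dt=1.0)   # 30 m east
print(nav.update((31.0, 1.0)))     # plausible fix: accepted
nav.dead_reckon(heading_rad=0.0, speed=30.0, dt=1.0)
print(nav.update(None))            # signal gone: jammed
print(nav.update((500.0, 500.0)))  # wild fix: spoofing suspected
```

The design choice that matters is the cross-check: the onboard estimate drifts slowly but cannot be jammed, so it becomes the referee that decides whether the outside world is still telling the truth.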
But that autonomy introduces another vulnerability. Machines depend on data. And data can be manipulated.
Poisoning the machine
Artificial intelligence introduces a new domain of warfare: data integrity.
The same generative AI tools that help Western militaries process intelligence can also be used by adversaries to corrupt it.
Future conflicts may involve:
• synthetic communications traffic
• fabricated satellite imagery
• manipulated sensor feeds
• adversarial attacks designed to confuse targeting algorithms
• poisoned datasets inserted into intelligence systems
A machine trained on corrupted data can draw the wrong conclusions with perfect confidence. Instead of hiding from sensors, adversaries may increasingly attempt to feed those sensors misleading information. The battlefield becomes not only contested but deceptive.
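
A toy example makes the danger concrete. The sketch below trains a deliberately simple classifier twice, once on clean data and once on a training set an adversary has tampered with. The scenario, the numbers, and the confidence score are all invented for illustration.

```python
import statistics

# Toy illustration of training-data poisoning: a one-dimensional
# nearest-centroid classifier learns "decoy" vs "launcher" from labeled
# examples. An adversary who corrupts the training archive (relabeling
# real launchers as decoys and planting fake launcher sightings) shifts
# the learned centroids, and the model then misclassifies a genuine
# launcher while reporting high confidence. Synthetic numbers only.

def train(data):
    """Learn one centroid per class: the mean of that class's examples."""
    by_class = {}
    for x, label in data:
        by_class.setdefault(label, []).append(x)
    return {label: statistics.mean(xs) for label, xs in by_class.items()}

def classify(centroids, x):
    """Pick the nearest centroid; 'confidence' grows with the margin."""
    ranked = sorted(centroids.items(), key=lambda kv: abs(kv[1] - x))
    (best, c1), (_, c2) = ranked[0], ranked[1]
    margin = abs(c2 - x) - abs(c1 - x)
    return best, margin / (margin + 1.0)   # crude 0..1 score

clean = [(x, "decoy") for x in (1.0, 2.0, 3.0)] + \
        [(x, "launcher") for x in (8.0, 9.0, 10.0)]

# Poisoning: the real launchers are relabeled as decoys, and fabricated
# "launcher" sightings are inserted far from where the real ones sit.
poisoned = [(x, "decoy") for x in (1.0, 2.0, 3.0, 8.0, 9.0, 10.0)] + \
           [(x, "launcher") for x in (20.0, 21.0, 22.0)]

for name, data in (("clean", clean), ("poisoned", poisoned)):
    label, conf = classify(train(data), x=9.0)   # a genuine launcher
    print(f"{name:8s} model: real launcher -> {label!r} "
          f"(confidence {conf:.2f})")
```

The poisoned model is not merely wrong; in this toy it is slightly more confident than the clean one, because the fabricated examples are tidy and consistent. Nothing in the model’s own output signals that anything is amiss.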
Two war machines
The result is a collision between two fundamentally different models of warfare.
The American-Israeli system is built on information dominance. Sensors collect vast quantities of data, software platforms fuse the streams, and algorithms help commanders decide what to strike. Its strength is speed and precision.
The opposing model is designed to make that system blind.
Instead of advanced networks, it relies on concealment, decentralization, tunnels, cheap rockets and human couriers. Its objective is not technological superiority but opacity.
High-tech militaries attempt to map the battlefield. Asymmetric adversaries attempt to make the battlefield unreadable.
The next phase of AI war
For now the technological advantage remains firmly with the United States and Israel. Their systems process intelligence faster, identify targets more efficiently, and coordinate complex operations with unprecedented speed.
But the decisive contests of the next decade may not be fought only with better algorithms. They will be fought over the integrity of information itself.
Future wars may hinge on which side can poison the other’s data, manipulate its sensors, or deceive its machine-learning systems. In earlier eras the key question of warfare was who possessed the most powerful weapons.
In the era of artificial intelligence the question may become something subtler: which side can still trust what its machines see.