The Data Center Goes to War
Napoleon famously remarked that an army marches on its stomach. Today, it marches on its servers.
The modern military faces a trilemma: it must build massive data centers to power AI-driven warfare; it must fortify these static, stadium-sized targets against attack; and it must simultaneously push “edge” computing into the field, fitting the power of a warehouse into a tank and even an infantryman’s rucksack.
Civilian data centers process unicorn selfies. Military ones power the brain of the army: navigation, satellite imagery, secure comms, and targeting. Take them offline, and the modern military goes blind. No precision strikes. No coordination. Like horse cavalry charging machine guns.
The October 7 War exposed Israel’s reliance on foreign munition supply lines. But defense planners worry about a more subtle dependency. If an army loses the ability to crunch data—for navigation, code-breaking, targeting, and logistics—it is not merely slowed; it is lobotomized.
How To Attack a Data Center
The first problem is that data centers are easier to kill than their fortifications suggest. They are typically hardened against missiles, buried in bunkers or encased in reinforced concrete. But you don’t need to blow up a brain to kill it; you merely need to cut off its oxygen.
For a server farm, that oxygen is electricity and cooling. High-performance chips generate immense heat. Cut the municipal water supply or disable the circulation pumps, and processors throttle within minutes; within an hour, the facility goes dark. Sever the power lines or the fuel supply for backup generators, and computation ceases. Even if the lights stay on, cutting the fiber-optic cables turns a multibillion-dollar strategic asset into an isolated concrete mausoleum.
Thinking Inside the Box
The second constraint is physics. A tank commander under fire cannot afford the latency of the cloud. Sending sensor data to a distant bunker, waiting for an algorithm to process it, and receiving targeting data back takes milliseconds that the battlefield does not offer.
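The arithmetic behind that argument can be sketched in a few lines. All numbers below are assumed, illustrative placeholders (distances, inference times, and overheads are not measurements from any real system); the point is only that two-way propagation plus network overhead can dwarf the gain from faster remote hardware.

```python
# Back-of-envelope decision-latency budget with assumed numbers.
FIBER_KM_PER_MS = 200.0  # signal speed in optical fiber: roughly 200 km per ms

def decision_latency_ms(distance_km, inference_ms, network_overhead_ms):
    # two-way propagation to the compute site, plus model inference,
    # plus network overhead (queuing, serialization, encryption)
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    return propagation + inference_ms + network_overhead_ms

# Cloud: fast data-center hardware, but 1,000 km away over a contested network.
cloud = decision_latency_ms(distance_km=1000, inference_ms=20, network_overhead_ms=15)
# Edge: a slower on-board accelerator, but zero distance and no network hops.
edge = decision_latency_ms(distance_km=0, inference_ms=30, network_overhead_ms=0)
print(cloud, edge)  # 45.0 30.0
```

Even granting the cloud a faster model, the round trip loses; and unlike the on-board path, the network term is the one an adversary can jam or sever.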
The solution, currently being pioneered by firms like Oracle in conjunction with Israel’s defense establishment (as well as startups like Poolside), is to invert the architecture. Rather than streaming data out, the models are deployed in. “Edge AI” puts the brain directly into the tank, the drone, or the soldier’s gun sights. The AI arrives pre-trained, ready to execute decisions without phoning home.
The New Nightmare: Model Poisoning
This solution, however, introduces the most terrifying vulnerability of all: trust. Centralized models are easier to guard; distributing them across thousands of combat platforms creates thousands of attack surfaces.
The nightmare scenario is “model poisoning.” If an adversary can subtly corrupt the training data, they can turn an army’s strengths into liabilities. A tank’s targeting AI might be taught to ignore the thermal signature of a specific enemy tank, reporting “all clear” while an ambush prepares to strike. An autonomous supply convoy could be fed a corrupted navigation model that confidently routes it into a kill zone, all while displaying a safe green path on the dashboard.
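The mechanism is worth seeing concretely. Below is a toy sketch with invented numbers and a deliberately simplified two-feature “thermal signature” space (no real targeting system works this way): a nearest-centroid classifier learns what “hostile” looks like, and a handful of mislabeled training samples is enough to teach it that one specific signature is safe.

```python
# Toy illustration of training-data poisoning on a nearest-centroid
# classifier. Features are a made-up (size, heat) thermal signature.

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(sample, model):
    # return the label whose class centroid is nearest (squared distance)
    return min(model, key=lambda label: (sample[0] - model[label][0]) ** 2
                                      + (sample[1] - model[label][1]) ** 2)

hostile = [(5.0, 9.0), (6.0, 8.5), (5.5, 9.5)]   # known hostile signatures
clear   = [(1.0, 1.0), (1.5, 2.0), (0.5, 1.5)]   # background clutter

clean_model = {"hostile": centroid(hostile), "clear": centroid(clear)}

# Poisoning: the adversary slips its new tank's signature (around (8, 4))
# into the training set mislabeled as "clear", dragging that class
# centroid toward the very thing the model should flag.
poison = [(8.0, 4.0), (7.5, 4.5), (8.5, 3.5), (8.0, 4.5), (7.5, 3.5)]
poisoned_model = {"hostile": centroid(hostile),
                  "clear": centroid(clear + poison)}

ambushing_tank = (8.0, 4.0)
print(classify(ambushing_tank, clean_model))     # hostile
print(classify(ambushing_tank, poisoned_model))  # clear
```

Note what makes this insidious: the poisoned model still classifies ordinary hostiles correctly, so routine testing passes. Only the one signature the adversary cares about reads “all clear.”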
Worse is the prospect of systematic friendly fire: friend-or-foe identification algorithms corrupted so that they engage only friendly forces. Drone swarms and artillery batteries could fire on their own troops faster than human operators can intervene.
The new arms race is therefore not just about building bigger bombs or autonomous killer robots, but also about verification. The challenge is to prove that a model has not been given a sleeper command, designed to trigger only under specific conditions. In the wars of the future, soldiers’ lives will depend on the integrity of the code in their weapons.
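One building block of that verification is easy to state, a minimal sketch rather than a full supply-chain solution: before loading a model in the field, check that its cryptographic digest matches the one recorded when it left the trusted build pipeline. (The byte strings below are placeholders standing in for real weight files.)

```python
# Minimal integrity gate: refuse to load model weights whose SHA-256
# digest differs from the digest recorded at build time.
import hashlib

def fingerprint(model_bytes):
    return hashlib.sha256(model_bytes).hexdigest()

def load_if_trusted(model_bytes, trusted_digest):
    if fingerprint(model_bytes) != trusted_digest:
        raise ValueError("model rejected: digest mismatch, possible tampering")
    return model_bytes  # a real loader would deserialize the weights here

shipped = b"\x00\x01 model weights as built"
trusted_digest = fingerprint(shipped)

load_if_trusted(shipped, trusted_digest)  # untouched weights: loads fine

tampered = bytearray(shipped)
tampered[3] ^= 1  # a single bit flipped in the field
try:
    load_if_trusted(bytes(tampered), trusted_digest)
except ValueError:
    pass  # rejected, as intended
```

The limitation is the point: a digest check proves the artifact was not altered after it was built, but it cannot detect a sleeper command poisoned into the training data itself. That is why the verification problem extends upstream, to the provenance of the data and the behavior of the model, not just the bytes on disk.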