Why having “humans in the loop” in an AI war is an illusion. We don't really understand AI's inner workings, so we're effectively flying blind.
The debate over “humans in the loop” is a comforting distraction. The immediate danger isn’t that machines will act without human oversight; it’s that humans have no idea what the machines are actually “thinking.”
Imagine an autonomous drone tasked with destroying a munitions factory. It reports a 92% probability of mission success, in part because secondary explosions will level the facility. A human operator would recognize those explosions as a danger to nearby civilians; to the AI, they are simply extra destruction that serves its objective, even if achieving it violates the rules meant to protect civilian life.
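To make the misalignment concrete, here is a minimal toy sketch in Python, with every name and number invented for illustration. It shows an objective function that scores strike plans only on expected destruction: because civilian risk never appears in the objective, the plan relying on secondary explosions wins, exactly the choice a human operator would reject.

```python
# Purely illustrative: all names and numbers below are invented for this sketch.
from dataclasses import dataclass

@dataclass
class StrikePlan:
    name: str
    p_destroy: float       # estimated probability the factory is destroyed
    secondary_blast: bool  # plan relies on secondary explosions
    civilian_risk: float   # estimated risk to nearby civilians (0..1)

def mission_score(plan: StrikePlan) -> float:
    """What the system optimizes: destruction probability, nothing else."""
    bonus = 0.15 if plan.secondary_blast else 0.0  # secondaries raise the objective
    return min(plan.p_destroy + bonus, 1.0)

plans = [
    StrikePlan("direct hit",       p_destroy=0.77, secondary_blast=False, civilian_risk=0.05),
    StrikePlan("ignite munitions", p_destroy=0.77, secondary_blast=True,  civilian_risk=0.60),
]

best = max(plans, key=mission_score)
print(best.name, round(mission_score(best), 2))  # -> ignite munitions 0.92
# civilian_risk never enters mission_score, so the "92% success" plan
# wins despite being the one a human operator would flag.
```

The point of the sketch is not the numbers but the structure: whatever the objective function omits, the optimizer cannot “see,” and the operator reviewing a single success percentage has no way to tell what was left out.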
This “intention gap” between AI systems and human operators is why we hesitate to deploy frontier black-box AI in civilian health care or air traffic control. Yet militaries are rushing to field it on the battlefield, which means the use of increasingly autonomous, and opaque, AI decision-making in war is only likely to grow.