Special Operations Command

Death by A.I.

Ken Klippenstein

04/22/2026

One can say a lot of things about the rapid and chaotic adoption of artificial intelligence in the American military. But in this context, the term “autonomous warfare” is a euphemism for automated killing. (Autonomous intrinsically means acting independently, governing internally, or operating without external control.)

An unusually frank description of the role AI will play in the future comes from former Joint Special Operations Command chief Gen. Stanley McChrystal (ret.), who compared AI to an “infant Hercules,” invoking the Greek hero’s role as a killer even in infancy.

McChrystal says:

“And like the infant Hercules who strangled two snakes sent by Hera to kill him in his crib, AI will grow to be strong — and a part of almost everything we do going forward.”

What McChrystal doesn’t mention is what happened when Hercules grew up. He spent his life carrying out killings on the orders of Eurystheus — a weak, cowardly king who hid in a jar and never had to understand anything. Hercules’s overwhelming power substituted for strategy.

In the most famous of his myths, Hercules fought the Hydra, a monster that grew two heads for every one he cut off. He was strong enough to keep swinging, but sheer power alone only multiplied the problem. He had to change his approach entirely to win. It’s a useful parable for a military that has spent two decades perfecting decapitation strikes only to watch the threats multiply.

Welcome to the era of CombatGPT. Since 2022, the Pentagon has used AI in quarterly “target detection” exercises involving personnel from all six military service branches. The computers sift the ocean of information collected every day — every minute — and extract what the AI program deems most important, aggregating and geolocating the blips and dots into a potential target.

During the most recent Iran War, Middle East commander Adm. Brad Cooper felt it necessary to offer assurances that the use of AI still includes a “human in the loop” to make decisions. “Humans will always make final decisions on what to shoot and what not to shoot, and when to shoot,” he said. That statement, of course, contradicts the very idea of “autonomous.”