Will the military weapons of the future essentially be killer robots? Recent comments from a US Army official might spark such concerns: he acknowledged that the Army is actively looking into the development and use of completely autonomous weapons that don’t need human operators.

At a Defense Writers Group breakfast a few days ago in Washington DC, Bruce Jette, the Army’s Assistant Secretary for Acquisition, Logistics and Technology, argued that when it comes to operating weapons, the human factor could increasingly prove dangerous by its very existence. In other words, if everyone else goes all-in on AI-controlled weapons, it would be dangerous for the US to keep relying heavily on human-operated equivalents, which would be easily outmatched by machines.

“People worry about whether an AI system is controlling the weapon, and there are some constraints on what we are allowed to do with AI,” Jette said at the breakfast on Jan. 10. The problem with that, he said, has to do with reaction time. Indeed, that time itself can be thought of as a kind of weapon.

“If I can’t get AI involved with being able to properly manage weapons systems and firing sequences then, in the long run, I lose the time deal,” he said.

“Let’s say you fire a bunch of artillery at me, and I can shoot those rounds down, and you require a man in the loop for every one of the shots. There are not enough men to put in the loop to get them done fast enough … So how do we put not just the AI hardware and architecture and software in the background? How do I do proper policy so we [make sure] weapons don’t get to fire when they want and weapons don’t get to fire with no constraints, but instead we properly architect a good command-and-control system that allows us to be responsive and benefit from the AI and the speed of some of our systems?”

As part of preparations along these lines, and to give the issue deeper academic consideration, Jette’s office is working with the new Army Futures Command, which was put together to study AI-related battlefield uses. The command has also set up an AI center at Carnegie Mellon University.

For now, Jette is trying to sound an alarm that the US is potentially leaving itself to fight on the battlefield with one hand tied behind its back. These kinds of discussions certainly tend to raise fears of killer robots and evil AI systems taking over, the stuff of science fiction movies. Even so, the US military has argued for embracing AI, with some of its leaders insisting that the country can’t effectively compete militarily against adversaries like Russia and China without it.

However, according to the Military.com news site, “Concern over placing machines in charge of deadly weapons has prompted military officials [for now] to adopt a conservative approach to AI, one that involves a human in the decision-making process for the use of deadly force.” Pushing back on that caution, Jette stressed during his remarks that “allowing artificial intelligence to control some weapons systems may be the only way to defeat enemy weapons.”