It’s a common theme in science fiction – machines rising up against their human masters. But it could become a real threat, researchers warned at the recent World Economic Forum. Unlike today’s drones, which are still controlled by human operators, autonomous weapons could be programmed to select and engage targets on their own.
“It was one of the concerns that we itemized last year,” Toby Walsh, professor of artificial intelligence (AI) at the School of Computer Science and Engineering at the University of New South Wales, told FoxNews.com.
“Most of us believe that we don’t have the ability to build ethical robots,” he added. “What is especially worrying is that the various militaries around the world will be fielding robots in just a few years, and we don’t think anyone will be building ethical robots.”
Noted science fiction author Isaac Asimov famously penned the “Three Laws of Robotics,” which hold that “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
Such rules, researchers agree, would be necessary for any “ethical robot,” but it would be up to its builder to ensure that those ethics were programmed into it. However, in some regards this may be jumping the gun.
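To make that concrete, here is a minimal, purely illustrative sketch of what “programming those ethics in” might look like as a strict precedence check. It is not how any real system works, and every predicate in it (harms_human and so on) is a hypothetical placeholder for judgments that no one currently knows how to automate.

```python
# Illustrative only: a toy precedence check inspired by Asimov's Three Laws.
# Every predicate below is a hypothetical placeholder; nothing like this
# exists in fielded systems.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # would executing this injure a person?
    allows_human_harm: bool  # would inaction let a person come to harm?
    ordered_by_human: bool   # was this commanded by an operator?
    risks_self: bool         # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    """Apply the Three Laws in strict priority order."""
    # First Law: never injure a human, or permit harm through inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders that survive the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.risks_self
```

The hard part, of course, is not the precedence logic but computing those predicates reliably in the real world – which is exactly the capability Walsh and others doubt will exist before such weapons are fielded.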
“For the most part, weapons like this don’t exist today,” Paul Scharre, senior fellow and director of the Ethical Autonomy Project at the Center for a New American Security, told FoxNews.com. “Most systems are still fire-and-forget, and even the advanced systems are designed not to choose a target, but to correct to hit the target.”
Scharre, who presented a paper at the World Economic Forum, noted that the laws of war do not inherently prohibit autonomous weapons, but warned that it would be very challenging for such systems to comply with accepted rules of engagement. Even if that were possible, it is still generally agreed that autonomous weapons present serious moral and ethical challenges.
There are also technological concerns.
“Supposing that we are able to build ethical robots that follow the rules of law, there is an argument that this could be a good thing,” Scharre told FoxNews.com. “Ethical machines wouldn’t commit atrocities, for example, but that could be outweighed by the concern that we still can’t build any system that can’t be hacked.”
Today’s Remote-Controlled Systems
While no nation has actually deployed autonomous weapons, several, including the United States, have used unmanned vehicles in combat. The U.S. has utilized drones – or unmanned aerial vehicles (UAVs) – not only for surveillance but also as a way to remotely target and kill suspected terrorists.
The danger has been that UAVs may not be completely secure. In 2011, the Iranian military claimed to have “hijacked” an RQ-170 spy drone. While U.S. officials dispute the claim, the incident highlighted the threat of a remote device being hacked.
“There have been efforts to harden the data link’s encryption to make the connection with the operator more secure,” Huw Williams, editor of IHS Jane’s International Defence Review, told FoxNews.com. “It remains a concern; no encryption is perfect, and there is still the danger that a data link can be broken. The security level is incredibly high, but there are also other methods that can be used to disrupt the control of a remote system.”
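For readers wondering what “hardening the data link’s encryption” involves, the sketch below shows the general idea using authenticated encryption (AES-GCM) from Python’s third-party cryptography package. Real military links use classified protocols; the frame layout, sequence-number scheme and function names here are illustrative assumptions only.

```python
# Illustrative sketch of an authenticated, encrypted data link using AES-GCM.
# This is not a military protocol; it only shows why a hardened link resists
# both eavesdropping and tampering. Requires the `cryptography` package.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)  # shared by aircraft and ground station
link = AESGCM(key)

def send(command: bytes, sequence: int) -> tuple[bytes, bytes]:
    """Encrypt and authenticate one uplink frame."""
    nonce = os.urandom(12)               # never reuse a nonce with the same key
    aad = sequence.to_bytes(8, "big")    # authenticated sequence number defeats replay
    return nonce, link.encrypt(nonce, command, aad)

def receive(nonce: bytes, frame: bytes, sequence: int) -> bytes | None:
    """Decrypt one frame; any tampering makes authentication fail."""
    try:
        return link.decrypt(nonce, frame, sequence.to_bytes(8, "big"))
    except InvalidTag:
        return None  # reject forged or replayed frames instead of acting on them

nonce, frame = send(b"LOITER AT WAYPOINT 4", sequence=1)
assert receive(nonce, frame, sequence=1) == b"LOITER AT WAYPOINT 4"
assert receive(nonce, frame, sequence=2) is None  # replay under wrong sequence: rejected
```

The design point is that authentication, not just secrecy, is what stops a hijacker: a forged or replayed command fails verification and is discarded rather than obeyed.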
This danger could rise as the systems become more automated, even if the platforms aren’t entirely autonomous.
“We’re going to see more AI in remote systems, including drones,” Michael Blades, senior industry analyst for aerospace and defense at Frost & Sullivan, told FoxNews.com. “The sensors include LIDAR, radar, video and even acoustics, and these are getting more advanced. It isn’t much different from what we are seeing in autonomous cars right now.”
Moreover, just as there have been cases of connected and autonomous vehicles being hacked, there are now entirely legitimate concerns that someone could hack into unmanned weapons systems.
“We’re in the early stages of an arms race, where countermeasures are in turn countered to ensure that remote vehicles can’t be taken over,” Blades told FoxNews.com. “Everything has encrypted data links, but the threat is still there. When the first drones were flying over Iraq and Afghanistan, it was very possible for those being watched to easily access the video feeds. The data links are hardened, but we’re in an arms race now as others try to access them.”
Hacking isn’t the only concern.
“The security on today’s remote systems is incredibly high, so it isn’t something just anyone can do,” Williams explained. “But there could be efforts to disrupt them, to jam the controls, jam the GPS or otherwise disrupt the control of an aircraft through other means.”
Loitering Munitions
The first step towards truly autonomous weapons in actual use could be Israel Aerospace Industries’ Harop, a more advanced version of its Harpy system. It has been compared to a hawk in that it will circle an area and wait to strike its prey.
As a loitering munition, it can be launched like a missile and flown toward a target area, albeit with a human operator watching to determine whether it should strike. Its advantage is that it can loiter over the target area for up to six hours, waiting for the target to present itself. Whereas the Harpy was designed primarily to home in on enemy radar emitters, the Harop can be used to strike vehicles and other objects on the ground.
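The division of labor the Harop illustrates – the machine searches and proposes, the human authorizes – can be sketched as a simple control loop. The code below is a hypothetical illustration of that human-in-the-loop pattern, not IAI’s actual software; sensor, operator and airframe are invented interfaces.

```python
# Hypothetical sketch of a human-in-the-loop loitering munition:
# the munition loiters and flags candidates, but only a human operator
# can authorize a strike. Every name here is invented for illustration.

import time

LOITER_LIMIT_SECONDS = 6 * 60 * 60  # the article cites up to six hours of loiter time

def mission(sensor, operator, airframe) -> str:
    """Loiter over the target area; strike only on explicit operator approval."""
    start = time.monotonic()
    while time.monotonic() - start < LOITER_LIMIT_SECONDS:
        candidate = sensor.next_candidate()   # onboard seeker flags a possible target
        if candidate is None:
            continue                          # nothing spotted: keep circling
        # Critical step: the machine proposes, the human disposes.
        if operator.confirm_strike(candidate):
            airframe.dive_on(candidate)
            return "strike"
        # Operator declined: discard this candidate and keep loitering.
    airframe.abort()                          # time exhausted: break off safely
    return "abort"
```

Remove the operator.confirm_strike call and the same loop becomes a fully autonomous weapon – which is precisely the single step researchers worry a rogue actor would take.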
“This is a type of weapon that could choose its own targets by following a program,” Scharre told FoxNews.com. “But it can get very slippery. A self-driving car, by comparison, is about navigating traffic to keep the passengers safe, but it isn’t making decisions on its own.”
At present the machine still isn’t the one taking the kill shot.
“On the one hand, having the machine wait could allow for more targeted strikes that lessen collateral damage – instead of using a Hellfire missile, a smaller weapon could wait for the target to be out in the open,” added Blades. “And the kill shots still may not be given to a machine, at least not by anyone in the civilized world. However, it isn’t hard to see how a terrorist or other rogue actor would let the machine make its own decisions.”
by Peter Suciu