This is what happened when China let AI control a satellite

Published Apr 19th, 2023 5:57PM EDT
Image: JohanSwanepoel / Adobe

According to a new report from the South China Morning Post, Chinese artificial intelligence researchers have been playing things a little loose. Per the report, the researchers ran an experiment in which they handed an AI control of a satellite’s camera for 24 hours.

The satellite in question was Qimingxing 1, a small Earth-observation satellite. The AI wasn’t given control of the spacecraft’s systems; it couldn’t change the satellite’s orbit or maneuver it in any way. Instead, the AI had full control only of the satellite’s camera.

Researchers found that the AI pointed the camera at some unusual targets during the experiment. One target the AI chose was Patna, an ancient city on the Ganges River in India. Patna is home to the Bihar Regiment, which was involved in a deadly border clash between China and India in 2020.

During the experiment, the AI also used the satellite’s camera to observe the Japanese port of Osaka, known for occasionally hosting US Navy vessels, SCMP noted in its report.

Image source: AndSus / Adobe

This is reportedly the first time an AI has been given free rein over an observational satellite without any human input in the form of prompts or assigned tasks. The researchers acknowledge that such an experiment is considered “rulebreaking,” but say they were prepared to face whatever consequences it brought.

The entire point of the experiment was to see how effectively AI could make use of China’s many remote-sensing satellites, which often sit idle or serve low-value roles. Still, the selection of militarily significant locations does raise some questions.

While experiments like this can yield useful information, they also feed the growing fears around AI. Some remain concerned about an AI threat to humanity, though it’s worth noting that this AI cannot think for itself, so it wasn’t selecting those targets out of any kind of violent malice.

It’s possible the AI in the experiment was trained to look for military targets, or that it chose those targets for other reasons entirely. Unfortunately, the AI was not designed to explain its choices, and the researchers haven’t even said what type of AI system it is.

Josh Hawkins has been writing for over a decade, covering science, gaming, and tech culture. He is also a top-rated product reviewer with experience in extensively researched product comparisons, headphones, and gaming devices.

Whenever he isn’t busy writing about tech or gadgets, he can usually be found enjoying a new world in a video game, or tinkering with something on his computer.