
OpenAI tech will power killer drones – how scared should we be?

Published Dec 5th, 2024 7:43PM EST
Anduril's Bolt-M drone.
Image: Anduril


If you’re worried about AI as it relates to our impending doom, you won’t like the news that ChatGPT tech will power swarms of killer drones from Anduril. On Wednesday, Anduril announced the partnership with the most talked-about US AI company.

Many people will also be quick to observe OpenAI’s rather abrupt pivot toward the dark side. Earlier this year, a string of departures from the company raised questions about its commitment to developing safe AI. OpenAI has since decided to renounce its non-profit roots and chase profits like most AI firms.

Partnering with a defense contractor will only deepen the worry among AI skeptics about OpenAI’s once-noble intentions for the future of artificial intelligence.

I have to say I do share some of those views. I don’t think AI will bring about the end of the world in the near future, but I’m aware of the various risks different versions of AI pose to society. Giving AI access to any sort of weapon is one of those risk scenarios. Things can go wrong, especially with nascent tech like AI.

However, there’s also a sense of relief in this. Make no mistake, some AI firms will get into the military game, whether it’s OpenAI, Google, Anthropic, or a different entity. And it probably needs to happen sooner or later. The advantage of hearing that AI companies in the Western world are doing it is exactly that: We’re hearing about it.

I’m sure that less democratic nations working on their own AI-powered robot armies are conducting similar AI experiments if they’re not already doing so. And we won’t necessarily find out before the fact. Not to mention that the Russia-Ukraine war has proven just how important and lethal drone warfare can be, and that was well before ChatGPT went viral.

The involvement of AI in warfare is inevitable, no matter how much we’d like to pretend otherwise. There’s probably already a race to have AI improve various aspects of the military before enemies can have similar systems in place.

Before we start worrying about the AI wars of the future, we’ll just have to wait and see what Anduril and OpenAI share next, assuming they’re willing to share anything about their joint work on AI drones.

For the time being, the announcement uses exactly the kind of language we’d expect from such a partnership. The drone maker says the strategic partnership with OpenAI will help it “develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions.”

How will it work? Well, Anduril gives us a basic idea of what it’ll do with OpenAI’s ChatGPT-like tech:

The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time. As part of the new initiative, Anduril and OpenAI will explore how leading edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness. These models, which will be trained on Anduril’s industry-leading library of data on CUAS threats and operations, will help protect US and allied military personnel and ensure mission success.

Anduril also mentions the “accelerating race between the United States and China to lead the world in advancing AI” as a reason to seek the help of organizations like OpenAI.

Sam Altman, the CEO of the ChatGPT creator, offered a statement filled with reassuring language about OpenAI tech used in the military sector:

OpenAI builds AI to benefit as many people as possible and supports U.S.-led efforts to ensure the technology upholds democratic values. Our partnership with Anduril will help ensure OpenAI technology protects US military personnel and will help the national security community understand and responsibly use this technology to keep our citizens safe and free.

Then again, Altman presided over the departure of many high-ranking OpenAI researchers who were in charge of reining in the AI. You have to keep that in mind whenever he talks about safe AI, especially safe AI for the military.

As for the killer robots that OpenAI’s models will power, Gizmodo notes that most of them are defensive drones developed to protect US service members and vehicles.

However, Anduril also makes a kamikaze drone called Bolt-M (top photo) that features “lethal precision firepower” and is capable of “devastating effects against static or moving ground-based targets.” That drone is powered by the company’s own AI. In the future, OpenAI’s tech might also play a role in that sort of offensive drone.

The video below shows Bolt-M in action in various roles, including striking an objective:

Chris Smith Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2008. When he’s not writing about the most recent tech news for BGR, he brings his entertainment expertise to Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming almost every new movie and TV show release as soon as it's available.