
The thing that scares me about generative AI, even more than ChatGPT coming for my job

Published Apr 27th, 2023 10:36PM EDT
Image: Hacker working on a laptop (krisanapong detraphiphat/Getty)


If you thought “fake news” was a problem during the 2016 and 2020 elections, get ready. You haven’t seen anything yet, thanks to the generative AI era we’re now in and the Wild West of content creation looming on the horizon, as ChatGPT and its burgeoning alternatives quickly render obsolete an already half-dead old maxim. You know, the one about how “seeing is believing.”

For now, we’re still in the early days, when much of the news coverage of what generative AI is capable of focuses on novelty. Like the way a computer can generate a convincingly Drake-sounding song from scratch — words, music, and all. Or produce a double-take-inducing image of Pope Francis, seemingly wearing a stylish white puffer jacket. Examples like those make for hard-to-resist copy. Less so, the hand-wringing from the media about how AI is coming for our jobs.

What about when it comes for our elections and political leaders, though?

In recent days, a Republican ad maker pitched his firm to a Senate candidate, trying to win some business ahead of the upcoming election. As it turns out, though, someone else had beaten that adman to the punch and included some AI razzle-dazzle in what the candidate was shown (specifically, the use of AI to reproduce the candidate’s voice). “The candidate thought it was so cool,” the businessman lamented to Vice.

“I was like, ‘F**k, I didn’t know you could do that.’”

The crucial point to understand is that this is no longer in the realm of the theoretical or the stuff of future tense. The day when generative AI produces, say, a campaign ad or some other piece of content that can swing an election might already be here. In fact, the Republican National Committee has just debuted the first campaign ad produced entirely by AI — and, while the quality is a little high school AV club, the potential is clearly there.

Shady groups that support a candidate on either side of the ideological divide are absolutely going to have a field day with this, never mind that the RNC promised in response to the new ad that it won’t use AI for deceitful purposes. But what happens at the grassroots, individual level? Or when a Russian troll farm starts having fun with this technology? The resulting infowar will be enough to make some of you pine for the days when all a fringe candidate had to do to win the White House was make targeted ad buys on Facebook.

If you ask me, it sort of feels like the dystopian dreamweavers at OpenAI who launched ChatGPT are arguably performing gain-of-function research on mankind. Somebody was always going to invent it, they’ll tell you, and better that we do it so we can control it and develop it right. Which is, of course, the kind of absurdity you could only believe if your brain were made of rocks. Because here’s just a taste of what’s coming:

“That Russian-native hacker who doesn’t speak English well is no longer going to craft a crappy email to your employees,” NSA cybersecurity director Rob Joyce said during his “State of the Hack” presentation at the RSA security conference in San Francisco this week. It was part of a larger warning about how, eventually, no one is going to be able to tell what’s real and what’s artificial.

“It’s going to be native-language English, it’s going to make sense, it’s going to pass the sniff test … So that right there is here today, and we are seeing adversaries, both nation-state and criminals, starting to experiment with the ChatGPT-type generation to give them English language opportunities.”

Image: ChatGPT homepage (Stanislav Kogiku/SOPA Images/LightRocket via Getty Images)

And, back to politics, here’s something else to think about. If generative AI is so good that we can’t tell what’s real or artificial anymore, there’s also a non-technical side to that dilemma: When the deepfakes get to be too good, how will the next Trump-grab-’em-by-the-you-know-what be received by the public? When it’s that wild, is it too wild to be believable? Isn’t it easier for a candidate to just blame a gaffe on a deepfake? It’s not like you’d be able to tell the difference.

And don’t kid yourself about any of this, by the way. It is going to happen. The next October surprise is going to be so good, in fact, that you won’t even realize it is one.

Andy Meek, Trending News Editor

Andy Meek is a reporter based in Memphis who has covered media, entertainment, and culture for over 20 years. His work has appeared in outlets including The Guardian, Forbes, and The Financial Times, and he’s written for BGR since 2015. Andy's coverage includes technology and entertainment, and he has a particular interest in all things streaming.

Over the years, he’s interviewed legendary figures in entertainment and tech who range from Stan Lee to John McAfee, Peter Thiel, and Reed Hastings.