
Researchers claim GPT-4 passed the Turing test

Published Jun 14th, 2024 6:36PM EDT
OpenAI's ChatGPT start page.
Image: Jonathan S. Geller


OpenAI’s GPT-4 has become the first AI to pass the Turing test. At least, that’s what a group of researchers claim in a new study. The study, currently available on the preprint server arXiv, has yet to be peer-reviewed. Still, the results are intriguing, to say the least.

The Turing test, first proposed by Alan Turing in 1950, seeks to judge whether a machine can exhibit behavior indistinguishable from a human's. For an AI to pass, it must be able to converse with someone and fool them into thinking they are talking to a human.

To check whether GPT-4 could pass the Turing test, the researchers behind the paper asked 500 people to speak with four different respondents. One respondent was human, another was a 1960s-era chatbot called ELIZA, and the final two were powered by GPT-3.5 and GPT-4.

Each conversation lasted a total of five minutes. According to the paper, which was published in May, the participants judged GPT-4 to be human a shocking 54 percent of the time. Because of this, the researchers claim that the large language model has indeed passed the Turing test.

Robot with a computer. Image source: sdecoret/Adobe

On the other hand, the human participant scored 67 percent, while GPT-3.5 scored 50 percent, and ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time. As such, GPT-4’s Turing test results are intriguing.
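The headline numbers are simply the fraction of interrogators who judged each respondent to be human. As a minimal sketch of that calculation (using made-up per-trial verdicts chosen to match the percentages the article reports, not the study's actual data):

```python
# Hypothetical per-trial verdicts: True means the interrogator judged the
# respondent to be human. Counts are fabricated to match the reported rates.
verdicts = {
    "Human":   [True] * 67 + [False] * 33,
    "GPT-4":   [True] * 54 + [False] * 46,
    "GPT-3.5": [True] * 50 + [False] * 50,
    "ELIZA":   [True] * 22 + [False] * 78,
}

def judged_human_rate(judgments):
    """Fraction of trials in which the respondent was judged human."""
    return sum(judgments) / len(judgments)

for name, judgments in verdicts.items():
    print(f"{name}: {judged_human_rate(judgments):.0%}")
```

By this yardstick, "passing" means being judged human at least as often as chance (50 percent), which GPT-4's 54 percent clears while ELIZA's 22 percent does not.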

Of course, there is plenty of concern over whether the Turing test is too simplistic an approach. It was designed to gauge a machine’s intelligence, but raw intellect doesn’t play that large a part in fooling humans into thinking they are speaking with another human.

Instead, the AI has to mimic the socio-emotional cues humans rely on during conversation. This news will likely add to growing concerns over the dangers of AI, which even the Godfather of AI has expressed worry about.

Ultimately, this study and GPT-4’s Turing test results highlight just how much AI has changed during the GPT era, as well as how humans are approaching AI.

Josh Hawkins has been writing for over a decade, covering science, gaming, and tech culture. He is also a top-rated product reviewer with experience in extensively researched product comparisons, headphones, and gaming devices.

Whenever he isn’t busy writing about tech or gadgets, he can usually be found enjoying a new world in a video game, or tinkering with something on his computer.
