When ChatGPT debuted in November 2022, the AI chatbot quickly took the tech world by storm. ChatGPT, for those unfamiliar, uses deep learning to deliver human-like responses to almost any query you can dream up. It can write short stories, solve math problems, write code, offer advice, and even pass a Wharton MBA exam. It’s seemingly AI on steroids and, as a result, many believe it’s a legitimate threat to Google’s longstanding search monopoly.
In light of the above, it’s perhaps not surprising that Microsoft recently invested $10 billion in OpenAI, the company behind ChatGPT. The prevailing thought is that Microsoft plans to integrate the technology into its Bing search results.
Is the ChatGPT hype unwarranted?
One of ChatGPT’s strengths is arguably also one of its weaknesses. Because ChatGPT can handle almost any type of query you throw its way, it isn’t specialized for any one type of problem. Consequently, some believe that ChatGPT’s versatility results in mediocre, and sometimes misleading, answers across the board.
Meta chief AI scientist Yann LeCun, for instance, recently opined that ChatGPT “is not particularly innovative” or revolutionary.
“It’s just that, you know, it’s well put together, it’s nicely done,” LeCun added.
More entertaining than accurate?
More to the point, Princeton computer science professor Arvind Narayanan recently argued that ChatGPT is a “bulls**t generator.”
In an interview with The Markup, which is well worth reading in its entirety, Narayanan explains that ChatGPT excels at producing answers that sound believable on the surface. On closer examination, however, those answers often leave a lot to be desired.
“We mean that it is trained to produce plausible text,” Narayanan said. “It is very good at being persuasive, but it’s not trained to produce true statements. It often produces true statements as a side effect of being plausible and persuasive, but that is not the goal.”
Having played around with ChatGPT extensively, Narayanan’s statement struck a chord with me. For instance, I had ChatGPT write a short story using the characters from the TV show Friends set in the world of HBO’s The Wire. The result was nothing short of hilarious. The short story was written well enough, and as Narayanan notes, the dialogue was plausible. All the same, plausible isn’t exactly a high bar.
What’s more, I threw some basic algebra questions at ChatGPT. Sometimes it churned out the correct answer. Other times, it was way off base.
On that point, Narayanan says:
There are very clear, dangerous cases of misinformation we need to be worried about. For example, people using it as a learning tool and accidentally learning wrong information, or students writing essays using ChatGPT when they’re assigned homework.
This shouldn’t be taken to mean that ChatGPT is nothing but smoke and mirrors. On the contrary, some of the things ChatGPT can do are certainly impressive, if not astounding. All the same, it’s important not to get swept away by the hype and start drumming up doomsday scenarios, as some have done. As it stands, there’s no reason to think that ChatGPT will put people like developers and writers out of work any time soon.