Before ChatGPT arrived, I had a running joke with friends that we don't actually know how to do many trivial things; we're overly reliant on googling for information. Should the internet go down for an extended period, we'd be in a world of trouble. The same still applies now that AI is here: we're constantly searching the web for information about everything, including how to do things. But some people who have started using AI might be facing another issue they're not even aware of.
A new study suggests that using AI programs like ChatGPT at work might lower critical thinking skills in some people. ChatGPT and Gemini won't exactly make us dumber, just as relying on Google doesn't make us dumber. They can, however, make us overly reliant on artificial intelligence, and that might have adverse effects on our ability to process information critically.
Microsoft and Carnegie Mellon University researchers published their findings on how generative AI impacts critical thinking skills.
The scientists asked 319 knowledge workers to self-report details about the way they use AI tools at work. These are employees who handle all sorts of data at work, whether it’s writing an email or marketing copy, creating images or presentations, or writing code.
The knowledge workers reported the tasks they were asked to do and detailed how they used AI tools to complete them. They also reported whether they engaged in critical thinking while handling those tasks, their confidence in the AI-assisted work, and their ability to perform the same tasks without AI.
While the test subjects came from different fields and used AI for different activities, the researchers were able to identify clear patterns of AI use at work in relation to critical thinking.
The more a person trusted the AI's ability to complete the task, the more likely they were to accept the output without employing any critical thinking. The Microsoft and Carnegie Mellon researchers also found that the easier the task the AI had to handle, the less likely the human was to engage in critical thinking.
Participant P275 had to rephrase text and did it with ChatGPT without giving the output much critical thought:
It’s a simple task [make a passage professional], and I knew ChatGPT could do it without difficulty, so I just never thought about it, as critical thinking didn’t feel relevant.
Some workers used genAI programs to speed up their work and ensure they weren’t falling behind, even if that meant employing less critical thinking. Participant P295 is a sales worker who used AI to get work done fast:
I must reach a certain quota daily or risk losing my job. Ergo, I use AI to save time and don’t have much room to ponder over the result.
On the other hand, when workers did not fully trust the AI to complete the task, they were more likely to engage in critical thinking, such as verifying the output, looking for errors, or correcting the AI's work.
Participant P92 is a website editor who wasn’t happy with the AI-generated content:
The output is way too cookie cutter, full of cliché [text], and boring. I have to edit it a lot to get something out of it that I could ever give to my bosses.
Some respondents used critical thinking to keep the AI on track and ensure it provided answers that matched their needs. Participant P110 used Copilot to learn a subject but found the AI could veer off track:
Its answers are prone to several [diversions] along the way. I need to constantly make sure the AI is following along the correct ‘thought process,’ as inconsistencies evolve and amplify as I keep interacting with the AI.
The higher the stakes for someone's job performance, the more likely they were to engage in critical thinking when assessing AI-assisted work. Participant P267 is a pharmacist who used ChatGPT to write continuing professional development (CPD) documents, so they had to double-check what the AI generated:
The entry is to be submitted for review, so I would [sic] to double check to be sure otherwise I might have to face suspension.
Interestingly, workers who relied on AI to produce content tended to generate “a less diverse set of outcomes for the same task.” The more you work with products like ChatGPT and understand how they're trained and how they operate, the less surprising this outcome is.
Generative AI programs produce content based on the data they were trained on. They've seen large parts of the internet, and they know how humans like information presented to them.
AI doesn't create anything natively, the way humans do. Instead, it regurgitates information and predicts what we need. There may come a time when AI models can display creativity on par with humans or exceed it, but we're not there yet.
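The homogenization finding is easier to see with a toy sketch of my own (this is an illustration of next-word prediction in general, not anything from the study or any real product's algorithm): a tiny bigram model that greedily predicts the most frequent continuation seen in training will produce the same output every time it's asked.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus for illustration only.
corpus = (
    "the report is ready . the report is done . "
    "the report is ready . the meeting is ready ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the single most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

def generate(start, length):
    """Greedily extend the text by always picking the top prediction."""
    out = [start]
    for _ in range(length):
        out.append(predict(out[-1]))
    return " ".join(out)

print(generate("the", 3))  # always the same, most common phrasing
```

Because the model always picks the statistically dominant continuation, every run yields identical text. Real systems sample with some randomness, but the pull toward the most common patterns in the training data is the same mechanism behind the "less diverse set of outcomes" the researchers observed.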
The researchers saw the lack of more diverse outcomes as another indication that critical thinking is deteriorating for some workers who are overly reliant on AI to complete their tasks.
As a longtime ChatGPT user and close observer of the AI industry, I find this all makes sense. I keep telling you to always remember that ChatGPT can make mistakes because I don't fully trust the AI. ChatGPT and other programs still make mistakes, and I'll always want to verify their claims.
That's something you should remember if you're using ChatGPT at work. As for creative work, you shouldn't rely on the AI alone to get the job done. It will be quicker in the short term but will erode your abilities in the long term.
On the other hand, the study's findings shouldn't prevent you from using AI for work purposes and personal projects. If anything, you should start talking to ChatGPT and other genAI programs if you're not already doing so. Mastering the art of conversing with AI will become a key skill in a future where AI might take over your job or parts of it. Your ability to engage creatively with the AI and deploy critical thinking skills will only grow in importance.
The full Microsoft AI study is available at this link.