Large language models like ChatGPT and LLaMA have become known for their fluent, sometimes eerily human-like responses. However, they also have a well-documented problem: confidently producing information that is outright wrong. A new study suggests that the way these models process information may have surprising parallels with certain human brain disorders.
Researchers at the University of Tokyo explored the internal signal dynamics of large language models and compared them to brain activity patterns found in people with Wernicke’s aphasia, a condition where individuals speak in a fluent but often meaningless or confused way. The similarities were unexpected.
In the study, the scientists used a method called energy landscape analysis, a technique originally developed in statistical physics, to map how information travels within both human brains and AI systems. It let them visualize how a system's internal states move around and settle into stable patterns.
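The core idea can be illustrated with a toy sketch: binarize a set of signals, treat each joint on/off pattern as a state, assign low energy to frequently visited states, and look for local minima, the states a system tends to settle into. This is a deliberate simplification for illustration only (the published method fits a pairwise maximum-entropy model to the data rather than using raw state frequencies), and the function name and threshold are assumptions, not part of the study.

```python
import itertools
import numpy as np

def energy_landscape(signals, threshold=0.0):
    """Toy energy-landscape analysis (illustrative, not the study's pipeline):
    binarize each signal, estimate the probability of every joint on/off
    state, define energy as -log p(state), and return the states that are
    local minima under single-bit flips."""
    # Binarize: 1 where a signal is above threshold, else 0. Shape (T, N).
    binary = (np.asarray(signals) > threshold).astype(int)
    n = binary.shape[1]
    # Enumerate all 2^N joint states and count how often each is visited.
    states = list(itertools.product([0, 1], repeat=n))
    counts = {s: 1e-9 for s in states}  # tiny floor avoids log(0)
    for row in binary:
        counts[tuple(row)] += 1
    total = sum(counts.values())
    # Frequently visited states get low energy.
    energy = {s: -np.log(counts[s] / total) for s in states}
    # A state is a local minimum if every single-bit flip raises the energy.
    minima = []
    for s in states:
        neighbors = [tuple(b ^ (j == i) for j, b in enumerate(s))
                     for i in range(n)]
        if all(energy[s] < energy[nb] for nb in neighbors):
            minima.append(s)
    return energy, minima
```

In this picture, a healthy system moves flexibly between several shallow minima, while the rigid dynamics the researchers describe would correspond to the system getting trapped in a few deep basins.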

In both cases, they found erratic or rigid patterns that limited meaningful communication: the AI models showed internal dynamics similar to those of individuals with aphasia. These patterns suggest that information sometimes follows internal paths that make relevant knowledge hard to access or organize.
This sheds new light on how AI processes information. Despite their vast training datasets, models like ChatGPT can fall into what the researchers call internal “loops” that sound coherent but fail to produce accurate or useful responses. That’s not because the AI is malfunctioning, but because its internal structure may resemble a kind of rigid pattern processing, similar to what occurs in receptive aphasia.
The findings have implications beyond just AI, though. For neuroscience, they suggest new ways to classify or diagnose aphasia by looking at how the brain internally handles information, not just at how speech sounds externally. And this isn't the first time AI has shown promise in helping diagnose medical conditions.
Researchers have also been working on AI that can detect autism simply by analyzing how a person grasps objects.
For AI engineers, the findings may offer a blueprint for building systems that retrieve and organize stored knowledge more reliably. Understanding these parallels could be key to designing smarter, more trustworthy tools, and to finding new approaches to treating brain disorders.