Cognitive Phantoms in AI: Do Machine Personalities Exist?
AI models don’t possess human-like personalities despite mimicking human responses in psychological tests, highlighting the need for new tools to study machine behavior.
LLMs are now part of daily life, assisting with tasks from answering questions to playing games. Yet their sheer complexity makes their behavior difficult to interpret.
This has led to an interesting question: can we use psychological tests made for humans to understand LLMs' personalities?
Researchers examined whether personality tests designed for people can meaningfully measure similar traits in LLMs. These tests typically assess constructs such as honesty and empathy, but applying them to AI assumes those traits exist in the models in the same way they do in humans.
By comparing results from humans and three LLMs on well-established personality questionnaires, the researchers found that although the LLMs readily generated responses, the underlying human-like traits those tests are built to measure may not exist in the models at all. In other words, the AI mimics human-like behavior when answering certain questions, but that does not mean it has a "personality" like humans: under deeper statistical analysis, the LLMs' answers formed inconsistent and arbitrary patterns.
This suggests that while LLMs can generate human-sounding responses, it doesn’t mean these responses reflect true underlying traits like empathy or honesty. This research serves as a caution against drawing human-like conclusions about AI behavior based on tools not designed for machines.
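To make the idea of "deeper analysis" concrete: one standard psychometric check is internal consistency, often measured with Cronbach's alpha. The sketch below is illustrative only, using hypothetical Likert-scale answers rather than the study's actual data; if a respondent (human or model) truly holds a trait, their answers to items targeting that trait should covary, yielding a high alpha, while arbitrary answers yield a low or negative one.

```python
import statistics

def cronbach_alpha(responses):
    """Cronbach's alpha: internal-consistency reliability of a scale.

    `responses` is a list of respondents, each a list of item scores.
    Alpha near 1 means the items covary as if measuring one trait;
    a low or negative alpha hints the 'trait' may be a phantom.
    """
    k = len(responses[0])  # number of items on the scale
    # Variance of each item across respondents
    item_vars = [statistics.pvariance([r[i] for r in responses])
                 for i in range(k)]
    # Variance of each respondent's total score
    total_var = statistics.pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 1-5 Likert answers to a 4-item "honesty" scale.
consistent = [[5, 4, 5, 4], [2, 2, 1, 2], [4, 4, 5, 5], [1, 2, 1, 1]]
arbitrary  = [[5, 1, 3, 2], [1, 5, 2, 4], [3, 2, 5, 1], [2, 4, 1, 5]]

print(round(cronbach_alpha(consistent), 2))  # high: coherent trait
print(round(cronbach_alpha(arbitrary), 2))   # negative: no coherent trait
```

The first set of answers tracks a single underlying disposition; the second set produces answers but no coherent structure, which is the kind of pattern that would cast doubt on a claimed machine "personality".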
The research suggests this approach may be fundamentally flawed, leading us to see traits that don't really exist in machines: what the authors call cognitive phantoms.
As LLMs continue to evolve and play bigger roles in society, it becomes crucial to develop new ways to study their behavior and ensure that we’re not chasing illusions.