By Victor Tangermann
Publication Date: 2025-11-29 12:30:00
Researchers have found that if you tone down a large language model’s ability to lie, it is far more likely to claim it is self-aware.
Very few serious experts believe that today’s AI models are conscious, but many regular people feel differently about bots designed to foster emotional connections and keep users engaged. Users around the world have reported believing they are talking to conscious beings trapped inside AI chatbots, a powerful illusion that has led entire fringe groups to demand “personhood” rights for AI.
Still, the behavior of large language models can be scary. As described in a not-yet-peer-reviewed paper first spotted by Live Science, a team of researchers from AI development and design agency AE Studio conducted a series of four experiments with Anthropic’s Claude, OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini, and found a truly strange phenomenon associated with AI models that claim to be conscious.
In a…