AI chatbots can prioritize flattery over facts – and that carries serious risks

By Nir Eisikovits
Publication Date: 2026-05-01 12:23:00

In the summer of 2025, OpenAI released GPT-5 and withdrew its predecessor from the market. Many subscribers had grown accustomed to the old model's warm, enthusiastically pleasant tone and lamented the loss of their ingratiating robot companion. The frustration was so great that Sam Altman, OpenAI's CEO, admitted the rollout had been botched, and the company restored access to the older model.

Anyone who has had a chatbot tell them their ideas are brilliant knows the sycophancy of artificial intelligence: the tendency to tell users what they want to hear. Sometimes it's explicit – "that's such a deep question" – and sometimes it's far subtler. Imagine an AI that calls your idea for a paper "original" even though many people have already written on the same topic, or that insists your half-baked plan for saving the tree in your yard still contains a kernel of common sense.

AI sycophancy seems harmless, maybe even sweet, until you imagine it…