By Abhinav Singh
Publication Date: 2026-04-10 18:06:00
It is well documented that artificial intelligence (AI) models are prone to hallucinations, generating confident but false information. But what happens when they are deliberately fed misinformation? Almira Osmanovic Thunstrom, a medical researcher at the University of Gothenburg, Sweden, designed an experiment to find out. Thunstrom invented a fake eye condition called ‘Bixonimania’ and published two papers about it, attributed to an imaginary author, on a preprint server. Within weeks of the upload, major AI chatbots began repeating the invented condition as if it were real.
Microsoft Copilot was the first major AI chatbot to pick up the fake condition, describing Bixonimania as an “intriguing and relatively rare condition”. On the same day, Google’s Gemini explained that Bixonimania is caused by “excessive exposure to blue light”. Perplexity claimed that one in 90,000 people were affected by Bixonimania, while OpenAI’s ChatGPT told users which symptoms to look out for.
Thunstrom said she conducted the experiment to test whether large language models (LLMs) would swallow the misinformation and then reproduce it as reputable-sounding health advice.
“I wanted to see if I can create a medical condition that did not exist in the database,” Thunstrom told Nature, adding that she created a health-related condition and…

