AI is quietly poisoning itself and driving models to collapse – but there is a cure

By Steven Vaughan-Nichols
Publication Date: 2026-01-23 13:25:00

Key insights from ZDNET

  • When LLMs “learn” from AI-generated content, the result is GIGO.
  • You need to verify your data before you can trust your AI answers.
  • This approach requires a dedicated effort across your organization.

According to the technology analyst firm Gartner, AI data is quickly becoming a classic garbage in/garbage out (GIGO) problem for users. That’s because companies’ AI systems and large language models (LLMs) are flooded with unverified, AI-generated content that cannot be trusted.

Model collapse

You know this better as AI slop. While it’s annoying for you and me, it’s deadly for AI because it poisons LLMs with fake data. The result is what AI circles call “model collapse.” AI company Aquant describes the problem this way: “Simply put, when AI is trained on its own results, the results can deviate further from reality.”
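A toy simulation makes that feedback loop concrete. The Python sketch below is my own illustration, not code from Gartner or Aquant: it stands in for an LLM with a trivially simple “model” (a Gaussian defined by a mean and spread). Generation 0 is fitted to “real” data; every later generation is fitted only to samples produced by the generation before it, so estimation errors compound and the fitted model drifts away from the original data.

```python
# Toy illustration of model collapse: each "model" is just a Gaussian
# (mean, std) fitted to data. Generation 0 sees real data; every later
# generation sees only samples generated by the previous generation.

import numpy as np

rng = np.random.default_rng(42)

SAMPLES_PER_GEN = 200   # small sample size so estimation error is visible
GENERATIONS = 20

# "Real world" data the first model is trained on: mean 0, spread 1.
real_data = rng.normal(loc=0.0, scale=1.0, size=SAMPLES_PER_GEN)
mean, std = real_data.mean(), real_data.std()
print(f"gen 00  mean={mean:+.3f}  std={std:.3f}   (trained on real data)")

for gen in range(1, GENERATIONS + 1):
    # Later generations never see real data -- only the previous model's output.
    synthetic = rng.normal(loc=mean, scale=std, size=SAMPLES_PER_GEN)
    mean, std = synthetic.mean(), synthetic.std()
    print(f"gen {gen:02d}  mean={mean:+.3f}  std={std:.3f}   (trained on AI output)")

# Because every generation re-estimates its parameters from the previous
# generation's synthetic samples, the errors compound: the fitted mean and
# spread tend to wander further from the original 0 and 1 with each pass,
# so later "models" describe a distorted version of the real data.
```

Real LLMs are vastly more complex, but the dynamic the article describes is the same: once models train on other models’ unverified output instead of ground truth, each generation inherits and amplifies the last one’s errors.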
