By Vince Condarcuri
Publication Date: 2025-11-12 22:48:00
Modern AI language models are making a familiar human mistake: they speak with confidence, even when they’re wrong. According to tech giant IBM (IBM), these errors, often called “hallucinations,” are becoming more common in places where accuracy is critical, like legal filings, financial reports, and news summaries. In fact, a recent study by the European Broadcasting Union found that nearly half of the answers provided by major AI assistants were either incorrect or cited unverified sources. As a result, IBM researchers, such as Pin-Yu Chen, are focused on making AI more dependable.
Chen explained that these systems don’t truly understand what they are saying. Instead, they just predict the next word based on patterns in data. As models get larger and more powerful, they also become more uncertain. IBM tests for this by intentionally pushing models to their limits and recording how they fail. While the results may sound fluent and convincing,…
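Chen's point can be illustrated with a toy sketch (my own illustration, not IBM's code): a bigram model that chooses the next word purely from frequency patterns in its training text. It has no notion of truth, only of which word tends to follow which.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "language model" that predicts the next
# word from patterns in data, exactly as described, with no model of truth.
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": the most common word after "the"
```

Real models use neural networks over vastly larger corpora, but the principle is the same: the output is a statistically likely continuation, which is why it can sound fluent while being wrong.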