Google’s latest AI feature, AI Overviews, has been making headlines for generating inaccurate information, commonly known as hallucinations. In one widely shared example, the AI suggested adding glue to pizza to help the cheese stick, leaving many users baffled. This is not the first time Google’s AI has made such mistakes, and users are asking when the company will fix the problem.
Despite the concerns these incidents have raised, Google CEO Sundar Pichai has indicated that eliminating hallucinations remains a difficult, unsolved problem, even describing them as an “inherent characteristic” of AI in some respects.
In short, AI Overviews has drawn attention for its hallucinations, and Google’s CEO concedes that fixing them is a complex challenge with no clear timeline.
Article source: https://80.lv/articles/google-s-ceo-doesn-t-know-when-its-ai-will-stop-hallucinating/