Experts are concerned about the inaccurate responses generated by Google’s AI tool

Google recently revamped its search engine to show AI-generated instant answers, and some of those answers are inaccurate. The new feature has raised concerns among experts who worry it could spread bias and misinformation. Examples of false information produced by Google’s AI include claims that cats have been on the moon and that the United States has had a Muslim president.

The artificial intelligence language models Google uses to generate these answers work by predicting the next words based on the data they were trained on, which makes them susceptible to errors and hallucinations. While some answers may be accurate and comprehensive, there is a risk that misleading information spreads, especially in emergencies, when people may trust the first answer they receive without checking it.
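
To make the prediction idea concrete, here is a minimal toy sketch, not Google’s system and far simpler than a real neural language model: a model that counts which word follows which in its training text and samples the next word from those counts. The word lists and counts below are invented for illustration; the point is that even a few spurious pairings in the data can be strung together into a fluent but false statement.

```python
import random

# Toy word-prediction model (illustrative assumption only; real systems use
# neural networks trained on vast text, not hand-written counts).
# It records how often each word followed another in its "training data".
bigram_counts = {
    "cats": {"sleep": 8, "purr": 5, "have": 2},
    "have": {"whiskers": 6, "visited": 1},   # one stray pairing in the data
    "visited": {"the": 1},
    "the": {"moon": 1},
}

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = bigram_counts.get(word)
    if not options:
        return None
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

def generate(start, max_words=4):
    """Generate a short phrase one predicted word at a time."""
    out = [start]
    for _ in range(max_words):
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("cats"))  # can occasionally emit "cats have visited the moon"
```

The sketch only illustrates that such a model chooses words by statistical association rather than by checking facts, which is why an answer can sound fluent and still be wrong.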

Experts have also warned that AI language models can perpetuate the biases and misinformation present in the data they were trained on, and that relying on AI for information retrieval could diminish the human experience of seeking knowledge and connecting with others online.

Google’s competitors in the AI field, such as OpenAI and Perplexity AI, are closely monitoring the situation and have criticized Google for rushing out the new feature without ensuring its accuracy. The race for dominance in question-and-answer artificial intelligence is heating up, and Google faces pressure to keep pace with its rivals.

Overall, while Google’s AI summary feature may provide convenient answers to queries, its potential for spreading misinformation and bias raises significant concerns among experts and competitors. Google has promised to address errors through its content policies and to make broader improvements, but the risk of misleading information circulating remains a pressing issue. Users should approach AI-generated answers with caution and critical thinking, especially in high-stakes situations where accurate information matters most.

Article Source
https://www.voanews.com/a/google-s-ai-tool-producing-misleading-responses-that-have-experts-worried/7626590.html