Neurosymbolic AI is the answer to large language models’ inability to stop hallucinating

The main problem with big tech’s experiment with artificial intelligence (AI) is not that it could take over humanity. It’s that large language models (LLMs) like OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama continue to get things wrong, and the problem is intractable.

These errors are known as hallucinations. Perhaps the most prominent example was the case of US law professor Jonathan Turley, who was falsely accused of sexual harassment by ChatGPT in 2023.

OpenAI’s solution seems…

Article Source
https://theconversation.com/neurosymbolic-ai-is-the-answer-to-large-language-models-inability-to-stop-hallucinating-257752