Why do AI models make things up, or 'hallucinate'? OpenAI says it has the answer, and how to prevent it
Source:
Euronews.com
1 day ago
OpenAI's research paper examines why chatbots powered by large language models (LLMs) tend to 'hallucinate', confidently guessing answers when uncertain rather than admitting ignorance. It traces this behavior to errors in binary classification and to evaluation schemes that grade answers as simply right or wrong: a lucky guess earns points while an 'I don't know' earns none, so guessing is statistically rewarded. The paper notes that although the new GPT-5 model is claimed to be 'hallucination-proof', one study found that ChatGPT models still spread falsehoods in 40% of responses. It concludes that LLMs will never achieve 100% accuracy, because some questions are inherently unanswerable.

Full article (Euronews.com)
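The points-system argument is easy to see with a little arithmetic. The sketch below is an illustration of that reasoning, not code from OpenAI's paper; the scoring values and function are assumptions made for the example. Under a binary rubric a wrong guess costs nothing, so any nonzero chance of being right beats abstaining:

```python
# Illustrative only: expected score of guessing vs. abstaining
# under different grading schemes (all values are hypothetical).

def expected_score(p_correct: float, reward_right: float = 1.0,
                   penalty_wrong: float = 0.0) -> float:
    """Expected points for answering when the model is correct
    with probability p_correct."""
    return p_correct * reward_right - (1 - p_correct) * penalty_wrong

ABSTAIN = 0.0  # saying "I don't know" scores zero under either scheme

for p in (0.1, 0.3, 0.5):
    binary = expected_score(p)                        # wrong answers cost 0 points
    penalized = expected_score(p, penalty_wrong=1.0)  # wrong answers cost 1 point
    print(f"p={p:.1f}  binary={binary:+.2f}  "
          f"penalized={penalized:+.2f}  abstain={ABSTAIN:+.2f}")
```

With binary grading, guessing has a positive expected score for any chance of being correct above zero, so a model tuned to maximize benchmark points learns to guess; once wrong answers carry a penalty, abstaining becomes the better strategy whenever the model is more likely wrong than right.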