OpenAI: Mechanisms of Hallucination in Language Models and Mitigation Strategies (Research Report, English Edition)
Resource Summary
Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such “hallucinations” persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious; they originate simply as errors in binary classification.
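As a rough illustration of the scoring argument in this abstract (the report's claim is qualitative; the function name, point values, and confidence level below are illustrative assumptions, not taken from the report), the following Python sketch compares the expected score of a model that guesses when uncertain with one that abstains, under binary 0/1 grading and under grading that penalizes wrong answers:

```python
# Illustrative sketch (not from the report): expected per-question score of a model
# that guesses when uncertain vs. one that abstains, under two grading schemes.

def expected_score(p_correct: float, guess: bool,
                   right: float = 1.0, wrong: float = 0.0, abstain: float = 0.0) -> float:
    """Expected score on one question.

    p_correct -- probability the model's answer is correct if it answers
    guess     -- True if the model answers, False if it says "I don't know"
    right/wrong/abstain -- points awarded for each outcome
    """
    if not guess:
        return abstain
    return p_correct * right + (1.0 - p_correct) * wrong

p = 0.3  # the model is only 30% confident in its answer (assumed value)

# Binary 0/1 grading, as on most leaderboards: guessing strictly dominates abstaining.
print(expected_score(p, guess=True))             # 0.3
print(expected_score(p, guess=False))            # 0.0

# Grading that docks a point for a wrong answer but not for abstaining:
# abstaining is now the better policy whenever p_correct < 0.5.
print(expected_score(p, guess=True, wrong=-1.0)) # 0.3*1 + 0.7*(-1) = -0.4
print(expected_score(p, guess=False, wrong=-1.0))# 0.0
```

Under 0/1 grading the expected score of guessing is always at least that of abstaining, which is the incentive the abstract describes; a rule that penalizes confident errors flips the incentive once the model's confidence falls below the break-even threshold.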