The Troubling Emergence of Hallucination in Large Language Models – An Extensive Definition, Quantification, and Prescriptive Remediations

By Vipula Rawte et al.
Published on Oct. 23, 2023

Table of Contents

Abstract
1. Hallucination: The What and Why
2. A Holistic View of the Hallucination Spectrum: Its Types and Scales
2.1 Orientations of Hallucination
2.2 Categories of Hallucination
2.3 Degrees of Hallucination
3. HallucInation eLiciTation (HILT) Dataset
3.1 Choice of LLMs: Rationale and Coverage

Summary

The paper examines the emerging problem of hallucination in Large Language Models (LLMs), offering a detailed categorization of hallucinations by orientation, category, and degree. It introduces the HallucInation eLiciTation (HILT) dataset, comprising 75,000 text samples generated by 15 contemporary LLMs and annotated along these dimensions. Building on HILT, the paper proposes the Hallucination Vulnerability Index (HVI), a comparative measure of how prone each LLM is to producing hallucinations, and discusses mitigation strategies alongside the broader importance of assessing LLMs' vulnerability before deployment.
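This excerpt does not reproduce the HVI formula itself, so the sketch below is only a hypothetical illustration of how a vulnerability index of this general shape could be computed: per-LLM hallucination counts weighted by the severity degrees the paper defines (mild, moderate, alarming) and normalized to a 0-100 scale. The function name, the weights, and the example counts are all assumptions for illustration, not the authors' definition.

```python
from dataclasses import dataclass

# Illustrative severity weights for the paper's three degrees of
# hallucination; the actual HVI weighting is defined in the paper itself.
DEGREE_WEIGHTS = {"mild": 1.0, "moderate": 2.0, "alarming": 3.0}


@dataclass
class HallucinationCounts:
    """Per-LLM counts of samples annotated at each degree of hallucination."""
    mild: int
    moderate: int
    alarming: int
    total_samples: int  # all samples elicited from this LLM


def vulnerability_index(counts: HallucinationCounts) -> float:
    """Hypothetical HVI-style score on a 0-100 scale.

    Weights each hallucinated sample by the severity of its degree,
    then normalizes by the worst case (every sample alarming).
    """
    weighted = (
        DEGREE_WEIGHTS["mild"] * counts.mild
        + DEGREE_WEIGHTS["moderate"] * counts.moderate
        + DEGREE_WEIGHTS["alarming"] * counts.alarming
    )
    worst_case = DEGREE_WEIGHTS["alarming"] * counts.total_samples
    return 100.0 * weighted / worst_case if worst_case else 0.0


# Example: with 75,000 samples across 15 LLMs, each model contributes
# 5,000 samples. Suppose one LLM's samples include 900 mild, 400 moderate,
# and 100 alarming hallucinations.
print(vulnerability_index(HallucinationCounts(900, 400, 100, 5000)))  # ~13.3
```

A weighted, normalized score of this kind makes models directly comparable regardless of how many samples were elicited from each, which is the core property any vulnerability index over HILT would need.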