Chain-Of-Verification Reduces Hallucination in Large Language Models

By Shehzaad Dhuliawala et al.
Published on September 25, 2023

Table of Contents

1. Introduction
2. Related Work
3. Chain-of-Verification
4. Experiments

Summary

The document discusses the Chain-of-Verification (CoVe) method for reducing hallucinations in large language models. CoVe proceeds in four steps: generating a baseline response, planning verification questions, executing those verifications, and generating a final verified response. By checking and correcting its own initial answer, the model produces more factually accurate responses. Several experimental benchmarks are used to measure how effectively CoVe reduces hallucinations across different tasks.
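The four steps above can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: the `llm` function is a hypothetical stand-in for any text-generation call (here a trivial stub so the example runs), and the prompt wording is an assumption.

```python
def llm(prompt: str) -> str:
    # Stub: a real implementation would call a language model here.
    return f"RESPONSE[{prompt[:40]}]"

def chain_of_verification(query: str) -> str:
    # 1. Generate a baseline response to the original query.
    baseline = llm(f"Answer the question: {query}")

    # 2. Plan verifications: ask for fact-checking questions
    #    targeting claims in the baseline answer.
    plan = llm(f"List verification questions for this answer:\n{baseline}")
    questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Execute verifications: answer each question independently,
    #    without showing the baseline, so errors are not copied over.
    answers = [llm(f"Answer concisely: {q}") for q in questions]

    # 4. Generate the final verified response, conditioned on the
    #    verification Q&A pairs and the original question.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(questions, answers))
    return llm(
        f"Revise the answer using these checks:\n{evidence}\n"
        f"Original answer: {baseline}\nQuestion: {query}"
    )
```

The key design choice is step 3: answering verification questions in isolation from the baseline response, which the paper reports is what prevents the model from simply repeating its own hallucinations.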