Knowledge Graph Prompting for Multi-Document Question Answering

By Yu Wang et al.

Table of Contents

1. Introduction
2. Notations
3. Knowledge Graph Construction
4. LLM-based KG Traversal Agent

Summary

The paper discusses the 'pre-train, prompt, predict' paradigm in large language models (LLMs) for open-domain question answering and explores its application to multi-document question answering (MD-QA). It proposes a Knowledge Graph Prompting (KGP) method that enhances LLMs for MD-QA by formulating the right context through two components: a graph construction module and a graph traversal module. The paper highlights the challenges of MD-QA, describes how knowledge graphs are constructed over documents, and explains how an LLM-based KG traversal agent retrieves context. It presents experimental results showcasing the effectiveness of KGP for MD-QA and the broader potential of graphs for improving prompt design and retrieval-augmented generation in LLMs.
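The two components can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: passages become graph nodes, edges link passages that share a content word, and a greedy lexical scorer stands in for the LLM traversal agent when choosing which neighbor to visit next.

```python
from collections import defaultdict

def build_passage_graph(passages):
    """Graph construction (simplified): link passages sharing a content word."""
    word_to_ids = defaultdict(set)
    for pid, text in passages.items():
        for word in set(text.lower().split()):
            if len(word) > 3:  # crude stand-in for stopword filtering
                word_to_ids[word].add(pid)
    edges = defaultdict(set)
    for ids in word_to_ids.values():
        for a in ids:
            edges[a] |= ids - {a}
    return edges

def traverse(passages, edges, question, seed, budget=3):
    """Graph traversal (simplified): greedily hop to the neighbor whose
    text overlaps the question most. In KGP, an LLM performs this ranking."""
    q_words = set(question.lower().split())
    visited, frontier = [seed], seed
    while len(visited) < budget:
        candidates = [n for n in edges[frontier] if n not in visited]
        if not candidates:
            break
        frontier = max(
            candidates,
            key=lambda n: len(q_words & set(passages[n].lower().split())),
        )
        visited.append(frontier)
    return visited  # ordered passage ids forming the retrieved context

# Toy corpus (hypothetical data for illustration only)
passages = {
    "p1": "Paris is the capital of France",
    "p2": "France borders Spain and Germany",
    "p3": "Germany has the capital Berlin",
    "p4": "Cooking pasta requires boiling water",
}
edges = build_passage_graph(passages)
context = traverse(passages, edges, "What is the capital of Germany", seed="p1")
```

The traversal visits `p1`, then `p3` (it shares both "capital" and "Germany" with the question), then `p2`, while the unrelated `p4` is never reached; the visited passages form the prompt context handed to the LLM for answering.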