Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents

By Zhuosheng Zhang et al.
Published on Nov. 20, 2023

[Figure: an example agent interface, showing one step of a CoT-driven language agent operating a GUI.
Instruction: What time is it in Berlin?
Thought: What I see is a searching page with a search bar. I need to click the search bar to type the question.
Action: {"action": "click", "item": "search bar"}
Previous Actions: {"step_idx": 0, "action_description": "click [HOME Icon]"}, {"step_idx": 1, "action_description": "click [Google Icon]"}]
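The figure's action format can be sketched in a few lines of Python. This is a minimal illustration, not code from the paper: the helper name `record_step` and the exact bookkeeping are assumptions; only the JSON field names (`action`, `item`, `step_idx`, `action_description`) come from the figure above.

```python
import json

def record_step(history, step_idx, thought, action_json):
    """Hypothetical helper: parse the agent's action JSON and append it to
    the running list of previous actions, mirroring the figure's format."""
    action = json.loads(action_json)
    # The figure renders prior steps as "click [search bar]"-style strings.
    description = f'{action["action"]} [{action["item"]}]'
    history.append({"step_idx": step_idx, "action_description": description})
    return action

# Previous actions, copied from the figure.
history = [
    {"step_idx": 0, "action_description": "click [HOME Icon]"},
    {"step_idx": 1, "action_description": "click [Google Icon]"},
]
action = record_step(
    history,
    step_idx=2,
    thought="I need to click the search bar to type the question.",
    action_json='{"action": "click", "item": "search bar"}',
)
```

The point of the structure is that each new model call can be conditioned on the accumulated `history`, so the agent's next thought sees what it has already done.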

Summary

Large language models (LLMs) have markedly advanced the field of language intelligence, as evidenced by their strong empirical performance across a spectrum of complex reasoning tasks. Chain-of-thought (CoT) reasoning not only improves reasoning performance but also enhances interpretability, controllability, and flexibility. Recent research has extended CoT reasoning methodologies to support the development of autonomous language agents. This survey offers a thorough discussion of the key research dimensions: the foundational mechanics of CoT techniques, the paradigm shift in CoT research, and the rise of language agents built on CoT approaches. CoT reasoning is effective when an LLM is employed, the task requires multi-step reasoning, and the performance of direct prompting does not increase significantly with model size. The paper aims to offer comprehensive knowledge of CoT reasoning and language agents to a wide audience.
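The contrast between direct prompting and CoT prompting can be made concrete with a small sketch. This is an illustrative assembly of a few-shot CoT prompt, not the survey's own code; the function name `build_cot_prompt` and the exemplar are assumptions, and the trailing "Let's think step by step." cue follows the common zero-shot-CoT convention.

```python
def build_cot_prompt(question, exemplars):
    """Assemble a few-shot chain-of-thought prompt: each exemplar pairs a
    question with an explicit step-by-step rationale before its answer,
    so the model is nudged to produce intermediate reasoning steps."""
    parts = []
    for q, rationale, answer in exemplars:
        parts.append(f"Q: {q}\nA: {rationale} The answer is {answer}.")
    # End with the target question and a reasoning cue instead of a bare
    # question, which is what distinguishes CoT from direct prompting.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

exemplars = [(
    "Roger has 5 balls and buys 2 cans of 3 balls each. How many now?",
    "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
    "11",
)]
prompt = build_cot_prompt("What is 4 + 4 * 2?", exemplars)
```

A direct prompt would send only the final question; the CoT variant prepends worked rationales, which is precisely the multi-step setting where the summary says CoT pays off.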