Meaning and Understanding in Large Language Models

By Vladimír Havlík et al.

Table of Contents

Abstract
Introduction
Understanding language
Understanding 'understanding'
The Chinese room argument

Summary

This paper critically evaluates the prevailing tendency to regard machine language performance as mere syntactic manipulation and the simulation of understanding. It identifies the conditions crucial to attributing natural language understanding to state-of-the-art LLMs, arguing that LLMs operate not only on syntax but also on semantics. The paper discusses the relationship between syntax and semantics within LLMs and addresses the 'symbol grounding problem'. It concludes by demonstrating how meanings are grounded in LLMs and on what basis natural language understanding can be attributed to them.