Simple Embodied Language Learning as a Byproduct of Meta-Reinforcement Learning

By E. Z. Liu et al.
Published on June 10, 2023

Table of Contents

1. Introduction
2. Related Work
3. Preliminaries
4. Designing an Environment for Evaluating Language Emergence
4.1. The Office Environment
5. Experiment Results

Summary

This paper explores simple embodied language learning as a byproduct of meta-reinforcement learning. Motivated by the observation that human children acquire language while solving non-language tasks, it asks whether embodied reinforcement learning agents can likewise learn language indirectly, without direct language supervision. The study uses a multi-task environment with varied language: an office navigation scenario in which agents must locate a particular office, and in which textual floor plans provide the information needed to do so. Agents trained with current meta-RL algorithms learn to read these floor plans and generalize to new layouts and new language phrasings, demonstrating that language can emerge purely as a byproduct of task-solving. The paper also details the design of the office environment and the language embedded within it.
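To make the setup concrete, below is a minimal illustrative sketch of an office-navigation task in the spirit of the paper's environment. All names, the layout scheme, and the floor-plan phrasing are assumptions for illustration, not the authors' actual code: each episode randomizes which office sits where, and the observation includes a textual floor plan, so a policy can only succeed reliably by learning to read it.

```python
import random

class ToyOfficeEnv:
    """Toy 1-D corridor of offices (an illustrative stand-in, not the
    paper's environment). Each episode samples a target office; the
    observation includes a textual floor plan, so reading the language
    is the only reliable route to the reward."""

    def __init__(self, n_offices=4, seed=0):
        self.rng = random.Random(seed)
        self.n_offices = n_offices
        self.reset()

    def reset(self):
        # Randomize which office id sits at which corridor position.
        self.layout = list(range(self.n_offices))
        self.rng.shuffle(self.layout)
        self.target = self.rng.randrange(self.n_offices)
        self.pos = 0
        # Floor plan: simple natural-language hints mapping offices to positions.
        self.floor_plan = [
            f"office {o} is at position {p}" for p, o in enumerate(self.layout)
        ]
        return self._obs()

    def _obs(self):
        return {"pos": self.pos, "target": self.target,
                "floor_plan": self.floor_plan}

    def step(self, action):
        # Actions: 0 = move left, 1 = move right, 2 = enter current office.
        if action == 0:
            self.pos = max(0, self.pos - 1)
        elif action == 1:
            self.pos = min(self.n_offices - 1, self.pos + 1)
        elif action == 2:
            reward = 1.0 if self.layout[self.pos] == self.target else -1.0
            return self._obs(), reward, True
        return self._obs(), 0.0, False


def literate_policy(obs):
    """A hand-scripted 'literate' policy that parses the floor plan —
    standing in for the reading behavior a meta-RL agent would have to
    learn implicitly from reward alone."""
    goal = obs["pos"]
    for line in obs["floor_plan"]:
        parts = line.split()
        if int(parts[1]) == obs["target"]:
            goal = int(parts[-1])
            break
    if obs["pos"] < goal:
        return 1  # move right
    if obs["pos"] > goal:
        return 0  # move left
    return 2      # enter the office
```

A policy that ignores the floor plan can only guess among the shuffled offices, while the literate policy above walks straight to the target; the paper's point is that meta-RL pressure across many such randomized episodes pushes agents toward the literate behavior without any explicit language objective.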