Large Language Models as Generalizable Policies for Embodied Tasks

By Andrew Szot et al.
Published on April 16, 2024

Table of Contents

1. Introduction
2. Related Work
3. Method
4. Language Rearrangement Problem

Summary

The document discusses adapting large language models into generalizable policies for embodied visual tasks. It introduces the Large Language model Reinforcement learning Policy (LLaRP), which adapts a frozen, pre-trained LLM to take text instructions and egocentric visual observations as input and to output actions directly in the environment, with the adapter modules trained via reinforcement learning. LLaRP is robust to paraphrases of task instructions and generalizes to novel tasks: evaluated on 1,000 unseen tasks, it achieves a 42% success rate. The authors also introduce the Language Rearrangement task, a benchmark of 150,000 training and 1,000 evaluation tasks, for studying generalization in embodied AI.
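
To make the architecture concrete, below is a minimal sketch of a LLaRP-style policy in PyTorch: a frozen language-model backbone combined with a trainable observation encoder and a trainable action head. Everything here is illustrative rather than the authors' implementation: the class and attribute names (`LLaRPStylePolicy`, `obs_encoder`, `action_head`), the feature sizes, and the action count are assumptions, and a small generic transformer stands in for the frozen LLaMA-style backbone described in the paper.

```python
# A minimal sketch of a frozen-LLM policy with trainable adapters.
import torch
import torch.nn as nn

class LLaRPStylePolicy(nn.Module):
    def __init__(self, d_model=256, vis_dim=512, n_actions=70):
        # vis_dim and n_actions are hypothetical sizes for illustration.
        super().__init__()
        # Stand-in for the frozen pre-trained LLM backbone.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.llm = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.llm.parameters():
            p.requires_grad = False  # backbone stays frozen; only adapters train

        # Trainable input adapter: projects visual features into the
        # backbone's token-embedding space.
        self.obs_encoder = nn.Linear(vis_dim, d_model)
        # Trainable output head: decodes hidden states into action logits.
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, instruction_embeds, visual_feats):
        # Concatenate instruction tokens with encoded visual observations.
        obs_tokens = self.obs_encoder(visual_feats)
        tokens = torch.cat([instruction_embeds, obs_tokens], dim=1)
        hidden = self.llm(tokens)
        # Predict the next action from the final token's hidden state.
        return self.action_head(hidden[:, -1])

policy = LLaRPStylePolicy()
instr = torch.randn(1, 8, 256)   # dummy instruction token embeddings
obs = torch.randn(1, 4, 512)     # dummy per-step visual features
logits = policy(instr, obs)      # action logits over the skill vocabulary
```

The key design point this sketch tries to capture is that only the encoder and head receive gradients: the backbone's pre-trained knowledge is reused as-is while the adapters are optimized with reinforcement learning.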