Continuous-Time Model-Based Reinforcement Learning

By Ç. Yıldız et al.
Published on June 11, 2021

Table of Contents

1. Introduction
2. Continuous-Time Model-Based Reinforcement Learning
3. Dynamics Learning
4. Continuous-Time Actor-Critic Algorithm

Summary

Model-based reinforcement learning (MBRL) approaches typically rely on discrete-time state transition models, yet many control tasks evolve in continuous time. This paper proposes a continuous-time MBRL framework built on a novel actor-critic method. The approach uses Bayesian neural ODEs to infer the unknown state evolution differentials and solves the resulting continuous-time control problem explicitly. Experimental results demonstrate the model's robustness to noisy data and its effectiveness on control problems. The paper discusses the challenges specific to continuous-time reinforcement learning and presents a comprehensive approach for addressing them.
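To make the dynamics-learning idea concrete, below is a minimal sketch (not the authors' code) of a continuous-time dynamics model in PyTorch: a small network parameterizes the state differential ds/dt = f(s, a), and a fixed-step Runge-Kutta integrator rolls the state forward over arbitrary time horizons instead of a fixed discrete step. The class and method names (`ODEDynamics`, `rk4_step`, `rollout`) are hypothetical, and the Bayesian treatment of the network weights used in the paper is omitted for brevity.

```python
# Minimal sketch, assuming PyTorch: a neural ODE dynamics model for MBRL.
# The network approximates ds/dt = f(s, a); RK4 integration predicts future
# states at arbitrary horizons rather than at a fixed discrete time step.
import torch
import torch.nn as nn


class ODEDynamics(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def deriv(self, s: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        """Approximate the time differential ds/dt for state s and action a."""
        return self.f(torch.cat([s, a], dim=-1))

    def rk4_step(self, s, a, dt):
        """One Runge-Kutta 4 step with the action held constant over dt."""
        k1 = self.deriv(s, a)
        k2 = self.deriv(s + 0.5 * dt * k1, a)
        k3 = self.deriv(s + 0.5 * dt * k2, a)
        k4 = self.deriv(s + dt * k3, a)
        return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def rollout(self, s0, actions, dt):
        """Integrate forward through a sequence of actions; return all states."""
        states, s = [s0], s0
        for a in actions:
            s = self.rk4_step(s, a, dt)
            states.append(s)
        return torch.stack(states)


if __name__ == "__main__":
    model = ODEDynamics(state_dim=3, action_dim=1)
    s0 = torch.zeros(3)
    actions = torch.zeros(10, 1)  # ten constant actions
    traj = model.rollout(s0, actions, dt=0.05)
    print(traj.shape)  # (11, 3): initial state plus ten integrated steps
```

A dynamics model of this form can be trained by matching integrated trajectories to observed ones, and the integration step `dt` need not be uniform, which is what allows a continuous-time actor-critic to plan over arbitrary time intervals.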