Nüßlein, Jonas; Illium, Steffen; Müller, Robert; Gabor, Thomas and Linnhoff-Popien, Claudia (2022): Case-Based Inverse Reinforcement Learning Using Temporal Coherence. In: Keane, Mark T. and Wiratunga, Nirmalie (eds.): Case-Based Reasoning Research and Development. 30th International Conference on Case-Based Reasoning Research and Development (ICCBR 2022), Nancy, France, September 12–15, 2022. Lecture Notes in Artificial Intelligence (LNAI), vol. 13405. Cham: Springer, pp. 304–317.

Full text not available on 'Open Access LMU'.

Abstract

Providing expert trajectories for Imitation Learning is often expensive and time-consuming. The goal must therefore be to develop algorithms that require as little expert data as possible. In this paper we present an algorithm that imitates the expert's higher-level strategy rather than imitating the expert at the action level, which we hypothesize requires less expert data and makes training more stable. As a prior, we assume that the higher-level strategy is to reach an unknown target state area, which we hypothesize is a valid prior for many domains in Reinforcement Learning. The target state area is unknown, but since the expert has demonstrated how to reach it, the agent tries to reach states similar to those of the expert. Building on the idea of Temporal Coherence, our algorithm trains a neural network to predict whether two states are similar, in the sense that they may occur close in time. During inference, the agent compares its current state with expert states from a Case Base for similarity. The results show that our approach can still learn a near-optimal policy in settings with very little expert data, where algorithms that imitate the expert at the action level can no longer do so.
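The abstract describes two components: a similarity network trained on a self-supervised temporal-coherence signal (states close in time count as similar), and a case-base lookup that scores the agent's current state against stored expert states. The following is a minimal PyTorch sketch of those two ideas only; all names (SimilarityNet, temporal_coherence_loss, case_base_reward), the architecture, and the hyperparameters are hypothetical illustrations, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SimilarityNet(nn.Module):
    """Predicts whether two states may occur close together in time."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s1: torch.Tensor, s2: torch.Tensor) -> torch.Tensor:
        # Similarity score in (0, 1) for the concatenated state pair.
        return torch.sigmoid(self.net(torch.cat([s1, s2], dim=-1)))

def temporal_coherence_loss(model: SimilarityNet,
                            trajectory: torch.Tensor,
                            window: int = 3) -> torch.Tensor:
    """Self-supervised objective: states within `window` steps of each
    other are labeled similar, temporally distant states dissimilar."""
    T = trajectory.shape[0]
    i = torch.randint(0, T - window, (1,)).item()
    anchor = trajectory[i]
    positive = trajectory[i + torch.randint(1, window + 1, (1,)).item()]
    j = torch.randint(0, T, (1,)).item()
    while abs(j - i) <= window:  # resample until far from the anchor
        j = torch.randint(0, T, (1,)).item()
    negative = trajectory[j]
    bce = nn.BCELoss()
    return (bce(model(anchor, positive), torch.ones(1)) +
            bce(model(anchor, negative), torch.zeros(1)))

def case_base_reward(model: SimilarityNet,
                     state: torch.Tensor,
                     case_base: torch.Tensor) -> float:
    """At inference time, score the agent's current state by its maximum
    predicted similarity to any expert state stored in the case base."""
    with torch.no_grad():
        sims = model(state.expand(case_base.shape[0], -1), case_base)
    return sims.max().item()
```

Under this reading, case_base_reward would serve as a learning signal for a standard RL agent: the agent is rewarded for reaching states the network judges similar to expert states, rather than for matching expert actions directly.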
