
Phan, Thomy; Schmid, Kyrill; Belzner, Lenz; Gabor, Thomas; Feld, Sebastian and Linnhoff-Popien, Claudia (2019): Distributed Policy Iteration for Scalable Approximation of Cooperative Multi-Agent Policies. In: AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems, pp. 2162-2164.

Full text not available on 'Open Access LMU'.

Abstract

We propose Strong Emergent Policy (STEP) approximation, a scalable approach to learning strong decentralized policies for cooperative multi-agent systems (MAS) with a distributed variant of policy iteration. To this end, we use function approximation to learn from the action recommendations of a decentralized multi-agent planning algorithm. STEP combines decentralized multi-agent planning with centralized learning, requiring only a generative model for distributed black-box optimization. We experimentally evaluate STEP in two challenging stochastic domains with large state and joint action spaces and show that, by combining multi-agent open-loop planning with centralized function approximation, STEP learns stronger policies than standard multi-agent reinforcement learning algorithms. The learned policies can be reintegrated into the multi-agent planning process to further improve performance.
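
The abstract describes a policy-iteration loop of this shape: a decentralized planner derives action recommendations from a generative model, a centralized function approximator is trained to imitate those recommendations, and the learned policy is fed back into planning. Below is a minimal sketch of that loop, assuming a toy generative model, softmax linear policies, and a simple Monte Carlo open-loop planner; the environment and all names here are hypothetical illustrations, not the paper's actual algorithm or code.

    # Hedged sketch of a STEP-like loop; toy stand-ins, not the authors' method.
    import numpy as np

    rng = np.random.default_rng(0)
    N_AGENTS, STATE_DIM, N_ACTIONS = 2, 4, 3

    def generative_model(state, joint_action):
        """Hypothetical black-box simulator: returns (next_state, reward)."""
        next_state = state + 0.1 * rng.standard_normal(STATE_DIM)
        reward = -np.linalg.norm(state) + 0.1 * sum(joint_action)
        return next_state, reward

    def sample_action(state, theta):
        """Softmax linear policy: one weight matrix per agent (assumption)."""
        logits = theta @ state
        p = np.exp(logits - logits.max()); p /= p.sum()
        return int(rng.choice(N_ACTIONS, p=p))

    def decentralized_plan(state, policies, n_rollouts=16, horizon=4):
        """Open-loop planning per agent: score each first action by Monte Carlo
        rollouts through the generative model, with the other agents sampled
        from the current learned policies (stand-in for the paper's planner)."""
        recommendation = []
        for agent in range(N_AGENTS):
            scores = np.zeros(N_ACTIONS)
            for a0 in range(N_ACTIONS):
                for _ in range(n_rollouts):
                    s, ret = state.copy(), 0.0
                    for t in range(horizon):
                        joint = [a0 if (i == agent and t == 0)
                                 else sample_action(s, policies[i])
                                 for i in range(N_AGENTS)]
                        s, r = generative_model(s, joint)
                        ret += r
                    scores[a0] += ret / n_rollouts
            recommendation.append(int(np.argmax(scores)))
        return recommendation

    def train_step(theta, states, actions, lr=0.1):
        """Centralized learning: fit the softmax policy to the planner's
        recommendations with a cross-entropy (log-likelihood) gradient step."""
        for s, a in zip(states, actions):
            logits = theta @ s
            p = np.exp(logits - logits.max()); p /= p.sum()
            grad = -np.outer(p, s)
            grad[a] += s                    # (onehot(a) - p) outer s
            theta += lr * grad
        return theta

    # Policy-iteration-style outer loop: plan -> imitate -> plug policy back in.
    policies = [np.zeros((N_ACTIONS, STATE_DIM)) for _ in range(N_AGENTS)]
    state = rng.standard_normal(STATE_DIM)
    for episode in range(10):
        data = [([], []) for _ in range(N_AGENTS)]
        for step in range(20):
            rec = decentralized_plan(state, policies)
            for i, a in enumerate(rec):
                data[i][0].append(state.copy()); data[i][1].append(a)
            state, _ = generative_model(state, rec)
        for i in range(N_AGENTS):
            policies[i] = train_step(policies[i], data[i][0], data[i][1])

Because the learned policies are used to sample the other agents' actions inside decentralized_plan, each iteration plans against the previous iteration's policy, mirroring the reintegration step the abstract mentions.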
