Phan, Thomy; Schmid, Kyrill; Belzner, Lenz; Gabor, Thomas; Feld, Sebastian and Linnhoff-Popien, Claudia (2019): Distributed Policy Iteration for Scalable Approximation of Cooperative Multi-Agent Policies. In: AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems, pp. 2162-2164.


Abstract

We propose Strong Emergent Policy (STEP) approximation, a scalable approach to learning strong decentralized policies for cooperative multi-agent systems (MAS) with a distributed variant of policy iteration. To this end, we use function approximation to learn from the action recommendations of a decentralized multi-agent planning algorithm. STEP combines decentralized multi-agent planning with centralized learning, requiring only a generative model for distributed black-box optimization. We experimentally evaluate STEP in two challenging stochastic domains with large state and joint action spaces, and show that, by combining multi-agent open-loop planning with centralized function approximation, STEP learns stronger policies than standard multi-agent reinforcement learning algorithms. The learned policies can be reintegrated into the multi-agent planning process to further improve performance.
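
The abstract outlines the STEP loop at a high level: decentralized open-loop planners produce per-agent action recommendations from a generative model, a centralized function approximator is trained on those recommendations, and the learned policy is fed back into planning. The sketch below is a minimal toy illustration of that loop under assumed details, not the authors' implementation; the GenerativeModel, SoftmaxPolicy, and plan_open_loop names, the toy domain, and all hyperparameters are illustrative assumptions.

```python
# Illustrative sketch of a STEP-style loop: decentralized open-loop planning
# recommends actions, a centralized approximator fits them, and the learned
# policy biases subsequent planning rollouts. All details are assumptions.
import math
import random

N_AGENTS, N_ACTIONS, HORIZON, N_ROLLOUTS = 2, 3, 5, 64

class GenerativeModel:
    """Toy black-box simulator: returns the next state and a shared team reward."""
    def step(self, state, joint_action):
        next_state = (state + sum(joint_action)) % 10
        reward = 1.0 if sum(joint_action) % N_ACTIONS == state % N_ACTIONS else 0.0
        return next_state, reward

class SoftmaxPolicy:
    """Centralized function approximator (here: per-(agent, state) action logits)."""
    def __init__(self):
        self.logits = {}  # (agent, state) -> list of action logits

    def probs(self, agent, state):
        logits = self.logits.setdefault((agent, state), [0.0] * N_ACTIONS)
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        return [e / z for e in exps]

    def sample(self, agent, state):
        return random.choices(range(N_ACTIONS), weights=self.probs(agent, state))[0]

    def update(self, agent, state, recommended, lr=0.5):
        # Cross-entropy-style step toward the planner's recommended action.
        p = self.probs(agent, state)
        logits = self.logits[(agent, state)]
        for a in range(N_ACTIONS):
            target = 1.0 if a == recommended else 0.0
            logits[a] += lr * (target - p[a])

def plan_open_loop(model, policy, agent, state):
    """Decentralized open-loop planner: each agent scores its own first action via
    sampled rollouts, simulating the other agents with the learned policy (the
    'reintegration' of the policy into planning)."""
    best_action, best_value = 0, -float("inf")
    for first_action in range(N_ACTIONS):
        total = 0.0
        for _ in range(N_ROLLOUTS):
            s, ret = state, 0.0
            for t in range(HORIZON):
                joint = [policy.sample(i, s) for i in range(N_AGENTS)]
                if t == 0:
                    joint[agent] = first_action
                s, r = model.step(s, joint)
                ret += r
            total += ret
        if total / N_ROLLOUTS > best_value:
            best_action, best_value = first_action, total / N_ROLLOUTS
    return best_action

model, policy = GenerativeModel(), SoftmaxPolicy()
state = 0
for episode in range(200):
    # Each agent plans independently; the centralized learner fits the results.
    recommendations = [plan_open_loop(model, policy, i, state) for i in range(N_AGENTS)]
    for agent, action in enumerate(recommendations):
        policy.update(agent, state, action)
    state, _ = model.step(state, recommendations)
```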
