2:20 PM - 2:40 PM
[3O4-OS-44b-03] Multi-Agent Reinforcement Learning based on Variational Bayesian Naming Game
Keywords: Symbol Emergence, Multi-Agent Reinforcement Learning, Variational Inference
To achieve cooperative behavior, humans must infer the purposes and thoughts (internal states) of others. Since internal states cannot be observed directly, humans estimate them through communication using symbols such as language. These symbols emerge uniquely within a group and for a purpose, a process known as "emergent communication." Conventional multi-agent reinforcement learning methods based on emergent communication use the Metropolis-Hastings naming game, assuming a natural setting in which independent agents communicate with each other. However, these methods incur high computational costs because they rely on sampling for parameter inference, and they are limited to two-agent scenarios. In this paper, we propose a novel approach that combines the Variational Bayesian Naming Game with the Soft Actor-Critic algorithm to reduce computational costs and enable cooperative learning among multiple agents. We validate the effectiveness of the proposed method on a navigation task in which agents aim to reach a goal without collisions by communicating with each other.
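For context, the Metropolis-Hastings naming game referenced above has the listener accept a speaker's proposed sign with a probability given by a likelihood ratio. The sketch below is a minimal illustration of that acceptance step only, with hypothetical likelihood values; it is not the authors' implementation, and the variational replacement proposed in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_accept(p_listener_proposed: float, p_listener_current: float) -> bool:
    """Metropolis-Hastings acceptance step of the naming game.

    The listener accepts the speaker's proposed sign with probability
    min(1, p_listener(proposed) / p_listener(current)), where both
    probabilities come from the listener's own internal model.
    """
    ratio = p_listener_proposed / p_listener_current
    return bool(rng.random() < min(1.0, ratio))

# Toy example with hypothetical listener likelihoods: when the proposed
# sign is at least as likely as the current one, it is always accepted.
print(mh_accept(0.8, 0.4))  # ratio >= 1, so this prints True
```

The sampling implied by this accept/reject loop is what the abstract identifies as the main computational cost; the proposed method replaces it with variational inference.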