3:20 PM - 3:40 PM
[4H3-OS-6b-05] Learning and Generating Cooperative Behavior Based on Multi-Agent Symbol Emergence
Fusion of Control as Inference and Metropolis Naming Game
Keywords: multi-agent, symbol emergence, reinforcement learning
This paper proposes a probabilistic generative model (PGM) of emergent communication for multi-step cooperative tasks performed by two agents. The agents plan their actions by probabilistic inference, known as control as inference, and the messages exchanged between the two agents are treated as latent variables estimated from the planned actions. Through these messages, each agent can convey information about its own actions and obtain information about the actions of the other agent. The agents therefore adjust their actions according to the estimated messages to achieve cooperative tasks. This inference of messages can be regarded as communication, and the procedure can be formulated as the Metropolis naming game. Through experiments in a grid-world environment, we show that the proposed PGM can infer meaningful messages that enable the agents to achieve the cooperative task.
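To illustrate the kind of message inference described in the abstract, below is a minimal, hypothetical Python sketch of a Metropolis-Hastings-style naming game step between two agents: a speaker proposes a message inferred from its own planned actions, and a listener accepts or rejects the proposal under its own model. The message space, the Agent class, the softmax likelihood, and the toy planned-action features are assumptions made for this example, not the authors' implementation.

# Illustrative sketch (assumptions noted in comments), not the paper's code:
# one Metropolis-Hastings-style naming game step between two agents.
import numpy as np

rng = np.random.default_rng(0)

N_MESSAGES = 4          # assumed size of the discrete message (sign) space
N_ACTION_FEATURES = 3   # assumed dimensionality of a planned-action summary


class Agent:
    """Holds a simple categorical model p(message | planned actions)."""

    def __init__(self):
        # Assumed parameterization: one weight vector per message symbol.
        self.weights = rng.normal(size=(N_MESSAGES, N_ACTION_FEATURES))

    def message_probs(self, action_features):
        """Softmax likelihood over messages given a planned-action summary."""
        logits = self.weights @ action_features
        logits -= logits.max()
        probs = np.exp(logits)
        return probs / probs.sum()

    def propose_message(self, action_features):
        """Speaker side: sample a message from its own distribution."""
        return rng.choice(N_MESSAGES, p=self.message_probs(action_features))


def mh_naming_game_step(speaker, listener, speaker_actions,
                        listener_actions, current_message):
    """One Metropolis-Hastings acceptance step of the naming game.

    The speaker proposes a message from its model; the listener accepts it
    with probability min(1, p_l(proposed) / p_l(current)) under its own
    model, otherwise the current shared message is kept.
    """
    proposal = speaker.propose_message(speaker_actions)
    p_listener = listener.message_probs(listener_actions)
    accept_ratio = p_listener[proposal] / max(p_listener[current_message], 1e-12)
    if rng.random() < min(1.0, accept_ratio):
        return proposal
    return current_message


if __name__ == "__main__":
    agent_a, agent_b = Agent(), Agent()
    # Toy planned-action summaries for each agent (assumed features).
    actions_a = rng.normal(size=N_ACTION_FEATURES)
    actions_b = rng.normal(size=N_ACTION_FEATURES)

    message = 0
    for t in range(10):
        # Alternate speaker/listener roles, as in a naming game.
        if t % 2 == 0:
            message = mh_naming_game_step(agent_a, agent_b, actions_a,
                                          actions_b, message)
        else:
            message = mh_naming_game_step(agent_b, agent_a, actions_b,
                                          actions_a, message)
        print(f"round {t}: shared message = {message}")

In the sketch, the planned-action summaries stand in for the output of the control-as-inference planning step; in the actual model these would come from each agent's inferred action plan rather than random features.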