5:40 PM - 6:00 PM
[2F6-OS-16b-02] A Study on Knowledge Distillation and Transfer by Reward Design in Multi-agent Reinforcement Learning
[Online]
Keywords: Multiagent System, Reinforcement Learning, Reward Design, Knowledge Transfer
We aim to distill knowledge in multi-agent reinforcement learning and to deploy it in unknown environments by transferring and combining it. This paper proposes an implicit cooperative learning method that makes knowledge distillation possible in multi-agent reinforcement learning. The proposed method has agents learn cooperative behavior from limited information so that each agent distills its own knowledge. In addition, this paper discusses the distilled knowledge and its transfer. Concretely, under the assumption that the reward function can be divided into three terms: a term the agent itself can change through its actions, a term the other agent can change through its actions, and a term that changes due to interactions between agents, the proposed method makes each agent learn to increase its own term and the interaction term while avoiding unexpected interactions. We evaluate the proposed method by comparing it with Self-Other Modeling and Asynchronous Advantage Actor-Critic. The experimental results show that the proposed method uses less information than the conventional methods while performing as well as or better than them, demonstrating the distillation. This paper also discusses the results to provide insights and perspectives on knowledge transfer.
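The exact formulation is not given on this page; a minimal sketch of the assumed three-term reward decomposition, with hypothetical names and a hypothetical penalty term, might look like the following.

```python
# Minimal sketch (not the authors' code): the assumed three-term reward split
# r = r_self + r_other + r_interaction, with hypothetical names throughout.

def decomposed_reward(r_self: float, r_other: float, r_interaction: float) -> float:
    """Total reward for one agent under the assumed decomposition."""
    return r_self + r_other + r_interaction

def shaped_objective(r_self: float, r_interaction: float,
                     unexpected_interaction_penalty: float = 0.0) -> float:
    """What each agent is (hypothetically) trained to increase: its own term
    plus the interaction term, discouraging unexpected interactions."""
    return r_self + r_interaction - unexpected_interaction_penalty
```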