JSAI2021

Presentation information

International Session

International Session (Work in progress) » EW-2 Machine learning

[3N3-IS-2e] Machine learning (5/5)

Thu. Jun 10, 2021 3:20 PM - 5:00 PM Room N (IS room)

Chair: Hisashi Kashima (Kyoto University)

3:40 PM - 4:00 PM

[3N3-IS-2e-02] Improving Exploration and Convergence Speed with Multi-Actor Control DDPG

〇David John Lucien Felices1, Mitsuhiko Kimoto1, Shoya Matsumori1, Michita Imai1 (1. Keio University)

Keywords: Reinforcement Learning, DDPG, Multi-Actor, Deep Exploration, OpenAI Gym

In Reinforcement Learning, the Deep Deterministic Policy Gradient (DDPG) algorithm is considered a powerful tool for continuous control tasks. However, in complex environments, DDPG does not always perform well due to its inefficient exploration mechanism. To address this issue, several studies have increased the number of actors, but without considering whether there is an optimal number of actors for an agent.
We propose MAC-DDPG, a DDPG architecture with a variable number of actor networks. We also compare the computational cost and learning curves for different numbers of actor networks on various OpenAI Gym environments.
The main goal of this research is to keep the computational cost as low as possible while improving deep exploration, so that increasing the number of actors is not detrimental to solving less complex environments quickly.
Current results show a potential increase in scores on some environments (around +10%) compared with classic DDPG, but running the same number of epochs takes considerably longer (runtime increases linearly with the number of actors).
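The abstract describes MAC-DDPG only at a high level. As a rough illustration of the general idea (a DDPG agent holding several actor networks), the PyTorch sketch below shows one plausible shape for such an ensemble; the critic-ranked action selection rule, the network sizes, and all names are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a multi-actor ensemble for DDPG (PyTorch).
# NOTE: the selection rule (critic-ranked proposals), network sizes, and
# hyperparameters are assumptions for illustration; the abstract does not
# specify how MAC-DDPG coordinates its actors.
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy: maps a state to a bounded continuous action."""

    def __init__(self, state_dim: int, action_dim: int, max_action: float):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.max_action * self.net(state)


class Critic(nn.Module):
    """Q-network: scores a (state, action) pair."""

    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


class MultiActorEnsemble(nn.Module):
    """Variable-size pool of actor networks, echoing the abstract's premise."""

    def __init__(self, n_actors: int, state_dim: int, action_dim: int,
                 max_action: float):
        super().__init__()
        self.actors = nn.ModuleList(
            Actor(state_dim, action_dim, max_action) for _ in range(n_actors)
        )

    def select_action(self, state: torch.Tensor, critic: Critic) -> torch.Tensor:
        # Assumed rule: each actor proposes an action and the critic's
        # Q-estimate picks the best proposal (one plausible scheme).
        with torch.no_grad():
            proposals = [actor(state) for actor in self.actors]
            q_values = torch.stack([critic(state, a) for a in proposals])
            best = int(torch.argmax(q_values))
        return proposals[best]


# Example on a Pendulum-like task: 3-dim state, 1-dim action in [-2, 2].
ensemble = MultiActorEnsemble(n_actors=4, state_dim=3, action_dim=1, max_action=2.0)
critic = Critic(state_dim=3, action_dim=1)
action = ensemble.select_action(torch.randn(1, 3), critic)
```

Under this sketch, per-step cost grows with the number of actor forward passes, which is consistent with the abstract's observation that runtime increases roughly linearly with the number of actors.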
