4:00 PM - 4:20 PM
[1Q4-GS-11-03] Parallel Deep Reinforcement Learning with Model-Free and Model-Based Methods
Keywords: reinforcement learning, multiple module, model-based and model-free, model learning
Reinforcement learning methods can be categorized into model-based methods, which exploit an (estimated) model of the environment, and model-free methods, which learn a policy directly through interaction with the environment. To improve learning efficiency, we previously proposed CRAIL, which dynamically selects a learning module from multiple heterogeneous modules according to learning performance while all modules are trained simultaneously. However, CRAIL does not consider model-based methods. This study extends CRAIL to handle both model-based and model-free methods and investigates whether dynamically switching between them improves learning efficiency. The proposed method was evaluated on MuJoCo benchmark tasks. Experimental results show that a model-based method with a simple model was selected in the early stage of learning, while a model-based method with a more complex model was used in the later stages. Furthermore, model-free methods were selected when the network did not have sufficient capacity to represent the environmental dynamics.
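The abstract does not give implementation details, but the selection mechanism it describes (training heterogeneous modules in parallel and choosing the acting module by learning performance) might look roughly like the sketch below. The CRAILSelector class, its softmax selection rule, and all hyperparameters are illustrative assumptions, not the paper's actual method; modules are assumed to expose a learn() method and may be model-free or model-based learners.

    import numpy as np

    class CRAILSelector:
        """Hypothetical sketch of CRAIL-style module selection.
        Every module learns from the same shared experience, and the
        module that controls the agent is drawn from a softmax over
        each module's recent average return."""

        def __init__(self, modules, temperature=1.0, smoothing=0.95):
            self.modules = modules          # heterogeneous learning modules
            self.temperature = temperature  # softmax temperature (assumed hyperparameter)
            self.smoothing = smoothing      # exponential smoothing of per-module returns
            self.avg_return = np.zeros(len(modules))

        def select_module(self):
            # Softmax over smoothed returns: better-performing modules
            # are more likely to control the agent, but weaker ones
            # still act occasionally and keep being evaluated.
            prefs = self.avg_return / self.temperature
            probs = np.exp(prefs - prefs.max())
            probs /= probs.sum()
            return np.random.choice(len(self.modules), p=probs)

        def update(self, idx, episode_return, transitions):
            # Only the acting module's performance estimate is updated...
            self.avg_return[idx] = (self.smoothing * self.avg_return[idx]
                                    + (1 - self.smoothing) * episode_return)
            # ...but all modules are trained simultaneously on the
            # shared experience, as the abstract describes.
            for m in self.modules:
                m.learn(transitions)

Under this reading, a cheap model-based module with a simple dynamics model would dominate early (it learns fastest from little data), and control would shift to more complex models, or to model-free modules, as their smoothed returns overtake it.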