JSAI2018

Presentation information

Poster presentation


[3Pin1] Interactive (1)

Thu. Jun 7, 2018 9:00 AM - 10:40 AM Room P (4F Emerald Lobby)


[3Pin1-30] Hybrid Policy Gradient for Deep Reinforcement Learning

〇Praveen Singh Thakur1, Masaru Sogabe1, Katsuyoshi Sakamoto2, Koichi Yamaguchi2, Dinesh Bahadur Malla2, Shinji Yokogawa2, Tomah Sogabe1,2 (1. GRID Inc., 2. The University of Electro-Communications)

Keywords: Reinforcement Learning, Policy Gradient, Continuous Action

In this paper, to achieve stable learning and faster convergence on continuous-action reinforcement learning tasks, we propose an alternative way of updating the actor (policy) in the Deep Deterministic Policy Gradient (DDPG) algorithm. In our proposed Hybrid DDPG (H-DDPG for short), the actor is updated as in DDPG at one time step, and at the next time step the policy parameters are moved according to the TD-error of the critic. In one of five trial runs on the RoboschoolInvertedPendulumSwingup-v1 environment, the reward obtained early in training with H-DDPG was higher than with DDPG. In the hybrid update, the policy gradients are weighted by the TD-error. This 1) yields a higher reward than DDPG and 2) pushes the policy parameters in a direction that makes actions with higher reward more likely to occur than the others. This implies that if the policy finds good rewards while exploring at an early stage, it may converge quickly, and conversely otherwise. In the remaining trial runs, however, H-DDPG performed the same as DDPG.
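
Below is a minimal PyTorch sketch of the alternating actor update described above. The abstract does not give the exact weighting formula, so the form of the TD-error weighting, the network shapes, and the hybrid_actor_update helper are assumptions for illustration only, not the authors' implementation.

import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small fully connected network; concatenates its inputs."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))

    def forward(self, *xs):
        return self.net(torch.cat(xs, dim=-1))

def hybrid_actor_update(step, batch, actor, critic, target_actor,
                        target_critic, actor_opt, gamma=0.99):
    """Alternate between the standard DDPG actor update and a
    TD-error-weighted update (the hybrid step)."""
    s, a, r, s2, done = batch  # tensors sampled from a replay buffer

    if step % 2 == 0:
        # Standard DDPG step: ascend Q(s, mu(s)).
        actor_loss = -critic(s, actor(s)).mean()
    else:
        # Hybrid step (assumed form): weight each sample's objective by
        # the critic's TD-error, so transitions with larger TD-error
        # move the policy parameters more.
        with torch.no_grad():
            td_error = (r + gamma * (1.0 - done)
                        * target_critic(s2, target_actor(s2))
                        - critic(s, a))
        actor_loss = -(td_error * critic(s, actor(s))).mean()

    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

Under this reading, even-numbered steps follow the usual deterministic policy gradient, while odd-numbered steps scale each sample's contribution by the critic's TD-error, which matches the abstract's claim that actions associated with higher reward are made more likely to recur.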