JSAI2019

Presentation information

General Session » [GS] J-2 Machine learning

[1I3-J-2] Machine learning: advances in reinforcement learning

Tue. Jun 4, 2019 3:20 PM - 4:20 PM Room I (306+307 Small meeting rooms)

Chair: Masahiro Yukishima, Reviewer: Kohei Miyaguchi

4:00 PM - 4:20 PM

[1I3-J-2-03] Imitation learning based on entropy-regularized reinforcement learning

〇Eiji Uchibe¹ (1. Advanced Telecommunications Research Institute International)

Keywords: imitation learning, reinforcement learning, inverse reinforcement learning, entropy regularization

This paper proposes Entropy-Regularized Imitation Learning (ERIL), a combination of forward and inverse reinforcement learning. ERIL utilizes the soft Bellman optimality equation, in which the reward function is augmented by the entropy of the learning policy and the Kullback-Leibler (KL) divergence between the learning policy and a baseline policy. We show that inverse RL can be interpreted as estimating the log-ratio between two policies, and that this log-ratio can be estimated efficiently by binary logistic regression. Forward RL is given by a variant of Dynamic Policy Programming, and the overall algorithm can be interpreted as minimizing the KL divergence between the learning policy and the estimated expert policy. Experimental results on MuJoCo-simulated environments show that ERIL is more sample-efficient than previous methods such as GAIL and AIRL, because the forward RL step of ERIL is off-policy.
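To make the regularization concrete, here is a minimal sketch of the kind of soft Bellman optimality equation the abstract describes, written with a single inverse temperature \(\beta\) that folds the entropy and KL terms into one KL-regularized term (the paper's formulation may keep separate coefficients for the two regularizers):

\[
V(s) = \frac{1}{\beta} \ln \sum_{a} b(a \mid s)\, \exp\!\bigl( \beta \, [\, r(s,a) + \gamma \, \mathbb{E}_{s'}[V(s')] \,] \bigr),
\qquad
\pi^{*}(a \mid s) = b(a \mid s)\, \exp\!\bigl( \beta \, ( Q(s,a) - V(s) ) \bigr),
\]

where \(b\) is the baseline policy and \(Q(s,a) = r(s,a) + \gamma \, \mathbb{E}_{s'}[V(s')]\). Under this sketch the log-ratio \(\ln(\pi^{*}/b) = \beta (Q - V)\) ties the regularized value quantities to the two policies, which is why estimating a policy log-ratio recovers the inverse-RL quantity.

The log-ratio step itself can be sketched as density-ratio estimation by classification: train a binary logistic classifier to separate expert samples from learner samples and read the log-ratio off its logit. The sketch below is illustrative only; the function names and the scikit-learn model are assumptions made here (the paper's implementation would typically use a learned discriminator), not the authors' code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_log_ratio(expert_sa, learner_sa):
        """Estimate ln(pi_expert(a|s) / pi_learner(a|s)) from samples.

        expert_sa, learner_sa: arrays of concatenated (state, action)
        features, one row per sample. Illustrative sketch only.
        """
        X = np.vstack([expert_sa, learner_sa])
        y = np.concatenate([np.ones(len(expert_sa)),
                            np.zeros(len(learner_sa))])
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        # The Bayes-optimal logit equals the density log-ratio plus the
        # log ratio of class priors; subtract the prior term to undo it.
        prior_correction = np.log(len(expert_sa) / len(learner_sa))

        def log_ratio(sa):
            return clf.decision_function(sa) - prior_correction

        return log_ratio

As the abstract notes, the forward RL step then only needs to minimize the KL divergence between the learning policy and the estimated expert policy; because that step is off-policy, it can reuse stored samples, which is the source of the claimed sample efficiency over GAIL and AIRL.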