5:50 PM - 6:10 PM
[2J6-GS-2-01] Transferable Inverse Reinforcement Learning with Demonstrations in Multiple Dynamics
Keywords: Inverse Reinforcement Learning, Reinforcement Learning, Maximum Entropy
Reinforcement learning (RL) has recently achieved high performance on a wide range of complex decision-making and control tasks, but applying RL to real-world problems requires careful engineering of reward functions. Inverse reinforcement learning (IRL) is a framework for constructing reward functions by learning from demonstrations, yet an estimated reward function cannot be transferred to other dynamics because of its dynamics-dependent indefiniteness. To obtain transferable reward functions, we propose a novel mathematical formulation that resolves this dynamics-dependent indefiniteness by utilizing demonstrations generated under multiple dynamics. We also show that the existing discussion of the indefiniteness of reward functions generalizes from standard RL to maximum entropy RL, which serves as the forward-solver subroutine in common IRL algorithms based on maximum entropy IRL.
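To make the setting concrete, below is a minimal illustrative sketch, not the authors' formulation, of maximum entropy IRL where a single state reward is fit jointly to demonstrations produced under two different dynamics. The chain MDP, slip probabilities, soft value iteration as the MaxEnt RL forward solver, and all hyperparameters are assumptions introduced purely for illustration.

```python
import numpy as np

n_states, n_actions, gamma, horizon = 5, 2, 0.95, 30

def chain_dynamics(slip):
    """Chain MDP (assumed toy example): action 0 moves left, action 1
    moves right; the agent slips in the opposite direction w.p. `slip`."""
    P = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        left, right = max(s - 1, 0), min(s + 1, n_states - 1)
        P[s, 0, left] += 1 - slip
        P[s, 0, right] += slip
        P[s, 1, right] += 1 - slip
        P[s, 1, left] += slip
    return P

def soft_vi(P, r, n_iter=200):
    """Maximum entropy RL forward solver (soft value iteration):
    V(s) = logsumexp_a [ r(s) + gamma * E_{s'}[V(s')] ]."""
    V = np.zeros(n_states)
    for _ in range(n_iter):
        Q = r[:, None] + gamma * P @ V       # Q(s, a), shape (S, A)
        V = np.logaddexp.reduce(Q, axis=1)   # soft maximum over actions
    return np.exp(Q - V[:, None])            # MaxEnt policy pi(a | s)

def visitation(P, pi):
    """Expected state-visitation counts under pi over a finite horizon."""
    d = np.full(n_states, 1.0 / n_states)    # uniform initial distribution
    mu = np.zeros(n_states)
    for _ in range(horizon):
        mu += d
        d = np.einsum('s,sa,sat->t', d, pi, P)
    return mu

# Two dynamics with different slip probabilities; one shared true reward.
dynamics = [chain_dynamics(0.1), chain_dynamics(0.4)]
r_true = np.array([0., 0., 0., 0., 1.])      # goal at the right end
mu_expert = [visitation(P, soft_vi(P, r_true)) for P in dynamics]

# MaxEnt IRL: gradient ascent on a state reward shared across all dynamics.
# The gradient is the expert-minus-learner visitation gap, summed over dynamics.
theta = np.zeros(n_states)
for step in range(300):
    grad = sum(mu_e - visitation(P, soft_vi(P, theta))
               for P, mu_e in zip(dynamics, mu_expert))
    theta += 0.01 * grad

print("recovered reward (up to shaping):", np.round(theta - theta.min(), 2))
```

The design choice this sketch gestures at: a single set of demonstrations constrains the reward only up to a dynamics-dependent ambiguity, whereas requiring one reward to explain demonstrations under several distinct dynamics adds constraints that narrow that ambiguity.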