11:00 AM - 11:20 AM
[4H2-GS-11c-01] Explainability of autonomous agents by important state extraction: Extension to continuous state spaces
Keywords: Explainability, Autonomous Agents, Reinforcement Learning
In order for an autonomous agent to become a partner to humans, it is important not only to improve the performance of its decision-making algorithm but also for users to trust the agent's decisions. To this end, we previously proposed a framework in which an autonomous agent acting in a Markov decision process (MDP) briefly explains its transition from the current state to a target state by presenting important scenes to the user. However, that framework assumes a discrete state space and cannot be applied directly to continuous ones. In this study, we extend the explanation method to continuous state spaces. In the proposed method, the continuous state space is first clustered, and the clusters that must be passed through to reach the target state are identified. Within each such cluster, the state the agent passes through on its way to the target state is presented as an important scene. To evaluate the explanation-generation ability of the proposed method, we conducted an experiment in a simulation environment in which the agent must pass through multiple states to obtain a reward.
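The abstract does not give implementation details; the following is a minimal sketch of the clustering-based extension it describes, assuming k-means over states visited by policy rollouts and taking the first state observed in each traversed cluster as its "important scene." The function name `extract_important_scenes` and all parameters are illustrative, not from the paper.

```python
# Minimal sketch (assumptions labeled): cluster a continuous state space,
# find the clusters traversed on the way to the target, and pick one
# representative state per cluster as an "important scene".
import numpy as np
from sklearn.cluster import KMeans


def extract_important_scenes(trajectories, target_state, n_clusters=8):
    """Return one representative state per cluster on the path to the target.

    trajectories : list of np.ndarray, each of shape (T_i, state_dim),
                   collected by rolling out the trained policy (assumption).
    target_state : np.ndarray of shape (state_dim,).
    """
    # 1. Cluster the continuous state space using all visited states.
    all_states = np.vstack(trajectories)
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(all_states)

    # 2. Identify the clusters that trajectories pass through before
    #    reaching the cluster containing the target state.
    target_cluster = kmeans.predict(target_state.reshape(1, -1))[0]
    scenes = {}
    for traj in trajectories:
        labels = kmeans.predict(traj)
        if target_cluster not in labels:
            continue  # this rollout never reached the target's cluster
        cut = int(np.argmax(labels == target_cluster)) + 1  # prefix up to target cluster
        for cluster_id, state in zip(labels[:cut], traj[:cut]):
            # 3. Record the first state seen in each traversed cluster
            #    as that cluster's "important scene" (assumption).
            scenes.setdefault(cluster_id, state)

    # Present scenes ordered by their cluster center's distance to the target.
    dist = np.linalg.norm(kmeans.cluster_centers_ - target_state, axis=1)
    return [scenes[c] for c in sorted(scenes, key=lambda c: -dist[c])]
```

One design choice in this sketch is ordering the presented scenes from the cluster farthest from the target to the closest, so the explanation reads as a progression toward the target state; the paper may order scenes differently, for example by time step along the trajectory.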