Keywords: Representation Learning, Reinforcement Learning, Game AI
In general decision-making tasks, the set of available actions can expand indefinitely as the environment changes or as the agent discovers new actions. When the number of options grows, the agent must autonomously acquire an abstract representation of actions. Here we propose a learning framework that solves this issue. In the proposed method, the value function is approximated with embedded behavioral representations, which generalize across actions, learned from state-transition trajectories. We confirmed the effectiveness of the framework using the mobile game "Gyakuten Othellonia". This game is a mixture of a board game and a trading card game, and new characters are added to the environment frequently, making it a good testbed for an expandable action space. Finally, we show that, with the proposed framework, an agent can learn character representations and utilize them to learn optimal strategies in the game.
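The core idea of approximating a value function over action embeddings, so that newly added actions need only a new embedding rather than a new output head, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear model, the dimensions, and all names (`action_embeddings`, `q_value`) are hypothetical placeholders, and the random vectors stand in for embeddings that would actually be trained from state-transition trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, EMBED_DIM = 8, 4

# Hypothetical embedding table: one vector per known action.
# In the proposed framework these would be learned from
# state-transition trajectories; here they are random stand-ins.
action_embeddings = {a: rng.normal(size=EMBED_DIM) for a in ["a0", "a1", "a2"]}

# Toy linear value model: Q(s, a) = [state features ; action embedding] . w
w = rng.normal(size=STATE_DIM + EMBED_DIM)

def q_value(state, action):
    """Score a (state, action) pair through the action's embedding."""
    return float(np.concatenate([state, action_embeddings[action]]) @ w)

state = rng.normal(size=STATE_DIM)

# A newly introduced action (e.g. a new character) only needs an
# embedding; the value model itself is unchanged.
action_embeddings["a_new"] = rng.normal(size=EMBED_DIM)
best = max(action_embeddings, key=lambda a: q_value(state, a))
```

The point of the design is that the action set is open: greedy action selection iterates over whatever embeddings currently exist, so expanding the action space never changes the model's architecture.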