12:40 PM - 1:00 PM
[4G2-GS-7-03] Autonomous adjustment of exploration in weakly supervised reinforcement learning
Keywords: reinforcement learning, satisficing, autonomy
Optimization in vast search spaces may be intractable, especially in reinforcement learning and in real environments. Humans, on the other hand, seem to balance exploration and exploitation well in many tasks, partly because they satisfice rather than optimize: they stop exploring once a certain aspiration level is satisfied. Takahashi et al. introduced the risk-sensitive satisficing (RS) model, which realizes efficient satisficing in bandit problems. To enable the application of RS to general reinforcement learning tasks, the global reference conversion (GRC) was introduced. GRC allocates local aspiration levels to individual states from the global aspiration level, based on the difference between the global goal and the returns actually obtained. However, its performance depends sensitively on a scale parameter. In this paper, we propose a new algorithm that autonomously adjusts this allocation and evaluates the current degree of satisfaction accurately.
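As a concrete illustration of the satisficing mechanism described in the abstract, below is a minimal Python sketch of RS-style action selection for a K-armed bandit. It assumes the standard RS form RS(a) = (n_a / N) * (E(a) - aleph) from the RS literature, where aleph is the aspiration level; the class name, tie-breaking, and incremental update rule are illustrative assumptions, not details taken from this paper.

import random

class RSBandit:
    """Sketch of risk-sensitive satisficing (RS) selection for a K-armed bandit.

    Assumes RS(a) = (n_a / N) * (E(a) - aleph); names here are illustrative.
    """

    def __init__(self, n_arms, aleph):
        self.aleph = aleph            # aspiration level
        self.counts = [0] * n_arms    # n_a: pulls per arm
        self.values = [0.0] * n_arms  # E(a): sample-mean reward per arm

    def select(self):
        total = sum(self.counts)
        if total == 0:
            return random.randrange(len(self.counts))
        # Give each untried arm one forced pull so E(a) is defined.
        for a, n in enumerate(self.counts):
            if n == 0:
                return a
        # RS(a) = (n_a / N) * (E(a) - aleph): while every E(a) < aleph,
        # rarely tried arms score higher (less negative), driving
        # exploration; once some E(a) >= aleph, the frequently tried
        # satisfying arm wins and exploration stops.
        rs = [(n / total) * (e - self.aleph)
              for n, e in zip(self.counts, self.values)]
        return max(range(len(rs)), key=rs.__getitem__)

    def update(self, arm, reward):
        # Incremental sample-mean update of E(a).
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

With all estimated values below aleph, the RS scores are negative and less-tried arms score higher, which drives exploration; once an arm's estimate reaches aleph, the frequently tried satisfying arm dominates, which is the stop-exploring-when-satisfied behavior the abstract describes.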