5:00 PM - 5:20 PM
[1B4-OS-41b-05] On the Robustness of Object-Centric Representations for Model-Based Reinforcement Learning
Keywords: World models, Object-centric learning, Robustness
Model-based reinforcement learning (RL) is a promising approach to learning to control agents in a sample-efficient manner, but it often struggles to generalize beyond the tasks it was trained on. While previous work has explored using pretrained visual representations (PVR) to improve generalization, these approaches have not outperformed representations learned from scratch in out-of-distribution (OOD) settings. In this work, we propose to incorporate object-centric representations, which have demonstrated strong OOD generalization by learning compositional representations, into model-based RL with PVR. We investigate whether this object-centric inductive bias improves both sample efficiency and task performance across in-distribution and OOD environments.
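To make the core idea concrete, the sketch below illustrates one way object-centric representations could be combined with a pretrained visual encoder and a latent dynamics model: a slot-attention-style module binds frozen PVR features to a fixed set of slots, and a per-slot transition network rolls the slots forward given an action. This is only a minimal illustration under assumed module names (SlotAttention, SlotDynamics) and shapes, not the architecture used in the paper.

```python
# Minimal sketch (not the authors' architecture): object slots extracted from a
# frozen pretrained visual representation (PVR) feed a simple latent dynamics model.
import torch
import torch.nn as nn


class SlotAttention(nn.Module):
    """Simplified Slot Attention: iteratively binds a fixed number of slots
    to input features via attention normalized over the slot axis."""
    def __init__(self, num_slots: int, dim: int, iters: int = 3):
        super().__init__()
        self.num_slots, self.iters = num_slots, iters
        self.slots_mu = nn.Parameter(torch.randn(1, num_slots, dim))
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)
        self.scale = dim ** -0.5

    def forward(self, feats):                        # feats: (B, N, D)
        B = feats.shape[0]
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        slots = self.slots_mu.expand(B, -1, -1)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            attn = torch.softmax(self.scale * q @ k.transpose(1, 2), dim=1)  # compete over slots
            attn = attn / attn.sum(dim=-1, keepdim=True)                     # weighted mean over inputs
            updates = attn @ v                                               # (B, K, D)
            slots = self.gru(
                updates.reshape(-1, updates.shape[-1]),
                slots.reshape(-1, slots.shape[-1]),
            ).view(B, self.num_slots, -1)
        return slots                                                         # (B, K, D)


class SlotDynamics(nn.Module):
    """Per-slot transition model: predicts next-step slots from current slots
    and the action, standing in for a world-model latent dynamics component."""
    def __init__(self, slot_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(slot_dim + action_dim, 2 * slot_dim), nn.ReLU(),
            nn.Linear(2 * slot_dim, slot_dim),
        )

    def forward(self, slots, action):                # slots: (B, K, D), action: (B, A)
        a = action.unsqueeze(1).expand(-1, slots.shape[1], -1)
        return slots + self.net(torch.cat([slots, a], dim=-1))  # residual slot update


if __name__ == "__main__":
    B, N, D, K, A = 4, 64, 32, 5, 6                  # batch, feature tokens, dim, slots, action dim
    pvr_feats = torch.randn(B, N, D)                 # stands in for a frozen PVR feature map
    slots = SlotAttention(num_slots=K, dim=D)(pvr_feats)
    next_slots = SlotDynamics(slot_dim=D, action_dim=A)(slots, torch.randn(B, A))
    print(slots.shape, next_slots.shape)             # (4, 5, 32) (4, 5, 32)
```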