[3Xin2-46] Robustness Evaluation of Offline Reinforcement Learning Methods to Perturbations in Joint Torque Signals
Keywords: Offline Reinforcement Learning, Robot Control, Robustness Evaluation, Adversarial Perturbations
In recent years, offline reinforcement learning, which learns solely from datasets without environmental interaction, has gained attention. Like conventional deep reinforcement learning, this approach is particularly promising for robot control applications. Nevertheless, its robustness against real-world challenges, such as joint actuator faults in robots, remains a critical concern. This study aims to develop an offline reinforcement learning method that is resilient to such failures. As an initial step, we assessed the robustness of existing offline reinforcement learning methods. Using robots from OpenAI Gym, we simulated failures by introducing both random and adversarial perturbations, representing worst-case scenarios, into the joint torque signals. Robustness was evaluated based on episode rewards. Our experiments reveal that existing offline reinforcement learning methods are vulnerable to these perturbations, highlighting the need for more robust approaches in this field.
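The perturbation scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `perturb_torque`, the perturbation bound `epsilon`, and the torque clipping range are all assumptions. The "adversarial" branch here is only a simple sign-based worst-case heuristic; a true adversarial attack would typically use gradients of the learned policy or value function.

```python
import numpy as np

def perturb_torque(action, epsilon, rng, mode="random"):
    """Inject a bounded perturbation into joint torque signals.

    mode='random': uniform noise in [-epsilon, epsilon] per joint.
    mode='adversarial': a placeholder heuristic that pushes each
    torque against its commanded sign (hypothetical; real attacks
    would optimize against the learned agent).
    """
    action = np.asarray(action, dtype=float)
    if mode == "random":
        delta = rng.uniform(-epsilon, epsilon, size=action.shape)
    else:
        delta = -epsilon * np.sign(action)
    # Assumed torque range of [-1, 1], as in many Gym control tasks.
    return np.clip(action + delta, -1.0, 1.0)
```

In an evaluation loop, the perturbed action would replace the agent's chosen action before each environment step, and episode rewards under varying `epsilon` would quantify robustness.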