[4Xin1-60] Robust deep reinforcement learning against adversarial attacks and random noise on quadruped actuators
Keywords: Reinforcement Learning, Robot Control, Adversarial Attack
Quadruped robot controllers trained with deep reinforcement learning are vulnerable to adversarial attacks on the torque signals sent to their actuators: deliberately designed perturbations can cause the robot to lose balance and fall over. This study investigates whether adversarial training, a defense technique commonly used in image recognition, can effectively counter such attacks; we additionally evaluate robustness against uniform random noise. Using the Ant and Unitree A1 quadruped robots in the MuJoCo physics simulator, we conducted validation experiments measuring each model's robustness to adversarial attacks in terms of episode reward. Adversarial torque signals were generated with the differential evolution method. The experimental results demonstrate that adversarial training with a certain ratio of adversarial torque signals is effective against both adversarial attacks and random noise.
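The attack described above can be sketched as a bounded perturbation search with differential evolution. The sketch below is illustrative only: it replaces the MuJoCo rollout with a toy surrogate reward, and the actuator count, perturbation budget, and DE hyperparameters (`F`, `CR`, population size) are assumptions, not values from the paper.

```python
import numpy as np

N_ACT = 8    # Ant has 8 torque-controlled joints
EPS = 0.3    # per-actuator perturbation budget (assumed)

rng = np.random.default_rng(0)
nominal = rng.uniform(-1.0, 1.0, N_ACT)  # stand-in for the policy's torque output

def rollout_reward(torque):
    """Toy surrogate for an episode return; the paper uses MuJoCo rollouts."""
    return -np.sum((torque - nominal) ** 2)

def de_attack(pop_size=20, gens=100, F=0.8, CR=0.9):
    """Differential evolution: find the perturbation that minimizes reward."""
    pop = rng.uniform(-EPS, EPS, (pop_size, N_ACT))
    fit = np.array([rollout_reward(nominal + d) for d in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), -EPS, EPS)
            # Binomial crossover between mutant and current individual.
            trial = np.where(rng.random(N_ACT) < CR, mutant, pop[i])
            f = rollout_reward(nominal + trial)
            if f < fit[i]:  # lower reward = stronger attack
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)]

delta = de_attack()
print("reward, clean torque :", rollout_reward(nominal))
print("reward, attacked     :", rollout_reward(nominal + delta))
```

In the paper's setting, `rollout_reward` would be the cumulative reward of a full simulated episode with the perturbed torques applied, and the same perturbed signals could then be mixed into training at a chosen ratio for adversarial training.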