9:40 AM - 10:00 AM
[3O1-OS-16b-03] Improving Accuracy of Flexible Object Manipulation through Depth-Aware Motion Generation Model Handling Multimodal Information
Keywords: Deep Predictive Learning, Real-time Motion Generation
In this study, we achieve recognition and motion generation for flexible objects that accounts for depth information. We propose a motion generation model that expresses disparity as the difference in the position of the attention point between the left and right stereo images, built on a model that suppresses learning bias across modalities; tactile information is further added to resolve occlusion and improve motion accuracy. To validate the effectiveness of the proposed approach, we adopt the task of hanging a suit on a hanger. Because the depth positions of the suit's shape and of the hem to be grasped change during the task, depth-aware motion generation is crucial for accuracy. We conducted experiments with the dual-armed, multi-degree-of-freedom robot Dry-AIREC, comparing several models (monocular/stereo vision, with/without tactile perception), and confirmed that tactile and disparity information contribute to the understanding of depth and to improved motion accuracy.
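As background for the disparity cue the abstract relies on, the sketch below shows how a horizontal offset between attention-point positions in rectified left/right stereo images maps to depth via standard stereo geometry. This is a minimal illustration, not the authors' model; the focal length and baseline values are hypothetical.

```python
def depth_from_attention(x_left: float, x_right: float,
                         focal_px: float, baseline_m: float) -> float:
    """Depth z = f * B / d, where disparity d = x_left - x_right (pixels).

    x_left, x_right: horizontal pixel coordinates of the same attention
    point in the rectified left and right images.
    focal_px: camera focal length in pixels; baseline_m: stereo baseline in meters.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        # A non-positive disparity means the match is invalid for a
        # standard rectified stereo pair.
        raise ValueError("attention points imply non-positive disparity")
    return focal_px * baseline_m / disparity


# Illustrative values: attention point at x=320 px (left) and x=300 px
# (right), focal length 600 px, baseline 0.1 m -> depth of 3.0 m.
z = depth_from_attention(320.0, 300.0, 600.0, 0.1)
```

Nearer parts of the suit yield larger disparities, which is why the left/right offset of the attention point carries the depth information the model exploits.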