JSAI2024

Presentation information

Organized Session

[3O1-OS-16b] OS-16

Thu. May 30, 2024 9:00 AM - 10:40 AM Room O (Music studio hall)

Organizers: Masahiro Suzuki (The University of Tokyo), Yusuke Iwasawa (The University of Tokyo), Makoto Kawano (The University of Tokyo), Wataru Kumagai (The University of Tokyo), Tatsuya Matsushima (The University of Tokyo), Yusuke Mori (SQUARE ENIX CO., LTD.), Yutaka Matsuo (The University of Tokyo)

9:40 AM - 10:00 AM

[3O1-OS-16b-03] Improving Accuracy of Flexible Object Manipulation through Depth-Aware Motion Generation Model Handling Multimodal Information

〇Sachiya Fujita1, Hiroshi Ito1, Hideyuki Ichiwara1, Namiko Saito1, Ayuna Kubo1, Tetsuya Ogata1, Shigeki Sugano1 (1. Waseda University)

Keywords: Deep Predictive Learning, Real-time Motion Generation

In this study, we achieve recognition and motion planning for flexible objects that take depth information into account. We propose a motion generation model that incorporates a module expressing disparity as the difference in the positions of attention points between the left and right stereo images into a model that suppresses learning bias across modalities, and we add tactile information to resolve occlusion and improve motion accuracy. To validate the effectiveness of the proposed approach, we adopt the task of hanging a suit on a hanger: because the depth positions of the suit's shape and of the hem to be grasped keep changing, accurate depth-aware motion generation is crucial. We conducted experiments with the dual-arm, multi-degree-of-freedom robot Dry-AIREC, comparing several model variants (monocular/stereo vision, with/without tactile sensing), and confirmed that tactile and disparity information contribute to depth understanding and to improved motion accuracy.
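To make the mechanism described above concrete, the following is a minimal sketch, assuming a PyTorch-style deep predictive learning model: a shared encoder extracts spatial-attention points from each stereo image, the horizontal difference between the left and right point positions serves as a disparity feature, and this is fused with tactile and joint-state inputs in a recurrent model that predicts the next joint state. All module names, dimensions, and the fusion scheme here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def spatial_softmax_points(feat):
    """Return (B, C, 2) expected (x, y) attention points, one per channel."""
    b, c, h, w = feat.shape
    probs = F.softmax(feat.view(b, c, -1), dim=-1).view(b, c, h, w)
    xs = torch.linspace(-1.0, 1.0, w, device=feat.device)
    ys = torch.linspace(-1.0, 1.0, h, device=feat.device)
    px = (probs.sum(dim=2) * xs).sum(dim=-1)  # (B, C) expected x position
    py = (probs.sum(dim=3) * ys).sum(dim=-1)  # (B, C) expected y position
    return torch.stack([px, py], dim=-1)

class StereoTactileMotionModel(nn.Module):
    # Hypothetical architecture: stereo attention points + disparity
    # feature + tactile/joint fusion + LSTM motion prediction.
    def __init__(self, n_points=8, tactile_dim=16, joint_dim=14, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(  # shared weights for both cameras
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, n_points, 5, stride=2, padding=2),
        )
        in_dim = n_points * 2 * 2 + n_points + tactile_dim + joint_dim
        self.rnn = nn.LSTMCell(in_dim, hidden)
        self.head = nn.Linear(hidden, joint_dim)  # next joint state

    def forward(self, left, right, tactile, joints, state=None):
        pl = spatial_softmax_points(self.encoder(left))   # (B, K, 2)
        pr = spatial_softmax_points(self.encoder(right))  # (B, K, 2)
        # Disparity expressed as the horizontal shift of each attention
        # point between the left and right images.
        disparity = pl[..., 0] - pr[..., 0]               # (B, K)
        x = torch.cat([pl.flatten(1), pr.flatten(1),
                       disparity, tactile, joints], dim=-1)
        h, c = self.rnn(x, state)
        return self.head(h), (h, c)

# Usage: one closed-loop prediction step with dummy sensor inputs.
model = StereoTactileMotionModel()
left = torch.randn(1, 3, 64, 64)
right = torch.randn(1, 3, 64, 64)
tactile = torch.randn(1, 16)
joints = torch.randn(1, 14)
next_joints, state = model(left, right, tactile, joints)

In a real-time setting such a model would be stepped at each control cycle, feeding the predicted joint state to the robot and carrying the recurrent state forward; the tactile input can keep the prediction grounded when the grasped hem is occluded from one or both cameras.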
