JSAI2023

Organized Session OS-7

[1Q4-OS-7b] Prospects for Integrated AI

Tue. Jun 6, 2023 3:00 PM - 4:40 PM Room Q (601)

Organizers: 栗原 聡, 山川 宏, 三宅 陽一郎, 谷口 彰, 田和辻 可昌

3:40 PM - 4:00 PM

[1Q4-OS-7b-03] Understanding Language Instructions that Include the Vocabulary of Unobserved Objects by Integrating a Large Language Model and a Spatial Concept Model

〇Shoichi Hasegawa1, Ryosuke Yamaki1, Akira Taniguchi1, Yoshinobu Hagiwara1, Lotfi El Hafi1, Tadahiro Taniguchi1 (1. Ritsumeikan University)

Keywords: Integration of System 1 and System 2, Probabilistic Generative Model, Large Language Model, Spatial Concepts, Service Robot

For a robot to assist people in home environments, it is important to handle the vocabulary of unobserved objects while learning knowledge of places. We assume that some objects are never observed by the robot's sensors during learning, yet the robot is still expected to perform household tasks based on language instructions that include the vocabulary of these objects. We propose a method that integrates a large language model and a spatial concept model so that the robot can understand language instructions containing the vocabulary of unobserved objects while learning places. Even if the objects the user instructs the robot to search for are not included in the training dataset, combining the inference of these two models can be expected to reduce the number of room visits during object search. We validated our method in an experiment in which a robot searched for unobserved objects in a simulated environment. The results showed that the proposed method reduced the number of room visits during the search compared with the baseline method.
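The abstract describes combining the inference of a large language model and a spatial concept model to decide which rooms to search. As a minimal, hypothetical sketch (not the authors' implementation; the function names, weights, rooms, and scores below are illustrative assumptions), one way such an integration could work is to fuse a per-room probability elicited from the LLM with the per-room probability from the learned spatial concept model, and visit rooms in descending order of the fused score:

```python
"""Minimal sketch: fuse an LLM-derived prior over rooms with a spatial
concept model's place probabilities, then visit rooms in order of the
fused score. All names and numbers are hypothetical placeholders."""

import math
from typing import Dict, List


def fuse_room_scores(
    llm_prior: Dict[str, float],
    spatial_concept_prob: Dict[str, float],
    alpha: float = 0.5,
) -> List[str]:
    """Rank rooms by a weighted log-linear combination of the two models.

    llm_prior: probability that the queried object is in each room,
        e.g. elicited by prompting an LLM about typical object locations.
    spatial_concept_prob: probability of each room under the spatial
        concept model learned from the robot's own observations.
    alpha: interpolation weight (0 = spatial concept model only, 1 = LLM only).
    """
    rooms = set(llm_prior) | set(spatial_concept_prob)
    eps = 1e-9  # avoid log(0) for rooms one model has never scored
    fused = {
        room: alpha * math.log(llm_prior.get(room, eps))
        + (1.0 - alpha) * math.log(spatial_concept_prob.get(room, eps))
        for room in rooms
    }
    # Visit the most probable room first to reduce the number of visits.
    return sorted(fused, key=fused.get, reverse=True)


if __name__ == "__main__":
    # Hypothetical scores for an instruction about an object ("toothbrush")
    # that was never observed during place learning.
    llm_prior = {"bathroom": 0.7, "bedroom": 0.2, "kitchen": 0.1}
    spatial_concept_prob = {"bathroom": 0.4, "bedroom": 0.3, "kitchen": 0.3}
    print(fuse_room_scores(llm_prior, spatial_concept_prob))
    # -> ['bathroom', 'bedroom', 'kitchen']
```

Under this reading, the earlier the fused ranking places the room that actually contains the object, the fewer rooms the robot visits, which corresponds to the reduction reported in the experiment.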
