JSAI2024

Presentation information

Organized Session


[3O1-OS-16b] OS-16

Thu. May 30, 2024 9:00 AM - 10:40 AM Room O (Music studio hall)

Organizers: Masahiro Suzuki (The University of Tokyo), Yusuke Iwasawa (The University of Tokyo), Makoto Kawano (The University of Tokyo), Wataru Kumagai (The University of Tokyo), Tatsuya Matsushima (The University of Tokyo), Yusuke Mori (SQUARE ENIX CO., LTD.), Yutaka Matsuo (The University of Tokyo)

9:00 AM - 9:20 AM

[3O1-OS-16b-01] The Embodied World Model Based on LLM with Visual Information and Prediction-Oriented Prompts

〇Wakana Haijima1, Kou Nakakubo2, Kakeru Hirayama3, Masahiro Suzuki4, Yutaka Matsuo4 (1. University of York, 2. Kyushu Institute of Technology, 3. The University of Tokyo, 4. Graduate School of Engineering, The University of Tokyo)

[Online]

Keywords: World Model, LLM, Embodied AI, Visual Data, Prompting

In recent years, as machine learning, particularly for vision and language understanding, has improved, research in embodied AI has also evolved. VOYAGER is a well-known LLM-based embodied AI agent that explores the Minecraft world autonomously, but it has issues such as underutilization of visual data and an unclear function as a world model. In this research, the use of visual data and the function of the LLM as a world model were investigated with the aim of improving the performance of embodied AI. The experimental results revealed that the LLM can extract necessary information from visual data and that utilizing this information improves its performance as a world model. The results also suggested that carefully devised prompts can bring out the LLM's function as a world model.
