JSAI2020

Presentation information

Interactive Session

[3Rin4] Interactive 1

Thu. Jun 11, 2020 1:40 PM - 3:20 PM Room R01 (jsai2020online-2-33)

[3Rin4-62] Global Self-localization based on Classification and Semantic segmentation of Omni-directional Images

〇Sio Ryu1, Yuki Murata2, Masayasu Atsumi2 (1.Soka University, 2.Soka University Graduate School of Engineering)

Keywords:Self-localization, Semantic segmentation, Image classification

In order for a robot to move autonomously, it is necessary to estimate its location based on recognition of the surrounding environment.
In this study, we propose a deep neural network model for global self-localization based on omnidirectional image classification and semantic segmentation.
This model consists of the “spatial category estimation module”, which is based on image classification, the “surrounding region distribution estimation module”, which is based on semantic segmentation, and the “global location analysis module”, which performs global self-localization from the results of spatial category estimation and surrounding region distribution estimation.
The accuracy of image classification and semantic segmentation was evaluated by experiments using an omnidirectional image dataset captured by a THETA-V camera.
In addition, we evaluated global location estimation in an experiment in which subjects were asked to plot locations on a map from the global location descriptions generated by the “global location analysis module”.
From these results, we confirmed that the proposed model achieves global self-localization when there are enough region labels to identify locations.
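The three-module pipeline described above can be sketched as follows. This is a minimal illustrative sketch only: all function names, the stub outputs, and the dictionary representation of the region distribution are assumptions for clarity, not the authors' implementation.

```python
# Hypothetical sketch of the proposed pipeline: spatial category
# estimation + surrounding region distribution estimation feed a
# global location analysis step. All names and stub values are
# illustrative assumptions.

def estimate_spatial_category(image):
    # Spatial category estimation module: classify the omnidirectional
    # image into a coarse place category (stub classifier).
    return "corridor"

def estimate_region_distribution(image):
    # Surrounding region distribution estimation module: semantic
    # segmentation summarized as label -> area fraction (stub).
    return {"wall": 0.5, "door": 0.2, "floor": 0.3}

def analyze_global_location(category, region_dist):
    # Global location analysis module: combine the spatial category and
    # the dominant surrounding region label into a textual global
    # location description.
    dominant = max(region_dist, key=region_dist.get)
    return f"a {category} whose surroundings are mostly {dominant}"

def global_self_localization(image):
    # End-to-end global self-localization from one omnidirectional image.
    category = estimate_spatial_category(image)
    regions = estimate_region_distribution(image)
    return analyze_global_location(category, regions)
```

In the reported experiment, descriptions produced by the analysis step were given to subjects, who plotted the corresponding location on a map.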
