4:50 PM - 5:10 PM
[2D5-OS-18b-04] Learning to acquire integrated representations of language, environment, and action: Understanding unknown linguistic commands by retrofitted word embeddings
Keywords: Human-Robot Interaction, representation acquisition, multimodal semantic representation, understanding unknown commands, symbol grounding
We propose a novel neural network model that acquires integrated representations of robotic actions and their linguistic descriptions, including unheard commands. Properly responding to unheard commands in a living environment is a crucial ability for robots. Existing methods have enabled robots to respond to unheard commands; however, the commands contained few words with similar usage, such as “fast” and “slowly”. In this paper, we extend the bidirectional translation model of actions and descriptions proposed by Yamada et al. (2018). We append nonlinear layers that retrofit the description network with pre-trained word embeddings. The proposed model is trained to translate bidirectionally between robotic actions and their descriptions. After training, the proposed model could estimate appropriate integrated representations of unheard commands and translate the actions and descriptions bidirectionally. Visualization of the integrated representations shows that they are categorized according to word meaning.
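The abstract describes the retrofitting idea only at a high level. As a rough, hypothetical sketch of how nonlinear retrofit layers might connect frozen pre-trained word embeddings to a description network (all class names, layer sizes, and the GRU choice below are illustrative assumptions, not the authors' implementation):

```python
# Minimal sketch (assumption): nonlinear layers map frozen pre-trained word
# embeddings into the description network's internal representation space,
# so even an unheard word (which still has a pre-trained vector) can be
# encoded into the shared action-description space.
import torch
import torch.nn as nn

class RetrofitLayer(nn.Module):
    """Nonlinear map from the pre-trained embedding space into the
    description network's word-representation space (illustrative)."""
    def __init__(self, pretrained_dim: int, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pretrained_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, pretrained_vecs: torch.Tensor) -> torch.Tensor:
        return self.net(pretrained_vecs)

class DescriptionEncoder(nn.Module):
    """RNN encoder over retrofitted word vectors; its final hidden state
    stands in for the integrated representation."""
    def __init__(self, pretrained_emb: torch.Tensor, hidden_dim: int):
        super().__init__()
        # Frozen pre-trained embeddings (e.g. word2vec/GloVe rows).
        self.emb = nn.Embedding.from_pretrained(pretrained_emb, freeze=True)
        self.retrofit = RetrofitLayer(pretrained_emb.size(1), hidden_dim)
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.retrofit(self.emb(token_ids))   # (batch, time, hidden)
        _, h = self.rnn(x)                       # h: (1, batch, hidden)
        return h.squeeze(0)                      # integrated representation

# Toy usage: a 10-word vocabulary with 300-d pre-trained vectors.
vocab_size, pretrained_dim, hidden_dim = 10, 300, 64
pretrained = torch.randn(vocab_size, pretrained_dim)
encoder = DescriptionEncoder(pretrained, hidden_dim)
z = encoder(torch.tensor([[1, 4, 7]]))           # one 3-word command
print(z.shape)                                   # torch.Size([1, 64])
```

In this reading, an unheard command is handled because its pre-trained vector lies near those of semantically similar trained words, so the retrofit layers place it close to them in the shared space; the bidirectional training of actions and descriptions described in the abstract would then supply the losses on top of this encoder.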