11:20 AM - 11:40 AM
[4G2-GS-2k-02] Semi-supervised Global Representation Learning of Trajectory for Matching Vision and Language in Navigation Task
Keywords: Imitation learning, Language instruction following, Deep learning
Constructing agents that can understand natural language instructions is useful, for example, for developing robots that can do household chores. However, creating agents that adapt to various language instructions and environments via imitation learning requires a huge amount of paired data composed of <trajectory, language instruction> pairs. To tackle this problem, existing research trains a speaker model that generates language instructions from trajectories and uses it to annotate unannotated trajectories with artificially generated instructions. In this paper, to facilitate learning of the speaker model, we propose extracting a latent representation from a trajectory with semi-supervised representation learning, using paired data together with additional trajectory-only data. Specifically, we constrain the latent representation to capture only language-relevant information by exploiting the structure that a language instruction corresponds to a global representation of the trajectory. In experiments, we evaluate the proposed method in the BabyAI environment and show that the representation extracted from the trajectory by the proposed method captures information about the language.
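The idea of a "global representation of a trajectory" can be illustrated with a minimal sketch. This is not the paper's actual architecture; all dimensions, the mean-pooling choice, and the projection weights below are assumptions for illustration. It only shows how variable-length trajectories can be mapped to fixed-size latent vectors that a speaker model could condition on when generating an instruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): per-step state features
# and the size of the global latent.
STATE_DIM, LATENT_DIM = 8, 4


def encode_trajectory(trajectory, W):
    """Map a whole trajectory to one global latent vector.

    Mean-pooling over time discards step-level ordering detail and keeps
    trajectory-level information -- the kind of global representation a
    language instruction is assumed to correspond to.
    """
    pooled = trajectory.mean(axis=0)   # (STATE_DIM,)
    return np.tanh(W @ pooled)         # (LATENT_DIM,)


# Illustrative (untrained) projection weights.
W = rng.normal(size=(LATENT_DIM, STATE_DIM))

# Trajectories of different lengths map to latents of the same size,
# so a speaker model can condition on them uniformly.
traj_short = rng.normal(size=(5, STATE_DIM))
traj_long = rng.normal(size=(12, STATE_DIM))
z_short = encode_trajectory(traj_short, W)
z_long = encode_trajectory(traj_long, W)
```

In a semi-supervised setup of the kind the abstract describes, such an encoder could be trained jointly on paired data (latent decoded into the instruction) and on trajectory-only data (latent used for an auxiliary objective), so that the latent retains only language-relevant information.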