10:20 AM - 10:40 AM
[2M1-OS-19a-05] Representation Learning with Recursive Self-supervised learning
Keywords: Self-supervised Learning, Representation Learning
Conventional deep learning methods make a priori assumptions about the model structure for the input and supervisory data, and consequently many individually optimized models have been proposed for each task. In recent years, self-supervised learning, in which a model is trained from the input data alone, without labels, to obtain a generic representation, has been actively studied. However, many of these methods still rely on a pre-defined model structure. We propose a framework of recursive self-supervised learning. The proposed method iteratively and recursively predicts the mid-layer features produced by a network trained with self-supervised learning, and thereby stacks feature extraction layers in a bottom-up manner, producing higher-order features that integrate the input. Experiments validated the effectiveness of the proposed method qualitatively, by visualizing the weights of the feature extraction layers, and quantitatively, by the linear classification accuracy of the mid-layer features. The method was effective when the self-supervised learning task was properly set up. This study suggests that the proposed method can construct an appropriate structure depending on the input.
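The abstract does not give implementation details, so the following is only a minimal sketch of the bottom-up stacking idea in PyTorch, not the authors' method: each new feature-extraction layer is trained with a self-supervised objective on the frozen features of the layers trained so far, then frozen and appended to the stack. The denoising-reconstruction objective, the layer sizes, and helper names such as `train_next_layer` and `build_stack` are assumptions made for illustration.

```python
# Hypothetical sketch of recursive, bottom-up self-supervised stacking.
# The denoising-autoencoder objective is an assumption, not the paper's
# exact self-supervised task.
import torch
import torch.nn as nn


def train_next_layer(frozen_stack, in_dim, out_dim, data_loader,
                     epochs=10, noise_std=0.1, lr=1e-3, device="cpu"):
    """Train one new feature-extraction layer on top of a frozen stack.

    The new layer encodes noisy features from the frozen stack so that a
    throwaway decoder can reconstruct the clean features (self-supervised:
    no labels are used).
    """
    encoder = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU()).to(device)
    decoder = nn.Linear(out_dim, in_dim).to(device)  # discarded after training
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=lr)

    for _ in range(epochs):
        for x, _ in data_loader:                  # labels are ignored
            x = x.view(x.size(0), -1).to(device)
            with torch.no_grad():                 # lower layers stay fixed
                feats = frozen_stack(x)
            noisy = feats + noise_std * torch.randn_like(feats)
            recon = decoder(encoder(noisy))
            loss = nn.functional.mse_loss(recon, feats)
            opt.zero_grad()
            loss.backward()
            opt.step()

    for p in encoder.parameters():                # freeze before stacking
        p.requires_grad_(False)
    return encoder


def build_stack(data_loader, input_dim, layer_dims, device="cpu"):
    """Recursively stack layers: each new layer learns from the features
    produced by the previously trained (frozen) layers."""
    layers = []
    in_dim = input_dim
    for out_dim in layer_dims:
        frozen_stack = nn.Sequential(*layers)     # empty Sequential = identity
        new_layer = train_next_layer(frozen_stack, in_dim, out_dim,
                                     data_loader, device=device)
        layers.append(new_layer)
        in_dim = out_dim
    return nn.Sequential(*layers)
```

Under these assumptions, the features produced by any prefix of the returned stack could then be fed to a linear classifier, mirroring the linear-probe evaluation of mid-layer features described in the abstract.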