JSAI2020

Presentation information

General Session

General Session » J-2 Machine learning

[2J5-GS-2] Machine learning: Advances in reinforcement learning (1)

Wed. Jun 10, 2020 3:50 PM - 5:30 PM Room J (jsai2020online-10)

Chair: Eiji Uchibe (ATR)

3:50 PM - 4:10 PM

[2J5-GS-2-01] Robustness of Unsupervised Representation Learning on Continual Pre-Training for Image Classification

〇Hikaru Nakata1,2, Rio Yokota1,2 (1. Tokyo Institute of Technology, 2. AIST- Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory (RWBC-OIL))

Keywords:Continual Learning, Representation Learning, Catastrophic Forgetting

In the real world, it is necessary to build models that continuously learn from massive amounts of data drawn from a non-stationary distribution, retain knowledge from past data, and use that knowledge for future tasks. However, continuously trained models suffer from catastrophic forgetting, which reduces accuracy on data learned in the past. Catastrophic forgetting is a long-standing issue in deep learning and machine learning.
In this work, we focus on unsupervised representation learning and aim to investigate its robustness against catastrophic forgetting. We compared unsupervised representation learning with supervised learning and knowledge distillation on a continual representation learning task, using image classification datasets divided into several class subsets.
This investigation found that unsupervised representation learning mitigates catastrophic forgetting better than supervised learning and knowledge distillation.
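The class-incremental protocol described above can be sketched roughly as follows. This is an illustrative assumption, not the authors' actual setup: the function names, the number of tasks, and the forgetting measure (peak accuracy on a task minus accuracy after training on later tasks) are hypothetical choices for the sketch.

```python
# Hypothetical sketch of a class-incremental continual learning split:
# the classes of an image classification dataset are partitioned into
# disjoint subsets that arrive as sequential tasks. All names and task
# counts here are illustrative assumptions, not taken from the paper.

def split_classes_into_tasks(num_classes, num_tasks):
    """Partition class indices 0..num_classes-1 into sequential tasks."""
    if num_classes % num_tasks != 0:
        raise ValueError("num_classes must divide evenly into num_tasks")
    per_task = num_classes // num_tasks
    return [list(range(t * per_task, (t + 1) * per_task))
            for t in range(num_tasks)]

def forgetting(acc_after_task, acc_final):
    """One common measure of catastrophic forgetting on an old task:
    accuracy right after learning that task minus accuracy after
    continuing to train on later tasks."""
    return acc_after_task - acc_final

# e.g. a 10-class dataset (CIFAR-10-like) split into 5 two-class tasks
tasks = split_classes_into_tasks(10, 5)
```

Under such a split, the model (pre-trained with an unsupervised, supervised, or distillation objective) is trained on each task in turn, and the drop in accuracy on earlier tasks quantifies forgetting.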
