2:20 PM - 2:40 PM
[4I3-GS-2-02] Improved Meta-learning by Parameter Adjustment via Latent Variables and Probabilistic Inference
Keywords: Meta-Learning, Image Recognition, Deep Learning
Standard deep neural networks require large amounts of training data and fail to achieve good performance in the small-data regime. To overcome this limitation, meta-learning approaches have recently been explored. The goal of meta-learning is to empower models to automatically acquire across-task knowledge, usually referred to as meta-knowledge, so that task-specific knowledge for new tasks can be obtained from only a few examples. Among these methods, Model-Agnostic Meta-Learning (MAML) is one of the strongest approaches, showing high performance in many settings. However, MAML does not account for the varying effectiveness of meta-knowledge across tasks, since its inner-loop learning rate is held constant for all tasks. In this paper, we propose a model that adjusts the learning rate for each task by introducing latent variables and applying probabilistic inference. We demonstrate that this approach improves the performance of MAML on a few-shot image classification benchmark, and confirm that the learning rate is adaptively adjusted by visualizing the latent variables.
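The abstract does not specify the model's architecture, so the following is only a minimal sketch of the general idea: an amortized inference network maps a task's support set to a latent variable, which is decoded into a task-specific inner-loop learning rate for the MAML update. All names (LearningRateInference, inner_update, alpha) and design choices (Gaussian latent, softplus output) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearningRateInference(nn.Module):
    """Hypothetical amortized inference network: encodes a task's support
    set into a Gaussian latent variable z and decodes z into a positive,
    task-specific inner-loop learning rate alpha."""
    def __init__(self, feature_dim, latent_dim=8):
        super().__init__()
        self.encoder = nn.Linear(feature_dim, 2 * latent_dim)
        self.to_alpha = nn.Linear(latent_dim, 1)

    def forward(self, support_features):
        # Aggregate the support set, infer q(z | task), and sample z with
        # the reparameterization trick so gradients reach the encoder.
        h = self.encoder(support_features.mean(dim=0))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return F.softplus(self.to_alpha(z))  # keep the step size positive

def inner_update(model, support_loss, alpha):
    """One MAML inner gradient step with a per-task step size alpha
    instead of a constant shared across tasks. create_graph=True keeps
    the step differentiable for the second-order outer update."""
    grads = torch.autograd.grad(support_loss, model.parameters(),
                                create_graph=True)
    return [p - alpha * g for p, g in zip(model.parameters(), grads)]
```

In a full training loop, the adapted parameters returned by inner_update would be applied functionally to compute the query-set loss, and the outer meta-update would backpropagate through both the base model's initialization and the inference network; a probabilistic treatment would typically also add a KL regularizer on z, though the abstract does not state the exact objective.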