JSAI2019

Presentation information


International Session » [ES] E-2 Machine learning

[2H4-E-2] Machine learning: fusion of models

Wed. Jun 5, 2019 3:20 PM - 5:00 PM Room H (303+304 Small meeting rooms)

Chair: Naohiro Matsumura (Osaka University)

4:20 PM - 4:40 PM

[2H4-E-2-04] Gradient Descent Optimization by Reinforcement Learning

〇Yingda Zhu1, Teruaki Hayashi1, Yukio Ohsawa1 (1. The University of Tokyo)

Keywords: Deep Neural Network, Gradient Descent, Reinforcement Learning

Gradient descent, which searches for a minimum of a complex, high-dimensional function, is widely used in deep neural networks to minimize the total loss. Representative methods such as stochastic gradient descent (SGD) and Adam (Kingma & Ba, 2014) dominate neural network training today. However, sensitive hyper-parameters such as the learning rate affect the descent speed and even convergence, and in previous work these hyper-parameters were often fixed or set by feedback and experience. We propose using reinforcement learning (RL) to optimize the gradient descent process, taking feedback from the neural network as input and producing hyper-parameter actions as output to control these hyper-parameters. Experiments from both fixed and random starting points show that the RL-based optimizer outperforms standard optimizers configured with default hyper-parameters.
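To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of an RL-style controller for gradient descent: an epsilon-greedy bandit agent picks a learning-rate multiplier at each step, using the resulting loss decrease as its reward. The toy quadratic objective, the action set, the base learning rate, and the bandit value update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the network loss: a 2-D quadratic bowl (illustrative only).
def loss(w):
    return 0.5 * (w[0] ** 2 + 10.0 * w[1] ** 2)

def grad(w):
    return np.array([w[0], 10.0 * w[1]])

# Hypothetical action set: multiplicative adjustments to a base learning rate.
ACTIONS = np.array([0.5, 1.0, 2.0])
BASE_LR = 0.05

def run_episode(w0, q, counts, steps=100, eps=0.1):
    """One descent run; an epsilon-greedy agent picks a learning-rate
    multiplier each step, and the reward is the resulting loss decrease."""
    w = w0.copy()
    for _ in range(steps):
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))   # explore
        else:
            a = int(np.argmax(q))                 # exploit best known action
        lr = BASE_LR * ACTIONS[a]
        prev = loss(w)
        w = w - lr * grad(w)                      # ordinary gradient-descent step
        reward = prev - loss(w)                   # feedback from the optimized function
        counts[a] += 1
        q[a] += (reward - q[a]) / counts[a]       # incremental action-value estimate
    return w

q = np.zeros(len(ACTIONS))
counts = np.zeros(len(ACTIONS))
w = run_episode(np.array([5.0, 3.0]), q, counts)
print("final loss:", loss(w), "learned action values:", q)
```

The abstract's actual method feeds neural-network training feedback into an RL policy; the sketch above replaces both with simple stand-ins to show only the control loop, where the optimizer's hyper-parameter is chosen by a learning agent rather than fixed in advance.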