1:20 PM - 1:40 PM
[1J3-OS-10-01] Convex Optimization Theory and Algorithms for Hyperparameter Optimization: Toward AutoML
Optimization problems in machine learning (ML) often contain several tunable parameters called hyper-parameters, and careful hyper-parameter tuning is indispensable for constructing good models. If we naively solve the optimization problem from scratch for each hyper-parameter candidate, the computational cost can be extremely large. In the field of convex optimization, several techniques exist for analyzing the relationship between changes in hyper-parameters and the resulting changes in optimal solutions, and these techniques can be exploited for efficient hyper-parameter tuning. However, most current state-of-the-art ML methods, including deep neural networks (DNNs), are formulated as non-convex optimization problems, so these convex-optimization techniques cannot be applied directly. In this talk, we first present theories and algorithms for hyper-parameter tuning in the convex optimization setting, and then discuss how these techniques can be applied to non-convex optimization problems such as DNNs.
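One standard way to exploit the relationship between nearby hyper-parameter values in convex optimization is warm-starting: the optimum for one hyper-parameter is reused as the initial point when solving for the next. The sketch below illustrates this idea on ridge regression solved by gradient descent; the data, hyper-parameter grid, and solver details are illustrative assumptions, not content from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)


def solve_ridge(X, y, lam, w0, tol=1e-8, max_iter=10000):
    """Gradient descent on (1/2n)||Xw - y||^2 + (lam/2)||w||^2,
    started from w0. Returns the solution and the iteration count."""
    n = X.shape[0]
    # Step size 1/L, where L bounds the Lipschitz constant of the gradient.
    L = np.linalg.norm(X, 2) ** 2 / n + lam
    w = w0.copy()
    for it in range(max_iter):
        grad = X.T @ (X @ w - y) / n + lam * w
        if np.linalg.norm(grad) < tol:
            break
        w -= grad / L
    return w, it


# A decreasing grid of regularization hyper-parameters (illustrative).
lams = [10.0, 1.0, 0.1, 0.01]

# Cold start: solve every problem independently from zero.
cold_iters = sum(solve_ridge(X, y, lam, np.zeros(5))[1] for lam in lams)

# Warm start: reuse the previous optimum as the next initial point.
w = np.zeros(5)
warm_iters = 0
for lam in lams:
    w, it = solve_ridge(X, y, lam, w)
    warm_iters += it

# Warm starts begin closer to each optimum, so fewer iterations are needed.
print(cold_iters, warm_iters)
```

Because consecutive optima on the hyper-parameter path are close to each other, warm-started solves typically converge in far fewer iterations than solving each problem from scratch; regularization-path algorithms push this idea further by tracking the solution as a function of the hyper-parameter.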