1:50 PM - 2:10 PM
[3J3-OS-3a-02] Hyperparameter Optimization by Multi-objective Bayesian Optimization based on Inference of User Preference
Keywords: AutoML, Neural Network, Hyperparameter Optimization, Multi-objective Bayesian Optimization, Preference Learning
AutoML involves hyperparameter optimization (HPO) of machine learning models. However, there often exist multiple evaluation criteria for the learned models. For example, model accuracy and memory size can both be objective functions, and they are typically in a trade-off relationship. In this case, the relative importance of each objective depends on the user's preference. To incorporate this preference adaptively into HPO, we propose a preference-learning-based multi-objective Bayesian optimization (PL-MBO) method. Since directly specifying an exact preference can be difficult for the user, PL-MBO queries only a 'relative preference', which the user can provide much more easily. By combining a Bayesian user-preference model with standard Gaussian process models of the objective functions, an expected improvement criterion for the user's preference is derived. Our numerical experiments show that the optimal solution with respect to the user's preference can be found efficiently in HPO for neural networks.
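The idea in the abstract can be illustrated with a minimal sketch (this is not the authors' PL-MBO implementation, only a simplified stand-in): the user's unknown preference is modeled as a trade-off weight `w` that scalarizes two objectives, a Bayesian posterior over `w` is updated from pairwise "A is preferred to B" answers, and expected improvement is then computed on a Gaussian process fit to the posterior-mean scalarization. All objective functions, the grid posterior, and the settings below are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def f1(x):  # e.g. validation error (to minimize); toy stand-in
    return (x - 0.3) ** 2

def f2(x):  # e.g. normalized memory size (to minimize); toy stand-in
    return (x - 0.8) ** 2

# --- Bayesian preference model: grid posterior over the trade-off weight w ---
w_grid = np.linspace(0.0, 1.0, 101)
post = np.ones_like(w_grid) / len(w_grid)          # uniform prior over w

def update_preference(xa, xb, a_preferred, noise=0.1):
    """Bradley-Terry-style likelihood for the answer 'xa preferred to xb'."""
    global post
    ua = -(w_grid * f1(xa) + (1 - w_grid) * f2(xa))  # utility = -scalarized cost
    ub = -(w_grid * f1(xb) + (1 - w_grid) * f2(xb))
    p_a = 1.0 / (1.0 + np.exp(-(ua - ub) / noise))
    post = post * (p_a if a_preferred else 1.0 - p_a)
    post = post / post.sum()

# Simulated user with a hidden true weight answers two relative queries.
true_w = 0.7
def user_prefers(xa, xb):
    return (true_w * f1(xa) + (1 - true_w) * f2(xa)) < \
           (true_w * f1(xb) + (1 - true_w) * f2(xb))

for xa, xb in [(0.2, 0.9), (0.4, 0.6)]:
    update_preference(xa, xb, user_prefers(xa, xb))

w_hat = float(np.sum(w_grid * post))               # posterior-mean weight

# --- GP surrogate of the scalarized cost + expected improvement (EI) ---
def scalar_cost(x):
    return w_hat * f1(x) + (1 - w_hat) * f2(x)

X = rng.uniform(0, 1, 5)                           # observed hyperparameters
y = scalar_cost(X)

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

K = rbf(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def ei(xq):
    """Expected improvement for minimization at query points xq."""
    ks = rbf(xq, X)
    mu = ks @ alpha
    var = 1.0 - np.einsum("ij,ij->i", ks, np.linalg.solve(K, ks.T).T)
    sd = np.sqrt(np.maximum(var, 1e-12))
    z = (y.min() - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (y.min() - mu) * Phi + sd * phi

cand = np.linspace(0, 1, 201)
x_next = cand[int(np.argmax(ei(cand)))]            # next hyperparameter to try
print(w_hat, x_next)
```

In the actual method described above, the preference model and the per-objective GPs are combined more carefully than this posterior-mean plug-in; the sketch only shows how relative-preference answers can sharpen a belief over the trade-off before an EI-driven HPO step.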