2:30 PM - 2:50 PM
[2R4-OS-12-04] Probabilistically Certified Evaluation of Machine-Learned Models by Noise-Added Generalization Error Bounds
Keywords: machine-learned model, generalization error bound, probabilistic certification, evaluation indicator
Currently, evaluation indicators computed on datasets, such as accuracy, precision, and recall, are widely used to evaluate machine-learned models represented by deep neural networks, but such indicators cannot guarantee performance on unseen data not included in the datasets. In this presentation, we explain how to use noise-added generalization error bounds as an evaluation indicator that probabilistically guarantees performance (the incorrect rate) even on such unseen data, based on statistical learning theory, and report experimental results demonstrating the effectiveness of the indicator. Here, the generalization error is the expected value of the incorrect rate of a machine-learned model's output over all input data drawn from a probability distribution. In general, the generalization error is difficult to compute exactly because the possible input data are innumerable, but there is a large body of related work on bounds of the generalization error. We apply well-known theorems on training-set-based generalization error bounds, called PAC-Bayesian bounds, as testing-set-based bounds to compute bounds close to the generalization errors.
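As a rough illustration of applying a PAC-Bayesian bound as a testing-set-based bound, the sketch below computes a probabilistic upper bound on the incorrect rate from a held-out test set. It specializes the PAC-Bayes-kl theorem to a point-mass posterior, in which case it coincides with the classical test-set bound kl(r_hat || r) <= ln(2*sqrt(n)/delta)/n; this is an assumption made for illustration only, it does not reproduce the authors' noise-added formulation, and the error counts and confidence level used are hypothetical.

# Minimal sketch (not the authors' exact method): a testing-set-based
# generalization error bound obtained from the PAC-Bayes-kl theorem with a
# point-mass posterior. With probability at least 1 - delta over the draw of
# n held-out samples, kl(r_hat || r) <= ln(2*sqrt(n)/delta)/n, where r_hat is
# the empirical incorrect rate and r is the unknown generalization error.
# Inverting the binary kl divergence yields an upper bound on r.
import math

def binary_kl(p: float, q: float) -> float:
    """kl(p || q) between Bernoulli(p) and Bernoulli(q), clamped for stability."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_inverse_upper(r_hat: float, bound: float, tol: float = 1e-9) -> float:
    """Largest r in [r_hat, 1) satisfying kl(r_hat || r) <= bound, by bisection."""
    lo, hi = r_hat, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_kl(r_hat, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo

def test_set_error_bound(errors: int, n: int, delta: float = 0.05) -> float:
    """Probabilistic upper bound on the incorrect rate from n held-out samples."""
    r_hat = errors / n
    rhs = math.log(2 * math.sqrt(n) / delta) / n
    return kl_inverse_upper(r_hat, rhs)

if __name__ == "__main__":
    # Hypothetical numbers: 120 misclassifications on 10,000 held-out samples,
    # certified at confidence level 95% (delta = 0.05).
    print(f"empirical incorrect rate: {120 / 10000:.4f}")
    print(f"certified upper bound:    {test_set_error_bound(120, 10000, 0.05):.4f}")

The resulting upper bound holds with probability at least 1 - delta over the random draw of the test set, which is the sense in which such an indicator probabilistically certifies performance on unseen data.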