JSAI2023

Presentation information

Organized Session » OS-12

[2R4-OS-12] Machine Learning Quality Evaluation and Improvement Technologies

Wed. Jun 7, 2023 1:30 PM - 3:10 PM Room R (602)

Organizers: Yoshinao Isobe, Kenichi Kobayashi, Shin Nakajima

2:30 PM - 2:50 PM

[2R4-OS-12-04] Probabilistically Certified Evaluation of Machine-Learned Models by Noise-Added Generalization Error Bounds

〇Yoshinao Isobe (National Institute of Advanced Industrial Science and Technology)

Keywords: machine-learned model, generalization error bound, probabilistic certification, evaluation indicator

Currently, evaluation indicators computed on datasets, such as accuracy, precision, and recall, are widely used to evaluate machine-learned models represented by deep neural networks, but such indicators cannot guarantee performance on unseen data that is not included in those datasets. In this presentation, we explain how noise-added generalization error bounds, derived from statistical learning theory, can be used as an evaluation indicator that probabilistically guarantees performance (the incorrect-rate) even on such unseen data, and we report experimental results demonstrating the effectiveness of the indicator. Here, the generalization error is the expected incorrect-rate of a machine-learned model's outputs over all input data drawn according to a probability distribution. In general, the generalization error cannot be computed exactly because the possible input data are innumerable, but there is a large body of related work on bounding it. We apply well-known theorems on training-set-based generalization error bounds, called PAC-Bayesian bounds, to testing sets instead, in order to compute bounds close to the generalization errors.
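
The abstract states no formulas; as a reading aid, the following LaTeX sketch records the standard definition of the generalization error and one classical kl-type test-set bound of the kind alluded to above. The notation is ours, and the noise-added variant presented in the talk is not reproduced here.

    % Generalization error R(h): expected incorrect-rate of model h over the
    % input distribution D; \hat{R}(h) is its empirical estimate on n test samples.
    R(h) = \mathbb{E}_{(x,y)\sim D}\bigl[\mathbf{1}[h(x)\neq y]\bigr],
    \qquad
    \hat{R}(h) = \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}[h(x_i)\neq y_i]

    % Classical test-set bound: with probability at least 1 - \delta over the
    % i.i.d. draw of the n test samples,
    \mathrm{kl}\bigl(\hat{R}(h)\,\big\|\,R(h)\bigr) \le \frac{\ln(1/\delta)}{n},
    \qquad
    \mathrm{kl}(q\,\|\,p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}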
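
Inverting such a bound numerically turns test-set statistics into a probabilistically certified upper bound on the generalization error. The following Python sketch is illustrative only, not the author's implementation; the function names and the example numbers are assumptions.

    import math

    def kl_bernoulli(q, p):
        """KL divergence between Bernoulli(q) and Bernoulli(p)."""
        eps = 1e-12
        q = min(max(q, eps), 1 - eps)
        p = min(max(p, eps), 1 - eps)
        return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

    def kl_inverse_upper(q_hat, budget):
        """Largest p in [q_hat, 1] with kl(q_hat || p) <= budget, by bisection
        (kl(q || p) is increasing in p for p >= q, so bisection is valid)."""
        lo, hi = q_hat, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2.0
            if kl_bernoulli(q_hat, mid) <= budget:
                lo = mid
            else:
                hi = mid
        return lo

    def test_set_bound(n_errors, n, delta):
        """Upper bound on the generalization error that holds with probability
        at least 1 - delta over the i.i.d. draw of the n test samples."""
        return kl_inverse_upper(n_errors / n, math.log(1.0 / delta) / n)

    # Example: 120 errors on 10,000 test samples at 99% confidence
    print(test_set_bound(120, 10_000, 0.01))  # about 0.0156

For small empirical error rates, this kl inversion is noticeably tighter than the simpler Hoeffding bound \hat{R} + sqrt(ln(1/delta) / (2n)), which gives about 0.027 on the same example.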
