6:50 PM - 7:10 PM
[2L6-OS-19b-05] Estimating Generalization Error Bounds for Worst Weight-Perturbed Neural Classifiers
Keywords: neural classifier, generalization error bound, worst weight-perturbation, statistical guarantee
Neural classifiers are commonly evaluated with dataset-level indicators such as accuracy, precision, and recall, but such indicators offer no performance guarantee for unseen data outside the dataset. In this presentation, we propose a method that statistically guarantees upper bounds on the expected values (i.e., generalization errors) of the misclassification rates of worst weight-perturbed classifiers, covering any input data including unseen data. Here, a worst weight-perturbation is a perturbation imposed on the weight parameters that causes misclassification, if possible, within a given perturbation range. Such upper bounds can be estimated from randomly selected perturbations, but random selection alone generally fails to find the worst weight-perturbations. We therefore combine random selection with a gradient-based search to make the proposed method practical. We experimentally demonstrate that the method can estimate the generalization error bounds of worst weight-perturbed classifiers, and we discuss its usefulness for evaluating classifiers.
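The abstract's combination of random selection and gradient-based search can be sketched as follows. This is only an illustrative toy, not the authors' implementation: the 2-D Gaussian data, the hand-set linear classifier, the sign-gradient ascent on a logistic loss, and the Hoeffding-style confidence term are all assumptions chosen to make the idea concrete in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two 2-D Gaussian blobs and a hand-set linear classifier.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)),
               rng.normal(1.0, 1.0, (n // 2, 2))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

w0, b0 = np.array([1.0, 1.0]), 0.0  # "trained" weights (illustrative)

def error_rate(w, b):
    """Misclassification rate of the linear classifier w.x + b > 0."""
    pred = (X @ w + b > 0).astype(float)
    return float(np.mean(pred != y))

def worst_case_error(eps, n_restarts=10, n_steps=20):
    """Approximate the worst weight-perturbation within an L-inf ball of
    radius eps: random restarts (random selection) followed by sign-gradient
    ascent on the logistic loss (gradient-based search)."""
    step = eps / 4
    worst = error_rate(w0, b0)  # the unperturbed rate is a lower bound
    for _ in range(n_restarts):
        dw = rng.uniform(-eps, eps, size=2)
        db = rng.uniform(-eps, eps)
        for _ in range(n_steps):
            z = X @ (w0 + dw) + (b0 + db)
            p = 1.0 / (1.0 + np.exp(-z))  # sigmoid probabilities
            g = p - y                     # dLoss/dz of the logistic loss
            gw, gb = X.T @ g / n, g.mean()
            # Ascend the loss, then project back into the eps-ball.
            dw = np.clip(dw + step * np.sign(gw), -eps, eps)
            db = np.clip(db + step * np.sign(gb), -eps, eps)
        worst = max(worst, error_rate(w0 + dw, b0 + db))
    return worst

eps = 0.5
emp = worst_case_error(eps)
# A Hoeffding-style confidence term turns the empirical rate into a
# high-probability upper bound over the i.i.d. sample; this is a generic
# illustration, not necessarily the authors' exact statistical guarantee.
delta = 0.05
bound = emp + float(np.sqrt(np.log(1.0 / delta) / (2.0 * n)))
print(f"empirical worst-case error: {emp:.3f}, upper bound: {bound:.3f}")
```

The random restarts stand in for the abstract's random selection of perturbations, while the inner loop plays the role of the gradient-based search that makes finding near-worst perturbations tractable.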