6:00 PM - 6:20 PM
[2M6-OS-19d-03] Unlabelled Test sets Evaluation with Differentiable Automatic Data Augmentation
Keywords: Unlabelled Test Evaluation, Data Augmentation, Deep Learning
Applied research on deep learning and the development of open datasets are both advancing. Evaluating a model's accuracy in real-world deployment requires annotating the collected data, but when operation under diverse conditions is assumed, the volume of collected data becomes enormous and annotation becomes a bottleneck. In this study, we therefore address unlabeled test-set evaluation using only labeled training data. Given a model (the predictor) that can predict the accuracy a trained model (the classifier) achieves on a given dataset, the accuracy of an unlabeled test set can be estimated. To train the predictor, we prepare datasets (meta-sets) transformed from the labeled training data. Since each meta-set inherits the labels of the original dataset, the classifier's true accuracy on it is known, and the task can be formulated as a regression problem on accuracy. Here, the method of creating the meta-sets, which serve as the predictor's training data, is crucial. We propose a method that computes the required meta-set statistics from the test-set statistics and creates the desired meta-sets using differentiable data augmentation. We report experimental results on benchmark datasets such as MNIST and SVHN.
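To make the meta-set idea concrete, the following is a minimal sketch (not the authors' implementation) of accuracy regression from dataset statistics. Assumptions for illustration: the classifier is a logistic regression, meta-sets are produced with Gaussian-noise augmentation of varying strength (the proposed method instead optimizes a differentiable augmentation to match target test-set statistics), and the dataset statistic is a Fréchet distance between feature means and covariances.

```python
# Sketch of unlabeled test-set accuracy prediction via meta-sets.
# All component choices (classifier, augmentation, statistic) are
# illustrative assumptions, not the paper's exact pipeline.
import numpy as np
from scipy import linalg
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

def frechet_distance(a, b):
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu_a, mu_b = a.mean(0), b.mean(0)
    cov_a, cov_b = np.cov(a, rowvar=False), np.cov(b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b).real
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a + cov_b - 2 * covmean))

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Build meta-sets: augmented copies of the labeled training data. Because
# labels are preserved, each meta-set's true accuracy is computable and
# serves as the regression target.
stats, accs = [], []
for sigma in np.linspace(0.0, 8.0, 20):
    noise = np.random.default_rng(0).normal(0.0, 1.0, X_train.shape)
    X_meta = X_train + sigma * noise
    stats.append([frechet_distance(X_meta, X_train)])
    accs.append(clf.score(X_meta, y_train))

# The "predictor": regress classifier accuracy on the dataset statistic.
predictor = LinearRegression().fit(stats, accs)

# Predict the accuracy of the (nominally unlabeled) test set from its
# statistic alone; y_test is used here only to verify the prediction.
test_stat = [[frechet_distance(X_test, X_train)]]
print("predicted accuracy:", predictor.predict(test_stat)[0])
print("actual accuracy:   ", clf.score(X_test, y_test))
```

In this sketch the augmentation strengths are swept over a fixed grid; the differentiable-augmentation step described in the abstract would instead backpropagate through the augmentation parameters to hit the statistics computed from the unlabeled test set.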