3:30 PM - 3:50 PM
[3D5-GS-2-01] Performance comparison of multiple models with a small amount of labeling
Keywords: Active learning, Performance comparison
Supervised learning typically requires a large amount of data, and labeling costs can be significant when labels are difficult to obtain. The same is true when evaluating a model. Active testing has been proposed to address this challenge by estimating the average test loss while actively labeling only a portion of the test data. In practical machine learning, however, multiple good candidate models are often available rather than just one, and the problem of interest then becomes selecting the best-performing model. In this study, we propose a method for comparing the performance of multiple models with a small amount of labeling by extending the active testing framework.
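The abstract itself contains no code; the following is a minimal, hypothetical sketch of the active-testing idea it builds on, namely labeling only a small, actively chosen subset of the test set and using importance weighting to keep the mean-loss estimate unbiased, here computed for several candidate models at once. The function name, the surrogate scores, and the toy data are illustrative assumptions, not the authors' method.

import numpy as np

def compare_models_active(losses, surrogate_scores, budget, rng):
    """Estimate each model's mean test loss from only `budget` labeled points.

    losses           : (n_points, n_models) per-point losses; in practice these
                       are revealed only for the points we choose to label.
    surrogate_scores : (n_points,) predicted difficulty/loss from some surrogate
                       model (an assumption here), used to pick points to label.
    budget           : number of test points to label.
    """
    n = losses.shape[0]
    q = surrogate_scores + 1e-8
    q = q / q.sum()                                   # acquisition distribution
    idx = rng.choice(n, size=budget, replace=True, p=q)
    # Importance weighting keeps the estimate unbiased even though "hard"
    # points are oversampled: E_q[L_i / (n * q_i)] = (1/n) * sum_i L_i.
    return (losses[idx] / (n * q[idx])[:, None]).mean(axis=0)

# Toy usage: three candidate models, only 50 of 10,000 test labels acquired.
rng = np.random.default_rng(0)
true_losses = rng.gamma(shape=2.0, scale=[0.4, 0.5, 0.6], size=(10_000, 3))
surrogate = true_losses.mean(axis=1)                  # stand-in for a surrogate model
estimates = compare_models_active(true_losses, surrogate, budget=50, rng=rng)
print(estimates, "-> best model:", int(np.argmin(estimates)))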