9:25 AM - 9:50 AM
[3M1-CC-02] Towards Understanding The Space of Unrobust Features of Neural Networks
Although convolutional neural networks have achieved monumental success on a variety of computer vision tasks, it remains extremely challenging to build a neural network with unquestionable reliability. Previous work has demonstrated that deep neural networks can be efficiently fooled by perturbations to the input that are imperceptible to humans, which reveals their instability under interpolation. Like a human, an ideally trained neural network should be constrained to the desired inference space and remain correct under both interpolation and extrapolation. In this paper, we develop a technique to verify the correctness of convolutional neural networks when they extrapolate beyond the training data distribution by generating legitimate feature-broken images, and we show that the decision boundary of a convolutional neural network is not well formulated with respect to image features when extrapolating.
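To make the cited phenomenon concrete: the "imperceptible perturbation" attacks referenced above are typically illustrated by the Fast Gradient Sign Method (Goodfellow et al., 2015). The sketch below is not the authors' feature-breaking technique, which the abstract does not detail; it is only a minimal, generic example (assuming a PyTorch classifier `model` and a normalized image in [0, 1]) of how a small input perturbation can flip a network's prediction.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Minimal FGSM sketch: add a small perturbation that increases the loss.

    Assumptions (not from the paper): `model` maps a batch of images to logits,
    `image` is a float tensor in [0, 1], `label` holds the true class indices.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction of the sign of the input gradient, then clamp
    # back to the valid pixel range so the change stays visually negligible.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With a typical `epsilon` of a few percent of the pixel range, the perturbed image is visually indistinguishable from the original yet is often misclassified, which is the instability-under-interpolation behavior the abstract contrasts with extrapolation failures.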