2:00 PM - 2:20 PM
[4J3-GS-2-01] On the Effectiveness of Adversarial Examples for One-Class Classifier
Keywords: Adversarial Examples, Deep Learning
Adversarial examples are inputs crafted by adding a small perturbation to an input image so that a classifier outputs a wrong result. Attack methods that produce such malicious inputs are a serious concern for the safety and security of real-world AI systems, and they have been extensively researched in recent years. In this work, unlike previous research dealing with multi-class classifiers, we show the effectiveness of adversarial examples against one-class classifiers. Concretely, we present an experimental result indicating that the fast gradient sign method (FGSM) attack is effective against a one-class classifier based on the reconstruction loss of a variational autoencoder.
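To make the attack setting concrete, below is a minimal PyTorch sketch of a one-step FGSM attack against a detector that scores inputs by reconstruction error. The `vae` callable, the MSE reconstruction loss, and the `epsilon` value are illustrative assumptions, not details taken from the paper; the authors' actual model and loss may differ.

```python
import torch

def fgsm_on_reconstruction_loss(vae, x, epsilon=0.05):
    """One-step FGSM against a VAE-based one-class classifier.

    The detector scores an input by its reconstruction error; we
    perturb x in the direction that increases that error so a
    normal input is pushed over the anomaly threshold. `vae` is
    assumed to map x to its reconstruction (hypothetical interface).
    """
    x = x.clone().detach().requires_grad_(True)
    recon = vae(x)                                  # x_hat = decode(encode(x))
    loss = torch.nn.functional.mse_loss(recon, x)   # reconstruction loss as anomaly score
    loss.backward()
    # FGSM step: move the input along the sign of its gradient
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()           # keep a valid image range
```

Flipping the sign of the step would cover the converse attack, in which an out-of-distribution input is perturbed to lower its reconstruction error and evade detection.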