Adversarial examples are malicious inputs created by adding a small perturbation to original input data, causing a classifier to output a wrong result. Such inputs are a serious concern for the safety and security of AI systems deployed in real-world settings. In this work, we present experimental results indicating when adversarial examples generated by the fast gradient sign method (FGSM) are effective, and when they are not, against a one-class classifier based on the reconstruction loss of a generative adversarial network.
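The fast gradient sign method perturbs an input in the direction of the sign of the loss gradient with respect to that input: x_adv = x + eps * sign(∇_x L(x, y)). The sketch below illustrates the attack on a logistic-regression model, a hypothetical stand-in chosen only because its gradient has a closed form; the paper itself attacks a GAN-based one-class classifier, whose loss and architecture are not reproduced here.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One FGSM step against a logistic-regression model (illustrative
    stand-in; not the GAN-based classifier studied in the paper)."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Usage: perturb a correctly handled input so the model's loss increases.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.5])
y = 1.0
x_adv = fgsm(x, y, w, b, eps=0.1)
```

Because only the sign of the gradient is used, every coordinate of the perturbation has magnitude exactly eps, which keeps the change imperceptibly small in the L-infinity sense while still moving the loss uphill.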