JSAI2020

Presentation information


General Session » J-2 Machine learning

[4J3-GS-2] Machine learning: Adversarial examples and security

Fri. Jun 12, 2020 2:00 PM - 3:20 PM Room J (jsai2020online-10)

Chair: Hayato Kobayashi (Yahoo Japan Corporation)

2:20 PM - 2:40 PM

[4J3-GS-2-02] Evaluation of Denoising Autoencoder as the countermeasure against White-Box Adversarial Examples Attacks

〇Masahiro Miyazaki1, Kota Yosida1, Takumi Iida1, Haruki Masuda1, Takeshi Fujino1 (1. Ritsumeikan Univ.)

Keywords: AI, Denoising Autoencoder, Adversarial Examples, Gaussian noise

Adversarial examples, which induce misclassification in a CNN by adding a small perturbation to the input, pose an important security issue. As a countermeasure against such attacks, removing the perturbation with a denoising autoencoder (DAE) has been studied. However, in a white-box attack scenario, it is known that adversarial examples can also be generated against the cascaded DAE-CNN. In this paper, we evaluate two types of DAE on the MNIST dataset: an AdvDAE, trained on adversarial examples generated against the CNN model, and a GaussDAE, trained on images with superimposed Gaussian noise. Our experimental results show that the AdvDAE is superior at removing the perturbation of adversarial examples crafted against the CNN model alone. On the other hand, when adversarial examples are generated against the cascaded DAE-CNN, the GaussDAE forces the attacker to use a large perturbation that is easily detected by the human eye.
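For concreteness, the following is a minimal PyTorch sketch of the two defenses and the white-box attack described above. The DAE architecture, the hyperparameters, and the choice of FGSM as the attack are illustrative assumptions; the abstract does not specify the paper's exact models or attack method, and the `cnn` classifier and MNIST `loader` are assumed to be provided.

```python
# Illustrative sketch only: architecture, hyperparameters, and FGSM are
# assumptions, not the paper's confirmed setup.
import torch
import torch.nn as nn

class DAE(nn.Module):
    """Small convolutional denoising autoencoder for 28x28 MNIST images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),        # 7x7 -> 14x14
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),     # 14x14 -> 28x28
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fgsm_on_cnn(cnn, x, y, eps=0.2):
    """FGSM against the undefended CNN (used to build AdvDAE training inputs)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(cnn(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def train_gauss_dae(dae, loader, epochs=10, sigma=0.3, device="cpu"):
    """GaussDAE: learn to map Gaussian-noise-corrupted images back to clean ones."""
    opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
            loss = loss_fn(dae(noisy), x)
            opt.zero_grad(); loss.backward(); opt.step()

def train_adv_dae(dae, cnn, loader, epochs=10, device="cpu"):
    """AdvDAE: learn to map adversarial examples for the CNN back to clean images."""
    opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            adv = fgsm_on_cnn(cnn, x, y)
            loss = loss_fn(dae(adv), x)
            opt.zero_grad(); loss.backward(); opt.step()

def fgsm_on_cascade(dae, cnn, x, y, eps=0.1):
    """White-box FGSM against the cascaded DAE-CNN: gradients flow through the
    DAE, so the perturbation is crafted against the whole defended pipeline."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(cnn(dae(x_adv)), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

The key point of the white-box scenario is visible in `fgsm_on_cascade`: because the attacker differentiates through the DAE as well as the CNN, the defense itself is part of the attacked model, and its effectiveness is measured by how large an eps the attacker needs to succeed.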
