JSAI2021

Presentation information

General Session » GS-7 Vision, speech media processing

[3I4-GS-7a] Image and Speech Media Processing: Fundamentals

Thu. Jun 10, 2021 3:20 PM - 5:00 PM Room I (GS room 4)

Chair: Koji Okabe (NEC)

4:00 PM - 4:20 PM

[3I4-GS-7a-03] Countermeasure of Adversarial Example Attack: Smoothing Filter-Based Denoising Technique

〇Chiaki Otahara1, Masayuki Yoshino1, Ken Naganuma1, Yumiko Togashi1, Sasa Shinya1, Non Kawana1, Kyohei Yamamoto1 (1. Hitachi, Ltd.)

Keywords: Adversarial Example, Denoising

With the development of AI technology, the use of AI has been spreading in non-critical fields where failures do not cause loss of life or environmental pollution, and AI is expected to be introduced into critical fields such as critical infrastructure systems and automobiles in the future. In the academic field, many security attacks have been reported, such as the Adversarial Example Attack, in which maliciously crafted input causes a model to make an incorrect decision. In light of these circumstances, AI security measures are recommended in AI guidelines formulated in Japan and overseas. Against this background, we are investigating countermeasures against the Adversarial Example Attack. In this paper, we present a denoising technique that processes the input data without modifying the trained model: a smoothing filter that removes adversarial noise by smoothing the brightness of the image. The proposed denoising method achieves correct decisions with about 85% accuracy on Adversarial Examples.
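As a rough illustration of the idea described in the abstract, the sketch below applies a smoothing (Gaussian) filter to an input image before it is passed to a classifier, leaving the model itself untouched. The filter choice, the sigma parameter, and the `model.predict` interface are assumptions made for illustration, not the authors' exact implementation.

```python
# Minimal sketch of smoothing-filter-based denoising as an input
# preprocessing step. Filter type, parameters, and the classifier
# interface are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter


def denoise_smoothing(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Smooth pixel intensities to suppress small adversarial perturbations.

    image: H x W x C array with values in [0, 1].
    sigma: Gaussian kernel standard deviation (hypothetical default).
    """
    # Smooth each color channel independently so channels are not mixed.
    smoothed = gaussian_filter(image, sigma=(sigma, sigma, 0))
    # Keep pixel values in the valid range after filtering.
    return np.clip(smoothed, 0.0, 1.0)


def classify_with_denoising(model, image: np.ndarray) -> int:
    """Denoise the input, then run inference; the trained model is unchanged."""
    clean = denoise_smoothing(image)
    # `model.predict` is a placeholder for any image classifier interface.
    scores = model.predict(clean[np.newaxis, ...])
    return int(np.argmax(scores))
```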
