4:00 PM - 4:20 PM
[3I4-GS-7a-03] Countermeasure of Adversarial Example Attack: Smoothing Filter-Based Denoising Technique
Keywords: Adversarial Example, Denoising
With the development of AI technology, AI has increasingly been used in non-critical fields where failures do not cause loss of life or environmental pollution, and it is expected to be introduced into critical fields such as critical infrastructure systems and automobiles in the future. In academia, many security attacks have been reported, such as the Adversarial Example attack, in which maliciously perturbed input is given to a model to cause misjudgment. In light of these circumstances, AI security measures are recommended in AI guidelines formulated in Japan and overseas. Against this background, we are investigating countermeasures against Adversarial Example attacks. In this paper, we present a denoising technique based on a smoothing filter that processes the input data without affecting the trained model, removing adversarial noise by smoothing the brightness of the image. The proposed denoising method achieves about 85% classification accuracy on Adversarial Examples.
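The abstract does not specify the filter type or parameters; the following is a minimal sketch, assuming a simple average (box) smoothing filter applied as a model-agnostic preprocessing step. The function name smooth_denoise, the kernel size, and the stand-in input are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_denoise(image: np.ndarray, kernel_size: int = 3) -> np.ndarray:
    """Average (box) smoothing filter: replaces each pixel's brightness with
    the mean of its kernel_size x kernel_size neighborhood, attenuating the
    small, high-frequency perturbations typical of adversarial examples.
    (Assumed filter choice; the paper's exact filter is not stated here.)"""
    # Smooth only the two spatial axes; leave any trailing channel axis as-is.
    size = (kernel_size, kernel_size) + (1,) * (image.ndim - 2)
    return uniform_filter(image.astype(np.float32), size=size)

# Usage sketch: denoise the (possibly adversarial) input before classification.
adversarial = np.random.rand(28, 28).astype(np.float32)  # stand-in image
denoised = smooth_denoise(adversarial, kernel_size=3)
# prediction = model.predict(denoised[np.newaxis, ..., np.newaxis])  # hypothetical model
```

Because the filtering happens entirely on the input side, the trained model itself is untouched, which matches the paper's stated goal of a countermeasure that does not affect the training model.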