JSAI2023

Presentation information

General Session

[2K5-GS-2] Machine learning

Wed. Jun 7, 2023 3:30 PM - 5:10 PM Room K (C1)

Chair: Naoya Sogi (NEC) [Online]

3:30 PM - 3:50 PM

[2K5-GS-2-01] An Attempt to Rectify Classification Results Using Vulnerability of Adversarial Examples

〇Fumiya Morimoto1, Keigo Akagaki1, Satoshi Ono1 (1. Kagoshima University)

Keywords:Adversarial Defense, Neural Network, Deep Learning

Deep neural networks (DNNs) have shown high performance in various fields, such as image classification and speech recognition, and are being applied in real-world applications. On the other hand, recent studies have revealed that DNN-based classifiers are vulnerable to Adversarial Examples (AEs), input data with small, carefully crafted perturbations that are difficult for humans to perceive. For this reason, defense methods against AEs have been widely studied. For example, detection methods that discriminate AEs based on features of input samples have been proposed, but they only detect AEs and do not recover their correct categories. While many tasks can simply reject detected AEs, some tasks, such as sign recognition for autonomous driving, require the correct categories of detected AEs. This is because, when a stop sign is attacked, a DNN equipped with such a defense can detect the attacked sign but still cannot recognize it as a stop sign. Such tasks therefore require post-processing in addition to AE detection. For this reason, we propose a label rectification method for AEs detected by a defense method, that is, a method that estimates the correct labels of the original images underlying the AEs. The proposed method, which is based on counter-attacking, can correct the misclassification results back to those of the original images.
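The abstract does not give implementation details of the counter-attack-based rectification, so the following is only a minimal sketch of one way such a scheme could look, assuming a PyTorch classifier and an FGSM-style untargeted counter-perturbation; the function name, the epsilon value, and the overall procedure are illustrative assumptions, not the authors' actual method.

```python
import torch
import torch.nn.functional as F

def rectify_label(model, x_adv, epsilon=0.03):
    """Illustrative sketch: counter-attack a detected adversarial example
    and return the label it flips to as the rectified prediction.
    (Hypothetical procedure; not taken from the paper.)"""
    model.eval()
    x = x_adv.clone().detach().requires_grad_(True)

    # Current (presumably wrong) prediction on the adversarial example.
    logits = model(x)
    adv_label = logits.argmax(dim=1)

    # Untargeted FGSM-style counter-perturbation that pushes the sample
    # away from the adversarial label. The intuition is that an AE sits
    # close to the decision boundary of its original class, so a small
    # push often flips it back toward the original label.
    loss = F.cross_entropy(logits, adv_label)
    loss.backward()
    x_counter = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)

    # Re-classify the counter-attacked sample and use its label
    # as the rectified classification result.
    with torch.no_grad():
        rectified_label = model(x_counter).argmax(dim=1)
    return rectified_label
```

This exploits the same vulnerability that made the AE effective in the first place: because the perturbed sample lies near the boundary of its true class, a small counter-perturbation tends to return it there, whereas the paper's concrete attack strategy and rectification criteria may differ.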
