JSAI2020

Presentation information

Organized Session

[4N2-OS-26a] OS-26 (1)

Fri. Jun 12, 2020 12:00 PM - 1:40 PM Room N (jsai2020online-14)

Nobutsugu Kanzaki, Minao Kukita, Hiromitsu Hattori

12:00 PM - 12:20 PM

[4N2-OS-26a-01] Vulnerability to Adversarial Attacks caused by Fairness

〇Koki Wataoka1, Takashi Matsubara1, Kuniaki Uehara1 (1. Kobe University)

Keywords: Fairness, Adversarial Attacks, Machine Learning, Classifier

It is essential to ensure that classifiers are fair with respect to sensitive attributes such as gender and age, and many methods have been proposed to make classifiers fair. However, the security of fair classifiers has not been discussed. To secure machine learning, we must consider adversarial attacks, which degrade the accuracy of classifiers through small input perturbations. In this paper, we demonstrate that fair classifiers are vulnerable to adversarial attacks. Our experiments show that fair classifiers are less robust against adversarial attacks than ordinary classifiers, and hence exhibit worse classification accuracy and fairness under attack.
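The evaluation the abstract describes, measuring how much an adversarial perturbation degrades a classifier's accuracy, can be illustrated with a standard attack. The following is a minimal sketch, assuming a PyTorch classifier and data loader; it uses the fast gradient sign method (FGSM, Goodfellow et al., 2015), a common baseline attack, since the abstract does not name the specific attack the authors evaluated.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.1):
        """FGSM: perturb each input by epsilon in the direction
        that increases the classification loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step epsilon along the sign of the input gradient.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def accuracy_under_attack(model, loader, epsilon=0.1):
        """Compare clean accuracy with accuracy on FGSM examples."""
        model.eval()
        clean, adv, n = 0, 0, 0
        for x, y in loader:
            x_adv = fgsm_attack(model, x, y, epsilon)  # needs gradients
            with torch.no_grad():
                clean += (model(x).argmax(1) == y).sum().item()
                adv += (model(x_adv).argmax(1) == y).sum().item()
            n += y.numel()
        return clean / n, adv / n

To reproduce the comparison the authors report, one would run this evaluation on both an ordinary classifier and a fairness-constrained one, and additionally compute a fairness metric (e.g., demographic parity) on the adversarial examples to observe the degradation of fairness under attack.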
