12:00 PM - 12:20 PM
[4N2-OS-26a-01] Vulnerability to Adversarial Attacks caused by Fairness
Keywords: Fairness, Adversarial Attacks, Machine Learning, Classifier
It is essential to ensure that classifiers are fair with respect to attributes such as gender and age, and many methods have been proposed to make classifiers fair. However, the security of fair classifiers has not been discussed. To secure machine learning systems, we must consider adversarial attacks that degrade classification accuracy. In this paper, we demonstrate that fair classifiers are vulnerable to adversarial attacks. Our experiments show that fair classifiers are less robust against adversarial attacks than standard classifiers, and hence exhibit worse classification accuracy and fairness under attack.
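The abstract does not state which attack the authors used. As an illustration only, the following minimal sketch shows how one might compare the adversarial robustness of a standard and a fairness-constrained classifier using the common FGSM attack in PyTorch; the models and data loader (standard_model, fair_model, loader) are hypothetical placeholders, not artifacts of the paper.

    # Minimal FGSM robustness comparison sketch (assumptions, not the paper's code).
    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.1):
        # FGSM: x' = x + epsilon * sign(grad_x of the loss w.r.t. the input)
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def attacked_accuracy(model, loader, epsilon=0.1):
        # Accuracy of `model` on FGSM-perturbed inputs from `loader`.
        correct, total = 0, 0
        for x, y in loader:
            x_adv = fgsm_attack(model, x, y, epsilon)
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.size(0)
        return correct / total

    # Hypothetical usage: the paper's finding would correspond to
    # attacked_accuracy(fair_model, loader) < attacked_accuracy(standard_model, loader)

In this setup, a lower attacked accuracy for the fair model than for the standard model would reflect the vulnerability the paper reports.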