5:00 PM - 5:20 PM
[3N5-OS-11b-05] Robustness of Fair Machine Learning Algorithm against Distribution Shift
Keywords: fairness, distribution shift
Fairness in machine learning addresses the problem that a learned model outputs decisions biased against individuals' sensitive attributes, such as race and gender; it has been recognized as a crucial problem in the machine learning community, and many researchers have devoted effort to developing fair machine learning algorithms. These algorithms, however, are typically designed for the situation where the sample distributions in the training and test phases are identical, an assumption that may not hold in practice. For example, suppose we build a fair machine learning model from applicants' resumes from five years ago to predict hiring decisions. The rules governing hiring decisions may well have changed over those five years as social conditions changed. When the sample distribution shifts in this way, a model that is fair on the training sample might be unfair on the test sample. In this paper, we assess the possibility that such a situation occurs even under a small change in the sample distribution. To this end, we develop an algorithm that generates a test sample distribution under which the learned model would be unfair. We also demonstrate through empirical experiments that the developed algorithm can generate such unfair test sample distributions against existing fair learning algorithms.
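The abstract does not specify how the adversarial test distribution is constructed. As a minimal sketch of the underlying idea, the code below reweights a synthetic test sample, within a small perturbation budget, so that a fixed classifier's demographic-parity gap grows. The synthetic data, the variable names, and the greedy reweighting rule are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic test set: feature x, binary sensitive attribute a, and the
# hard decisions yhat of a fixed, previously trained model (all
# hypothetical; the paper's actual setup is not given in the abstract).
n = 2000
a = rng.integers(0, 2, size=n)
x = rng.normal(a * 0.2, 1.0, size=n)   # mild correlation with a
yhat = (x > 0.0).astype(float)         # model's decisions on the test set

def demographic_parity_gap(yhat, a, w):
    """|P_w(yhat=1 | a=1) - P_w(yhat=1 | a=0)| under sample weights w."""
    p1 = np.average(yhat[a == 1], weights=w[a == 1])
    p0 = np.average(yhat[a == 0], weights=w[a == 0])
    return abs(p1 - p0)

# Fairness gap under the original (uniform) test distribution.
base_gap = demographic_parity_gap(yhat, a, np.ones(n))

# Greedy adversarial reweighting: within a budget eps, upweight the
# samples that widen the gap and downweight the rest -- a crude
# stand-in for searching over nearby test distributions.
p1_raw = yhat[a == 1].mean()
p0_raw = yhat[a == 0].mean()
direction = 1.0 if p1_raw >= p0_raw else -1.0
score = direction * (2 * a - 1) * (2 * yhat - 1)   # +1 widens the gap
eps = 0.3
w_adv = 1.0 + eps * score                          # stays positive for eps < 1
adv_gap = demographic_parity_gap(yhat, a, w_adv)
```

Under this toy shift the model looks less fair than on the original sample (`adv_gap >= base_gap`), illustrating the paper's point that even a small change in the test distribution can break a fairness guarantee established at training time.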