[4Yin2-06] Fair Learning from Crowds
Keywords: machine learning, crowdsourcing, fairness
With the recent development of artificial intelligence (AI) and its spread into many areas of society, a variety of ethical issues have been raised. In response, a large body of research has studied how to prevent machine learning prediction models from making unfair predictions. Crowdsourcing is also widely used as a relatively easy way to collect training data for machine learning. Here, too, efforts have been made to produce non-discriminatory training labels by removing, during label aggregation, biases introduced by the prejudices of crowd workers. However, when actually building a fair prediction model with crowdsourcing, the ultimate goal is not merely to obtain fair training labels but to obtain a fair prediction model. We therefore propose a method that learns an unbiased prediction model directly from crowdsourced labels. By defining fairness criteria and introducing the corresponding fairness constraints into training as regularization terms, we aim to improve prediction accuracy while satisfying fairness. Evaluation experiments show that, compared with existing methods that do not take fairness into account, the proposed method achieves high fairness with little sacrifice in accuracy.
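As a rough illustration of how a fairness constraint can enter training as a regularization term, the sketch below adds a demographic-parity penalty to a logistic-regression loss trained on labels derived from simulated crowd workers. Everything in it, including the synthetic data, the 20% worker noise rate, the majority-vote aggregation, and the regularization strength `lam`, is an assumption for illustration only; the abstract states that the proposed method learns directly from the crowdsourced labels rather than aggregating them first, and its actual formulation is not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic data: features X, sensitive attribute s, crowd labels (all hypothetical) ---
n, d, n_workers = 500, 5, 7
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)                   # sensitive group membership (0/1)
true_y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(int)
# Each simulated worker reports the true label, flipped with probability 0.2.
crowd = np.where(rng.random((n, n_workers)) < 0.2, 1 - true_y[:, None], true_y[:, None])

# Simplification: aggregate crowd labels by majority vote before training.
# (The paper's method instead learns directly from the raw crowd labels.)
y = (crowd.mean(axis=1) >= 0.5).astype(int)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y, s, lam):
    """Gradient of: cross-entropy + lam * (demographic-parity gap)^2."""
    p = sigmoid(X @ w)
    ce_grad = X.T @ (p - y) / len(y)
    # Demographic-parity gap: difference in mean predicted positive rate between groups.
    gap = p[s == 1].mean() - p[s == 0].mean()
    dgap_dw = (X[s == 1] * (p[s == 1] * (1 - p[s == 1]))[:, None]).mean(axis=0) \
            - (X[s == 0] * (p[s == 0] * (1 - p[s == 0]))[:, None]).mean(axis=0)
    return ce_grad + lam * 2.0 * gap * dgap_dw

w = np.zeros(d)
lam = 2.0            # fairness-regularization strength (hypothetical value)
for _ in range(2000):
    w -= 0.1 * grad(w, X, y, s, lam)

p = sigmoid(X @ w)
print("demographic-parity gap:", abs(p[s == 1].mean() - p[s == 0].mean()))
print("accuracy vs. true labels:", ((p >= 0.5).astype(int) == true_y).mean())
```

Increasing `lam` shrinks the parity gap at some cost in accuracy, which mirrors the fairness-accuracy trade-off the abstract describes; setting `lam = 0` recovers an ordinary unconstrained classifier.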