[3Xin4-51] Explainable Data Bias Mitigation
Keywords: AI ethics, AI fairness, AI explainability
We propose an explainable fairness method that not only mitigates data bias but also lets humans understand the reason for the mitigation. Machine learning algorithms are used in high-risk settings, such as hiring and loan decisions, that demand fairness, accountability, and transparency. Differences in AI predictions caused by sensitive attributes, such as gender, race, and age, have become a fairness issue. Although various bias-mitigation methods for AI have been proposed, conventional methods do not allow humans to intuitively understand on what basis data bias mitigation was performed. We therefore propose a bias-mitigation method based on explainable AI, which allows humans to understand how bias mitigation is achieved. Experimental results of applying this method to credit scoring on the German Credit dataset show that the statistical parity difference with respect to gender improved from -0.108 to -0.004.
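The statistical parity difference reported above is a standard group-fairness metric: the favorable-outcome rate of the unprivileged group minus that of the privileged group, with 0 indicating parity. A minimal sketch of how it can be computed (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged).

    0 means parity; a negative value means the unprivileged group
    receives the favorable outcome less often.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)  # 1 = privileged, 0 = unprivileged
    rate_unpriv = y_pred[sensitive == 0].mean()
    rate_priv = y_pred[sensitive == 1].mean()
    return rate_unpriv - rate_priv

# Toy example: the privileged group gets the favorable outcome (1) more often.
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_pred, sensitive))  # -0.5
```

On this toy data the privileged rate is 0.75 and the unprivileged rate is 0.25, so the metric is -0.5; the paper's reported improvement from -0.108 to -0.004 corresponds to moving this value toward 0.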