5:20 PM - 5:40 PM
[1P5-OS-16b-02] Toward Stakeholder-in-the-Loop Fair and Explainable Decision Makings
[[Online]]
Keywords: AI, AI Ethics, Explainability, Fairness, Stakeholder-in-the-Loop
Because of the opacity of machine learning techniques, accountability and fairness must be ensured when machine learning is used in decision-support systems in government and business. The requirements for accountability and fairness depend on the values of the stakeholders affected by the system's decisions. However, there has been little discussion of what outputs are appropriate for each stakeholder. This paper proposes a "Stakeholder-in-the-Loop Fair Decisions" framework for determining accountability and fairness requirements and discusses how to consider appropriate outputs for the four stakeholders. In addition, as an example of our efforts to extract the diverse values of stakeholders and integrate them into an output that all stakeholders agree on, we introduce an empirical study of stakeholders in job-matching AI conducted through a crowdsourcing experiment. To ensure the accountability and fairness of job-matching AI, we explore the possibility of a system that numerically extracts stakeholders' values through questionnaires and explains who benefits and who loses in the integrated output.