2:20 PM - 2:40 PM
[4G3-OS-4b-02] A Study on Purpose-attributing Explanation in Explainable AI
[Online]
Keywords: Explainable AI
The growing need for interpretability of artificial intelligence (AI)-based systems has led to research on explainable AI systems, which explain their own behavior. Previous research has mainly focused on explanations that describe the triggering cause and underlying mechanism of a given behavior. However, given the expected benefits of explainable AI, another kind of explanation seems worth considering: purpose-attributing explanation. The main purpose of this paper is to illustrate what purpose-attributing explanation of behavior is and how important it is for the social acceptance of artifacts (including AI systems), from the viewpoint of the philosophy of biology, mind, and artifacts. First, we review the expected benefits of explainable AI and previous research; next, we illustrate purpose-attributing explanation and its importance for the social acceptance of artifacts; and then we consider how to identify the purpose or function of the behavior of AI systems.