Presentation information

Organized Session

Organized Session » OS-4

[4G1-OS-4a] Interdisciplinary Approaches to Privacy, Fairness, Accountability, and Transparency in Artificial Intelligence (1/2)

Fri. Jun 17, 2022 10:00 AM - 11:20 AM, Room G

Organizers: Kazuto Fukuchi (University of Tsukuba) [on-site], Hiromi Arai (RIKEN), Fumiko Kudo (Osaka University)

10:20 AM - 10:40 AM

[4G1-OS-4a-02] Learning Concept-based Explainable Model that Guarantees the False Discovery Rate

〇Kaiwen Xu1,2, Kazuto Fukuchi1,2, Youhei Akimoto1,2, Jun Sakuma1,2 (1. University of Tsukuba, 2. RIKEN Center for Advanced Intelligence Project)

Keywords:Explainable AI, Statistical Significance, Variational Auto-Encoder

Explaining a deep learning model through concepts is a common approach to interpretability. However, there is no guarantee that every extracted concept is actually important for the prediction. In this study, we propose a method to select the concepts important for prediction while controlling the false discovery rate (FDR). Our method represents concepts with latent variables learned by a Variational Autoencoder (VAE) and applies the Knockoffs variable-selection framework to identify statistically significant concepts. In experiments on multiple datasets, we show that the concepts selected by the proposed method are interpretable, and that high accuracy can be achieved even when predictions are made from the selected concepts alone.
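The FDR control in the abstract comes from the knockoff+ selection rule: given an importance statistic W_j per concept (large positive values suggest a real concept, sign-symmetric values suggest a null), one selects all concepts above a data-dependent threshold. The sketch below shows only this generic thresholding step on synthetic statistics, not the paper's VAE-based pipeline; the variable names and toy data are illustrative assumptions.

```python
import numpy as np

def knockoff_threshold(W, q=0.1):
    """Knockoff+ threshold: the smallest t > 0 such that
    (1 + #{j : W_j <= -t}) / max(1, #{j : W_j >= t}) <= q."""
    for t in np.sort(np.abs(W[W != 0])):
        fdp_estimate = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_estimate <= q:
            return t
    return np.inf  # no threshold achieves the target FDR

# Toy importance statistics (hypothetical, not from the paper):
# 10 "signal" concepts with large positive W, 40 null concepts
# whose statistics are symmetric about zero.
rng = np.random.default_rng(0)
W = np.concatenate([
    rng.normal(5.0, 1.0, 10),
    rng.normal(0.0, 1.0, 40) * rng.choice([-1.0, 1.0], 40),
])
T = knockoff_threshold(W, q=0.2)
selected = np.where(W >= T)[0]  # indices of concepts kept at FDR level 0.2
```

By construction, the returned threshold satisfies the estimated-FDP bound whenever it is finite, which is what yields the provable FDR guarantee for the selected set.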
