10:20 AM - 10:40 AM
[4G1-OS-4a-02] Learning Concept-based Explainable Model that Guarantees the False Discovery Rate
Keywords: Explainable AI, Statistical Significance, Variational Auto-Encoder
Explaining a deep learning model through concepts is a common approach to interpretability. However, there is no guarantee that every learned concept is actually important for the prediction. In this study, we propose a method that selects the concepts important for prediction while controlling the false discovery rate (FDR) at a given level. Our method represents concepts with latent variables learned by a Variational Autoencoder (VAE) and applies a variable-selection framework called knockoffs to identify the statistically significant concepts. Experiments on multiple datasets show that the concepts selected by the proposed method are interpretable, and that high accuracy is achieved even when predictions are made from the selected concepts alone.
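The abstract only outlines the pipeline, so below is a minimal sketch of the selection step under stated assumptions, not the authors' implementation: the VAE is already trained, its latent codes Z (one row per sample) are approximately Gaussian, which is the usual justification for Gaussian model-X knockoffs, and the sample size comfortably exceeds the latent dimension. The sketch uses the equicorrelated second-order knockoff construction with lasso coefficient-difference statistics and the knockoff+ threshold; the abstract does not specify these choices, and the function names (`gaussian_knockoffs`, `select_concepts`) are illustrative.

```python
import numpy as np
from sklearn.linear_model import LassoCV


def gaussian_knockoffs(Z, rng):
    """Second-order Gaussian model-X knockoffs (equicorrelated construction).

    Z: (n, p) matrix of standardized VAE latent codes, treated as roughly
    Gaussian with n >> p so the sample covariance is well conditioned.
    """
    p = Z.shape[1]
    Sigma = np.cov(Z, rowvar=False)                  # ~ correlation matrix
    Sigma_inv = np.linalg.inv(Sigma)
    lam_min = np.linalg.eigvalsh(Sigma).min()
    s = np.full(p, 0.999 * min(1.0, 2.0 * lam_min))  # valid equicorrelated s_j
    S = np.diag(s)
    cond_mean = Z - Z @ Sigma_inv @ S                # E[Z_tilde | Z]
    cond_cov = 2.0 * S - S @ Sigma_inv @ S           # Cov[Z_tilde | Z]
    L = np.linalg.cholesky(cond_cov + 1e-10 * np.eye(p))
    return cond_mean + rng.standard_normal(Z.shape) @ L.T


def select_concepts(Z, y, fdr=0.1, seed=0):
    """Return indices of latent 'concepts' selected at the target FDR level."""
    rng = np.random.default_rng(seed)
    Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)         # standardize latents once
    p = Z.shape[1]
    Z_aug = np.hstack([Z, gaussian_knockoffs(Z, rng)])
    beta = LassoCV(cv=5).fit(Z_aug, y).coef_
    W = np.abs(beta[:p]) - np.abs(beta[p:])          # coefficient-difference stats
    # Knockoff+ threshold: smallest t with (1 + #{W <= -t}) / #{W >= t} <= fdr
    for t in np.sort(np.abs(W[W != 0])):
        if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= fdr:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)                   # nothing passes the filter
```

In use, Z would come from the trained VAE encoder (e.g., the posterior means of each latent dimension), and `select_concepts(Z, y, fdr=0.1)` would return the concept indices kept at the 10% FDR target; for a classification target, an l1-penalized logistic regression would replace `LassoCV` as the statistic-generating model.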