10:00 AM - 10:20 AM
[4Q1-IS-2c-04] Human-Aligned Topic Model for Explanations of Image Classification
Keywords: Representation learning, Human-centered computing, Explainable AI
Despite significant research efforts to integrate human judgment into model interpretability, evaluation algorithms in this domain still need to become more efficient. Importantly, human perceptions do not always align with dataset labels. We therefore developed a topic-model architecture to address this discrepancy. Although topic modeling is commonly associated with language models, we introduce a contrastive topic-modeling approach applied to clustering results of human-annotated images. The semi-supervised clustering incorporates human-provided must-link constraints for similar items and cannot-link constraints for dissimilar items. During training, our method aligns the image-patch clustering with the similarity measurement between prototypes and dataset samples in the model. This ensures that the deep neural network, while predicting images, transfers human knowledge from a multi-semantic topic derived from the clustering result to individual samples. The process yields intrinsic global topic explanations that illuminate salient image features and capture both positive and negative relations. Our experiments achieve highly competitive results and provide direct visual-concept examples for ease of understanding.
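The must-link/cannot-link constraints mentioned in the abstract follow the standard semi-supervised clustering formulation (as in COP-KMeans). As a minimal sketch — not the authors' actual implementation — a constraint check during cluster assignment might look like this; the function name and data layout are illustrative assumptions:

```python
def violates_constraints(idx, cluster_id, assignments, must_link, cannot_link):
    """Illustrative COP-KMeans-style check: return True if assigning item
    `idx` to `cluster_id` would break a must-link or cannot-link constraint.

    assignments: dict mapping already-assigned item index -> cluster id
    must_link / cannot_link: lists of (item_a, item_b) index pairs
    """
    for a, b in must_link:
        other = b if a == idx else (a if b == idx else None)
        # A must-link pair must end up in the same cluster.
        if other is not None and other in assignments and assignments[other] != cluster_id:
            return True
    for a, b in cannot_link:
        other = b if a == idx else (a if b == idx else None)
        # A cannot-link pair must never share a cluster.
        if other is not None and assignments.get(other) == cluster_id:
            return True
    return False
```

In a constrained clustering loop, an item is only assigned to the nearest cluster for which this check returns False.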