JSAI2020

Presentation information

General Session

General Session » J-2 Machine learning

[3E1-GS-2] Machine learning: Explainable AI (1)

Thu. Jun 11, 2020 9:00 AM - 10:40 AM Room E (jsai2020online-5)

Chair: Masakazu Ishihata (NTT)

10:00 AM - 10:20 AM

[3E1-GS-2-04] Local model explanation with linguistic approach

〇Takumi Yanagawa¹, Fumihiko Terui¹ (1. IBM Japan, Ltd.)

Keywords: explainability, machine learning, text classification

Machine Learning (ML) technology has been applied in many domains, and the importance of ML model interpretability has been increasing. One way to explain ML models is Local Interpretable Model-agnostic Explanations (LIME), which makes an ML model explainable by fitting a human-interpretable surrogate model in the local vicinity of the input data. For ML models addressing natural-language tasks, however, it is difficult to define the surrogate model and the local vicinity when the model to be explained captures linguistic effects. In this paper, we introduce a new method focusing on functional words and the effects of word permutation. We conducted experiments with this method using a sentiment analysis model, and the results show that the effects of functional words are properly explained. Furthermore, we show that the method can be combined with functional word estimation.
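The baseline LIME procedure for text that the abstract builds on can be sketched as follows: mask random subsets of words, score each perturbation with the black-box model, and attribute to each word its proximity-weighted effect on the score. This is a minimal illustrative sketch only; the toy model, the function names, and the simplified attribution (a weighted difference of means standing in for LIME's weighted linear fit) are assumptions, not the authors' code.

```python
import random

# Toy additive sentiment model (assumption: stands in for any
# black-box text classifier; positive score = positive sentiment).
WEIGHTS = {"good": 1.0, "great": 1.5, "not": -2.0, "bad": -1.2}

def black_box(tokens):
    """Score a token list with the toy model."""
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def lime_like_explain(tokens, n_samples=500, seed=0):
    """Perturb the input by keeping/dropping each word at random,
    score each perturbation with the black box, and attribute to
    each word the proximity-weighted difference in mean score
    between samples that keep it and samples that drop it."""
    rng = random.Random(seed)
    d = len(tokens)
    sums = [[0.0, 0.0] for _ in range(d)]   # weighted score sums per word
    wts = [[0.0, 0.0] for _ in range(d)]    # weight totals per word
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(d)]
        kept = [t for t, m in zip(tokens, mask) if m]
        score = black_box(kept)
        proximity = sum(mask) / d           # closer perturbations count more
        for i, m in enumerate(mask):
            sums[i][m] += proximity * score
            wts[i][m] += proximity
    return {
        tokens[i]: sums[i][1] / max(wts[i][1], 1e-9)
                 - sums[i][0] / max(wts[i][0], 1e-9)
        for i in range(d)
    }

attribution = lime_like_explain("this movie is not bad".split())
```

Because the surrogate treats each word independently, this baseline attributes "not" and "bad" separately and cannot express their joint negation effect, which is the kind of linguistic interaction the abstract's functional-word approach targets.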
