JSAI2023

Presentation information

General Session


[3L5-GS-11] AI and Society

Thu. Jun 8, 2023 3:30 PM - 5:10 PM Room L (C2)

Chair: Shinichi Shirakawa (Yokohama National University) [On-site]

4:10 PM - 4:30 PM

[3L5-GS-11-03] Uncertainty and Explanation-based Human Debugging of Text Classification Model

〇Masato Ota1, Faisal Hadiputra1 (1. Information Services International-Dentsu, Ltd.)

Keywords:Uncertainty, Explainable AI, Human in the Loop, Text Classification

AI democratization is advancing quickly through the availability of pre-trained NLP models. As a consequence, data scientists as well as subject matter experts (SMEs) are moving towards data-driven AI products to solve their problems. Utilizing these products requires NLP model understanding and continuous accuracy improvement, skills that only data scientists have. However, data scientists are not always involved, so establishing a workflow that allows SMEs to improve model accuracy independently is essential. We therefore focus on debugging NLP models via human feedback, an approach addressed in Explainable AI: humans provide feedback to the system based on the model's explanations. This feedback can take various forms, such as grouping similar samples or correcting invalid explanations. In our case, we aim to improve accuracy through domain-knowledge-aware data augmentation. In this study, we propose an efficient way to reduce the cost of manual data augmentation by exploiting uncertainties. We experimented with text classification tasks and verified that human feedback effectively improves model accuracy, and that introducing uncertainties speeds up augmentation and improves data quality.
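
As a rough illustration of the uncertainty-driven selection step described above, the minimal Python sketch below ranks candidate texts by predictive entropy so that SMEs spend their augmentation effort on the samples the classifier is least certain about. The function names, the entropy criterion, and the use of NumPy are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def predictive_entropy(probs):
        # Entropy of each row of class probabilities; higher means more uncertain.
        eps = 1e-12
        return -(probs * np.log(probs + eps)).sum(axis=1)

    def select_uncertain_samples(texts, probs, k=50):
        # Return the k candidate texts the model is least certain about,
        # so human augmentation effort is focused where it helps most.
        order = np.argsort(-predictive_entropy(probs))  # most uncertain first
        return [texts[i] for i in order[:k]]

    # Usage sketch: probs is an (n_samples, n_classes) array of predicted class
    # probabilities from any text classifier; the selected texts are shown to
    # subject matter experts, who write domain-knowledge-aware variants that
    # are added back to the training set.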
