4:00 PM - 4:20 PM
[3F5-ES-2-02] An Evaluation Method for Attention-based Dialog System
Keywords: evaluation metric, dialog system, contextualised embedding
Dialog systems are embedded in smartphones and Artificial Intelligence (AI) speakers and are widely used through text and speech. One challenge in building a human-like dialog system is the lack of a standard automatic evaluation metric. Existing metrics such as BLEU, METEOR, and ROUGE have been applied to evaluating dialog systems, but they are biased and correlate poorly with human judgements of response quality. RUBER, by contrast, not only trains the relatedness between a given query and the reply generated by the dialog system, but also measures the similarity between the generated reply and the ground truth; it showed higher correlation with human judgements than BLEU and ROUGE. Building on RUBER, we explore replacing static word embeddings with BERT contextualised word embeddings to obtain a better evaluation metric. Experimental results show that our BERT-based metric correlates better with human judgement than RUBER.
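As a rough illustration of the RUBER-style referenced score the abstract builds on, the sketch below pools token embeddings into sentence vectors and takes their cosine similarity. In the paper's setting the token vectors would be BERT contextualised embeddings; here random vectors stand in, and the pooling choice and function names are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def pool(token_embeddings):
    # Max-pool token embeddings into one sentence vector
    # (a common RUBER-style choice; the paper's exact pooling is assumed here).
    return token_embeddings.max(axis=0)

def referenced_score(reply_emb, truth_emb):
    # Cosine similarity between the generated reply and the ground truth.
    r, t = pool(reply_emb), pool(truth_emb)
    return float(np.dot(r, t) / (np.linalg.norm(r) * np.linalg.norm(t)))

# Toy stand-ins: in practice these would be per-token BERT embeddings.
rng = np.random.default_rng(0)
reply = rng.normal(size=(5, 8))   # 5 tokens, 8-dimensional vectors
truth = rng.normal(size=(6, 8))   # 6 tokens, 8-dimensional vectors
score = referenced_score(reply, truth)
```

A full metric would combine this referenced score with an unreferenced, trained query-reply relatedness score, as in RUBER.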