3:00 PM - 3:20 PM
[4E3-OS-11c-05] A Metrics of Interaction Behavior Dataset Coverage for Learning Subjective Quality of Human-Robot Interaction
Keywords: Human-Robot Interaction, Evaluation Criterion, Virtual Reality, Robot Competition
The aim of this study is to propose a reasonable metric of the coverage of interaction behavior datasets for estimating the subjectively evaluated quality of human-robot interaction. The proposed metric quantifies data coverage by the extent to which an evaluation criterion formulated from the collected data can approximate human subjective evaluation results. Using three types of datasets collected with different interaction styles, we compared the proposed metric with the Kullback-Leibler (KL) divergence. The results demonstrate that the proposed metric can reasonably explain the coverage of interaction datasets even in cases where the coverage cannot be reasonably explained by KL divergence. The contributions of this presentation are (1) a reasonable metric of the coverage of HRI datasets for estimating the subjective quality of interaction (QoI) and (2) clarification that evaluation criteria for approximating subjective scores are improved by an interaction dataset with wide coverage.
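The abstract contrasts two ways of judging dataset coverage: a purely distributional comparison (KL divergence) and a criterion-based comparison (how well a model fitted on the collected data reproduces human subjective scores). The following is a minimal sketch of that contrast, not the authors' implementation; the feature dimensionality, the per-dimension Gaussian assumption for KL divergence, the ridge-regularized linear criterion, and all data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def kl_divergence_gaussian(p_samples, q_samples):
    """KL(P || Q) under a simplifying assumption: each feature dimension is
    modeled as an independent 1-D Gaussian estimated from the samples."""
    mu_p, mu_q = p_samples.mean(axis=0), q_samples.mean(axis=0)
    var_p = p_samples.var(axis=0) + 1e-9
    var_q = q_samples.var(axis=0) + 1e-9
    kl = 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)
    return kl.sum()


def coverage_score(train_x, train_y, test_x, test_y):
    """Proxy for the criterion-based idea: fit an evaluation criterion (here a
    ridge-regularized linear model) on the collected dataset, then measure how
    well it approximates held-out human subjective ratings."""
    dim = train_x.shape[1]
    w = np.linalg.solve(train_x.T @ train_x + 1e-3 * np.eye(dim), train_x.T @ train_y)
    pred = test_x @ w
    # Higher correlation with the subjective ratings -> wider effective coverage.
    return np.corrcoef(pred, test_y)[0, 1]


# Hypothetical interaction-behavior features and subjective quality ratings.
d = 8
true_w = rng.normal(size=d)                                  # latent subjective criterion
collected_x = rng.normal(0.0, 1.0, size=(200, d))            # collected interaction dataset
collected_y = collected_x @ true_w + rng.normal(0, 0.3, 200)
eval_x = rng.normal(0.5, 1.2, size=(100, d))                 # interactions seen at evaluation
eval_y = eval_x @ true_w + rng.normal(0, 0.3, 100)

print("KL divergence (distributional gap):", kl_divergence_gaussian(eval_x, collected_x))
print("Coverage score (subjective-score fit):",
      coverage_score(collected_x, collected_y, eval_x, eval_y))
```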