12:40 PM - 1:00 PM
[4G2-GS-6-03] Proposal of a Reliability Evaluation Method Considering Consistency in Persona Estimation Using CoT
Keywords: Large Language Model, Persona, Evaluation Metrics
Research on inferring a person’s persona from their text data has been widely conducted. Recently, many methods utilizing large language models (LLMs) have been proposed, and their effectiveness is typically evaluated by measuring inference accuracy against labeled datasets. However, real-world text data is rarely linked directly to the author’s personality traits. Moreover, unlike QA or arithmetic tasks, persona inference offers no external tool, such as a dictionary or calculator, from which a correct answer can be obtained. Reliable metrics are therefore needed for evaluating persona inference in real-world scenarios. This study proposes a method that measures inference consistency by incrementally segmenting text data and using these segments as inputs. Experimental results suggest that the proposed method provides a useful indicator of inference reliability. This work contributes to advancing the analysis of LLM-based persona inference and to verifying its applicability to real-world settings.
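The consistency idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `infer_persona` is a hypothetical stand-in for an LLM-based persona classifier, and the consistency score here is simply the fraction of incremental segments whose inferred label agrees with the inference on the full text.

```python
def infer_persona(text: str) -> str:
    """Toy stand-in for an LLM persona classifier (hypothetical):
    guesses an extraversion label from a crude surface cue."""
    return "extravert" if text.count("!") > 1 else "introvert"

def incremental_segments(sentences: list[str]) -> list[str]:
    """Return growing prefixes of the text: s1, s1+s2, ..., full text."""
    return [" ".join(sentences[: i + 1]) for i in range(len(sentences))]

def consistency_score(sentences: list[str]) -> float:
    """Fraction of incremental segments whose inferred persona
    matches the label inferred from the full text."""
    segments = incremental_segments(sentences)
    full_label = infer_persona(segments[-1])
    agree = sum(infer_persona(seg) == full_label for seg in segments)
    return agree / len(segments)

# Example: the label flips once the final sentence arrives,
# so only 1 of 3 segments agrees with the full-text inference.
score = consistency_score(
    ["I stayed home.", "It was quiet.", "Great party tonight!! So fun!!"]
)
```

A low score would flag an inference whose output is sensitive to how much of the text the model has seen, which is the kind of unreliability the proposed metric is meant to surface.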