JSAI2025

Presentation information

[3Q1-OS-5] OS-5

Thu. May 29, 2025 9:00 AM - 10:40 AM Room Q (Room 804)

Organizers: Shuhei Watanabe (Ricoh), Taro Kanno (The University of Tokyo), Yutaka Matsuo (The University of Tokyo), Masato Kusanagi (Ricoh), Toru Harada (Ricoh), Koji Koizumi (Brunel University London)

9:20 AM - 9:40 AM

[3Q1-OS-5-02] Utilizing Large Language Models for the Analysis of Meeting Utterance Data

〇Haruki Kitagawa1, Taro Kanno1, Yingting Chen1, Yuta Yoshino2, Shuhei Watanabe2 (1. The University of Tokyo, 2. Ricoh Co., Ltd.)

Keywords: AI, large language models, meeting, verbal data analysis

Meetings are a representative setting for demonstrating creativity in business, particularly for generating ideas. It is therefore essential to identify the characteristics and skills in participants' behavior that contribute to creativity in meetings. In utterance analysis, coding schemes are used to classify utterances (i.e., assign annotations), and frequency analysis of the results can quantitatively characterize speech. However, annotation is performed manually, which presents challenges in terms of time and cost. Large language models (LLMs) have the potential to address these challenges, but previous studies using LLMs for annotation have been limited, particularly for classifying conversational data such as meeting discussions. This study investigates the feasibility of using ChatGPT, a type of LLM, for annotation in the analysis of meeting utterance data, aiming to improve efficiency while maintaining accuracy. The study compared the coding performance of three patterns: two humans, one human and ChatGPT, and ChatGPT alone. The results revealed that a method in which ChatGPT's annotations are corrected by humans can maintain a certain level of accuracy while reducing working time by 70%.
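The abstract does not state which agreement metric was used to compare the three coding patterns; Cohen's kappa is a common choice for measuring chance-corrected agreement between two annotators on a coding scheme, so a minimal sketch of such a comparison (with hypothetical utterance labels) might look like:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items.

    labels_a / labels_b: equal-length sequences of category labels,
    e.g. the codes two annotators assigned to the same utterances.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed proportion of items on which the two annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1.0:  # both annotators used a single identical label
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical codes for four utterances from two coders
# (e.g. human vs. LLM annotations under the same coding scheme).
human = ["idea", "question", "idea", "agreement"]
model = ["idea", "question", "agreement", "agreement"]
print(f"kappa = {cohens_kappa(human, model):.3f}")  # → kappa = 0.636
```

The same function can be applied pairwise to the human-human, human-LLM, and LLM-only conditions described in the abstract; the labels and category names above are illustrative, not taken from the study's actual coding scheme.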
