JSAI2024

Presentation information

Poster Session


[3Xin2] Poster session 1

Thu. May 30, 2024 11:00 AM - 12:40 PM Room X (Event hall 1)

[3Xin2-56] Investigation of Hallucination Detection Methods in Black Box Large Language Models

〇Asuka Yamazato1, Kohei Koyama1 (1. ARISE analytics, Inc.)

Keywords: Large Language Model, Hallucination, Generative AI, sentence embeddings

Since the launch of ChatGPT, OpenAI's conversational AI service, its underlying technology, generative AI, has been in the spotlight. One problem with generative AI is 'hallucination', a phenomenon in which the AI generates content that deviates from actual facts. A method named SelfCheckGPT has been proposed to tackle this problem. It detects hallucination based on the similarity of outputs, on the reasoning that if a Large Language Model (LLM) knows a given concept well, its responses will likely be similar and contain consistent facts even when the same prompt is given multiple times. In this study, to verify the performance of SelfCheckGPT for Japanese, we constructed a Japanese quiz question-answering dataset using gpt-3.5-turbo and conducted experiments. The results showed that performance degraded significantly on Japanese quiz questions. Our analysis suggests this may be due to the similar sentence structures across gpt-3.5-turbo outputs, which in turn depends on the method used to obtain sentence embeddings.
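The consistency idea behind SelfCheckGPT can be illustrated with a minimal sketch: sample several responses to the same prompt, embed each one, and treat low mean pairwise similarity as a hallucination signal. The sketch below is an assumption-laden simplification, not the authors' implementation — it uses a toy bag-of-words embedding as a stand-in for a real sentence-embedding model, and the example strings are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy L2-normalized bag-of-words vector. A real system would use a
    # sentence-embedding model here; this is only a stand-in.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}

def cosine(a, b):
    # Cosine similarity of two sparse unit vectors.
    return sum(v * b.get(w, 0.0) for w, v in a.items())

def consistency_score(responses):
    """Mean pairwise cosine similarity of sampled responses.

    High score: responses agree (model likely knows the answer).
    Low score: responses diverge (possible hallucination)."""
    embs = [embed(r) for r in responses]
    pairs = [(i, j) for i in range(len(embs))
                    for j in range(i + 1, len(embs))]
    return sum(cosine(embs[i], embs[j]) for i, j in pairs) / len(pairs)

# Hypothetical samples: consistent answers score high, divergent ones low.
consistent = ["the capital of japan is tokyo",
              "tokyo is the capital of japan",
              "the capital of japan is tokyo"]
divergent = ["the capital is osaka",
             "it is nagoya",
             "the answer is sapporo"]
print(consistency_score(consistent) > consistency_score(divergent))  # True
```

Note that with a word-overlap embedding like this toy one, responses that share sentence structure but differ in the key fact can still score high — the same failure mode the abstract attributes to structurally similar Japanese outputs.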
