JSAI2025

Presentation information

Organized Session

[1M3-OS-47a] OS-47

Tue. May 27, 2025 1:40 PM - 3:20 PM Room M (Room 1008)

Organizers: Yusuke Kumagae (Hakuhodo DY Holdings), Masaya Mori (Hakuhodo DY Holdings), Yu Hirate (Rakuten Group), So Masuko (Shibaura Institute of Technology), Yoshihiro Kawahara (The University of Tokyo)

3:00 PM - 3:20 PM

[1M3-OS-47a-05] De-Tuning of Large Language Models

〇Koki Iwai1, Yusuke Kumagae1, Yukino Baba2 (1. Hakuhodo DY Holdings Inc., 2. The University of Tokyo)

Keywords:Large Language Model, Benchmark, Persona, Role-Playing

Large Language Models (LLMs) can perform well on unseen tasks and flexibly adapt their behavior according to prompts. Leveraging this characteristic, there have been attempts to assign virtual personas or personalities to LLMs and have them behave accordingly. If we could intentionally limit LLM performance, these virtual personas would likely become more realistic (e.g., a kindergartener persona should be unable to solve integral calculus). This paper addresses such intentional performance degradation of LLMs. Using multiple Japanese benchmark tasks, we report that it is difficult to degrade LLM performance on downstream tasks through prompts alone. We also examine the benchmarks needed to measure such performance degradation.
