JSAI2025

Presentation information

Organized Session


[3H4-OS-10b] OS-10

Thu. May 29, 2025 1:40 PM - 3:00 PM Room H (Room 1003)

Organizers: 岩見 真吾 (Nagoya University), 藤生 克仁 (The University of Tokyo), 中村 己貴子 (Chugai Pharmaceutical), 岡本 有司 (Kyoto University), 小島 諒介 (Kyoto University), 川上 英良 (Chiba University), 本田 直樹 (Nagoya University)

2:40 PM - 3:00 PM

[3H4-OS-10b-04] Fine-tuning Large Language Model with Epilepsy Medical Knowledge

〇Xuyang ZHAO1,2, Qibin Zhao3, Toshihisa Tanaka4 (1. RIKEN Information R&D and Strategy Headquarters, 2. Chiba University, 3. RIKEN Center for Advanced Intelligence Project, 4. Tokyo University of Agriculture and Technology)

Keywords:Large language model, Epilepsy

Large language models (LLMs) have demonstrated powerful performance across a variety of fields. Fine-tuning is a common method for further improving an LLM's performance in a specific field. In the medical domain, LLMs are often fine-tuned on general medical knowledge, but when such a model is faced with a specific disease, its responses are not completely accurate and can sometimes be entirely irrelevant. In this work, we focus on a specific disease, epilepsy, and fine-tune a pre-trained model using data from the epilepsy field. The epilepsy data include basic knowledge of the disease, conventional treatment plans, commonly used drugs, and precautions in daily life. In the experiments, a variety of evaluation methods are used to compare the fine-tuned model with the pre-trained model. The results show that the performance of the fine-tuned model is greatly improved.
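The abstract describes fine-tuning on epilepsy data spanning basic knowledge, treatment plans, drugs, and daily-life precautions. A minimal sketch of how such data might be packaged as instruction-tuning records is shown below; the field names, category labels, and sample entries are illustrative assumptions, not the authors' actual dataset or schema.

```python
import json

def to_instruction_record(question: str, answer: str, category: str) -> dict:
    """Format one epilepsy QA pair as an instruction-tuning record.

    The instruction/output schema is a common convention used by many
    fine-tuning toolkits; it is assumed here, not taken from the paper.
    """
    return {
        "instruction": question,
        "output": answer,
        "category": category,
    }

# Hypothetical samples, one per data category mentioned in the abstract.
samples = [
    to_instruction_record(
        "What is epilepsy?",
        "Epilepsy is a neurological disorder characterized by recurrent seizures.",
        "basic_knowledge",
    ),
    to_instruction_record(
        "What precautions should epilepsy patients take in daily life?",
        "Keep a regular sleep schedule, take medication as prescribed, "
        "and avoid known seizure triggers.",
        "daily_life_precautions",
    ),
]

# Serialize to JSONL, a format accepted by many supervised fine-tuning pipelines.
jsonl = "\n".join(json.dumps(r, ensure_ascii=False) for r in samples)
print(len(jsonl.splitlines()))  # one line per training record
```

Such JSONL files are typically fed to a supervised fine-tuning loop over the pre-trained model; the evaluation step then compares the tuned and base models on held-out disease-specific questions.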
