JSAI2024

Presentation information

Organized Session


[2T6-OS-5c] OS-5

Wed. May 29, 2024 5:30 PM - 6:30 PM Room T (Room 62)

Organizers: Hiromi Arai (RIKEN AIP), Satoshi Oyama (Nagoya City University), Hisashi Kashima (Kyoto University), Emiko Tsutsumi (The University of Tokyo), Junichiro Mori (The University of Tokyo)

6:10 PM - 6:30 PM

[2T6-OS-5c-03] Privacy-Preserving Data Annotation Automated by Large Multimodal Model

〇Yuki Wakai1, Koh Takeuchi1, Hisashi Kashima1 (1. Kyoto University)

Keywords: Large Multimodal Model, Privacy Preservation, Data Annotation

In recent years, large multimodal models (LMMs) have demonstrated innovative performance in various tasks such as text analysis, transcription, and optical character recognition. However, the data usage policies of LMMs depend on their developers, so there is a potential risk that confidential information could be stored or used as training data. Various LMM applications have been explored in academia and industry, and there is a growing demand for technologies that utilize LMMs while preserving data privacy.
One such application of LMMs is the automation of data annotation tasks. Traditional data annotation is performed manually, which is time-consuming and costly, and the quality of the annotations depends heavily on the abilities of individual annotators. LMMs are therefore expected to serve as faster and more accurate annotation resources.
In our study, we propose a framework that balances annotation accuracy and data privacy preservation in data annotation tasks. Our experiments employed LMMs for image annotation and showed a reduction in privacy leakage risks while maintaining annotation accuracy.
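
The abstract does not detail the framework itself, but a common pattern for privacy-preserving image annotation is to redact sensitive regions locally before querying an external LMM. The Python sketch below illustrates that general idea only; it is not the authors' method, and detect_sensitive_regions and query_lmm_for_labels are hypothetical placeholders for a local detector and an LMM API call.

from PIL import Image, ImageFilter

def detect_sensitive_regions(image):
    """Hypothetical placeholder: return bounding boxes (left, top, right, bottom)
    of regions considered private (faces, ID numbers, handwritten names, ...)."""
    return []  # plug in a local detector here

def redact(image, boxes):
    """Blur each sensitive region so its content never leaves the local machine."""
    out = image.copy()
    for (left, top, right, bottom) in boxes:
        region = out.crop((left, top, right, bottom)).filter(
            ImageFilter.GaussianBlur(radius=12))
        out.paste(region, (left, top, right, bottom))
    return out

def query_lmm_for_labels(image, instruction):
    """Hypothetical placeholder for a call to an external LMM API."""
    raise NotImplementedError

def annotate_privately(path, instruction):
    """Redact an image locally, then request annotations from the LMM."""
    image = Image.open(path)
    safe_image = redact(image, detect_sensitive_regions(image))
    return query_lmm_for_labels(safe_image, instruction)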
