[3Win5-12] Adversarial Benchmark for Evaluating Stereotypes in Japanese Culture
Keywords: NLP, LLM, Fairness
In bias evaluation of large language models (LLMs), non-English-speaking regions often rely on translated English datasets. However, such translated datasets are grounded in Western cultural norms and fail to fully reflect the ethical values and social norms of other cultural contexts. In this study, we construct JUBAKU, an adversarial benchmark designed to evaluate biases specific to Japanese culture. We manually create dialogue data that elicits biases in LLMs and use JUBAKU to assess nine Japanese LLMs. The results show that all models perform worse than the random baseline, revealing their vulnerability to biases unique to Japanese culture.