[2Win5-59] Exploring Fairness Across Fine-Grained Attributes in Large Vision-Language Models
Keywords: Large Vision-Language Model, Fairness, Bias
As Large Vision-Language Models (LVLMs), such as GPT-4o, become increasingly prevalent, concerns regarding their fairness have emerged. However, existing research has predominantly focused on demographic attributes such as race and gender, overlooking potential biases across a broader range of attributes.
To bridge this gap, this study analyzes the fairness of LVLMs across finer-grained attributes by constructing a knowledge base of open-set bias attributes using a large language model. Our experiments reveal that LVLMs exhibit biased outputs for various attributes that have not been previously examined. These findings highlight the need to expand bias analysis beyond conventional demographic categories and provide new insights for enhancing the comprehensive fairness of LVLMs.
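The abstract does not specify how bias in LVLM outputs is measured. One common approach is to compare response rates across values of an attribute; the sketch below (entirely hypothetical, not the authors' protocol) computes a simple demographic-parity-style gap over binary favorability judgments of LVLM responses for a fine-grained attribute such as clothing style.

```python
def positive_rate(labels):
    """Fraction of responses judged favorable (1 = favorable)."""
    return sum(labels) / len(labels) if labels else 0.0

def bias_gap(responses_by_group):
    """Max difference in favorable-response rate across attribute groups
    (a simple demographic-parity-style gap; larger means more biased)."""
    rates = [positive_rate(v) for v in responses_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical data: binary favorability judgments of LVLM outputs
# for two values of a fine-grained attribute, e.g. "clothing style".
responses = {
    "formal": [1, 1, 1, 0],  # 0.75 favorable
    "casual": [1, 0, 0, 0],  # 0.25 favorable
}
print(bias_gap(responses))  # 0.5
```

A gap near zero would suggest the model treats the attribute values similarly; the attribute names and judgment scheme here are illustrative assumptions only.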