10:20 AM - 10:40 AM
[4E1-OS-12a-05] Habitus-based Preference Evaluation of LLM Agents
Keywords: LLM Agent, Habitus, AI Alignment
LLM agents, which augment LLMs with memory and action modules so they can behave autonomously, are attracting growing attention. AI alignment ensures that AI adheres to human values, but agents should also be able to adjust their values autonomously in response to their environment. In human society, values and behaviors are shaped by *habitus*: unconscious preferences formed through experience. The sociologist Bourdieu classified such preferences into the "naive" (practical) and the "pure" (aesthetic), and showed that they are closely tied to occupation. This study explores how agents' behaviors and experiences in a virtual environment shape their values. Six agents with different occupations lived in a virtual world, and interviews about their hobbies and preferences revealed that their preferences evolved beyond their initial occupational biases. The findings suggest that AI agents, like humans, can develop new values through experience, highlighting the importance of incorporating habitus into AI design to create more adaptive, human-like agents.
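The sketch below is not the authors' implementation; it is a minimal, hypothetical illustration of the kind of setup the abstract describes: agents seeded with an occupation, accumulating experiences in a shared virtual world, and later interviewed about their preferences. The `query_llm` placeholder stands in for whatever LLM API is actually used.

```python
# Illustrative sketch only (assumed design, not the paper's code):
# occupation-conditioned LLM agents that accumulate experiences
# in a virtual world and are then interviewed about their preferences.

from dataclasses import dataclass, field
from typing import List


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion LLM API."""
    return "(LLM response)"


@dataclass
class Agent:
    name: str
    occupation: str                                   # habitus seed: initial occupational bias
    memory: List[str] = field(default_factory=list)   # experiences gathered in the virtual world

    def act(self, situation: str) -> str:
        # Behavior is conditioned on occupation plus recent experiences (memory).
        prompt = (
            f"You are {self.name}, a {self.occupation}.\n"
            f"Recent experiences: {self.memory[-5:]}\n"
            f"Situation: {situation}\nWhat do you do?"
        )
        action = query_llm(prompt)
        self.memory.append(f"{situation} -> {action}")  # experience feeds back into later behavior
        return action

    def interview(self, question: str) -> str:
        # Post-hoc interview used to probe how preferences have shifted.
        prompt = (
            f"You are {self.name}, a {self.occupation}, reflecting on your life so far.\n"
            f"Experiences: {self.memory}\n"
            f"Interview question: {question}"
        )
        return query_llm(prompt)


if __name__ == "__main__":
    agents = [Agent("A", "farmer"), Agent("B", "art curator")]
    for day in range(3):                               # a few simulated days
        for agent in agents:
            agent.act(f"Day {day}: a free evening in the village")
    for agent in agents:
        # Answers could then be coded as "naive" (practical) or "pure" (aesthetic) preferences.
        print(agent.name, agent.interview("What are your hobbies, and why do you enjoy them?"))
```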