[1Win4-102] Cooperative Behavior in Large Language Models and Humans
Keywords: LLM, human-AI cooperation, AI alignment
With the rapid development of Large Language Models (LLMs), AI safety and AI alignment are becoming increasingly important. To realize human-AI cooperation, we need to elucidate appropriate and robust mechanisms and principles of cooperation between humans and LLMs. We empirically and game-theoretically evaluate how the cooperative behavior of LLMs differs from that of humans. The stag hunt game is a cooperation game in which both players earn a higher reward by cooperating with each other: each player chooses between hunting a stag or a hare, and the game has two pure-strategy Nash equilibria, (stag, stag) and (hare, hare). Based on the stag hunt game, we compared and evaluated how subjects' strategies changed depending on whether the opponent player was a human, an LLM, or another agent. We also elicited and evaluated subjects' uncertainty about whether the opponent would take the cooperative action, expressed as a confidence level.
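The game structure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's experimental setup: the payoff values here are assumed for demonstration, and the only property that matters is the standard stag hunt ordering (mutual stag hunting pays most, hunting hare is safe regardless of the opponent).

```python
# A minimal sketch of a stag hunt payoff matrix and its pure Nash equilibria.
# Payoff values are illustrative assumptions, not taken from the paper.
ACTIONS = ["stag", "hare"]

# PAYOFF[(row_action, col_action)] = (row player's payoff, column player's payoff)
PAYOFF = {
    ("stag", "stag"): (4, 4),  # mutual cooperation: highest joint reward
    ("stag", "hare"): (0, 3),  # the lone cooperator gets nothing
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),  # safe but lower-reward outcome
}

def is_pure_nash(a, b):
    """A profile (a, b) is a pure Nash equilibrium if neither player
    can gain by unilaterally deviating to another action."""
    row_ok = all(PAYOFF[(a, b)][0] >= PAYOFF[(d, b)][0] for d in ACTIONS)
    col_ok = all(PAYOFF[(a, b)][1] >= PAYOFF[(a, d)][1] for d in ACTIONS)
    return row_ok and col_ok

equilibria = [(a, b) for a in ACTIONS for b in ACTIONS if is_pure_nash(a, b)]
print(equilibria)  # → [('stag', 'stag'), ('hare', 'hare')]
```

The check confirms the two pure equilibria mentioned in the abstract: the payoff-dominant (stag, stag) and the risk-dominant (hare, hare). Which one a player aims for depends on their confidence that the opponent will cooperate, which is exactly the quantity the study elicits from subjects.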