JSAI2025

Presentation information

Organized Session


[1L3-OS-34] OS-34

Tue. May 27, 2025 1:40 PM - 3:20 PM Room L (Room 1007)

Organizers: Yusuke Hayashi (AI Alignment Network), Kotaro Sakamoto (The University of Tokyo), Akifumi Wachi (LINE Yahoo), Kenshi Abe (CyberAgent), Tetsuro Morimura (CyberAgent)

2:20 PM - 2:40 PM

[1L3-OS-34-03] Learning in Periodic Zero-Sum Games

Synchronization Triggers Divergence from Nash Equilibrium

〇Yuma Fujimoto1,2,3, Kaito Ariu1, Kenshi Abe1,4 (1. CyberAgent, 2. University of Tokyo, 3. Soken University, 4. University of Electro-Communications)

[Online]

Keywords: Multi-Agent Learning, Learning in Games, Nash Equilibrium

Learning in zero-sum games studies situations where multiple agents competitively learn their strategies. In such multi-agent learning, the strategies often cycle around their optimum, i.e., the Nash equilibrium. When the game itself varies periodically (a "periodic" game), however, the Nash equilibrium generally moves over time. How learning dynamics behave in such periodic games is of interest but remains unclear. Interestingly, we discover that the behavior depends strongly on the relationship between two speeds: the speed at which the game changes and the speed at which the players learn. When these two speeds synchronize, the learning dynamics diverge and their time-average does not converge. Otherwise, the learning dynamics trace complicated cycles, but their time-average converges. We prove that this behavior occurs under some assumptions introduced for the dynamical-systems analysis, and our experiments show that it persists even when these assumptions are removed. This study discovers a novel phenomenon, synchronization, and provides insight widely applicable to learning in periodic games.
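To make the setting concrete, below is a minimal, hypothetical sketch (an illustrative assumption, not the authors' actual model or method): two players run projected gradient ascent/descent in a 2x2 matching-pennies-like zero-sum game whose payoffs oscillate with a chosen period, so one can vary the game's speed relative to the learning rate eta and inspect whether the trajectory stays bounded and whether its time-average settles.

```python
import numpy as np

# Hypothetical toy setup (assumption of this sketch, not the authors' formulation):
# two players use projected gradient ascent/descent in a 2x2 zero-sum game whose
# payoffs oscillate with a given period, so the Nash equilibrium moves over time.

def payoff_matrix(t, period, amplitude=0.5):
    """Matching-pennies-like payoff matrix whose diagonal oscillates in time."""
    b = amplitude * np.sin(2.0 * np.pi * t / period)
    return np.array([[1.0 + b, -1.0],
                     [-1.0, 1.0 - b]])

def simulate(period, steps=100_000, eta=0.01):
    """x, y = probability of the first action for the row / column player."""
    x, y = 0.6, 0.5                             # start slightly off equilibrium
    traj = np.empty((steps, 2))
    for t in range(steps):
        A = payoff_matrix(t * eta, period)      # the game varies in "learning time"
        px = np.array([x, 1.0 - x])
        py = np.array([y, 1.0 - y])
        gx = (A @ py)[0] - (A @ py)[1]          # row player's payoff gradient in x
        gy = (px @ A)[0] - (px @ A)[1]          # column player's loss gradient in y
        x = float(np.clip(x + eta * gx, 0.0, 1.0))  # ascent for the maximizer
        y = float(np.clip(y - eta * gy, 0.0, 1.0))  # descent for the minimizer
        traj[t] = (x, y)
    return traj

# Sweep a few periods of the game variation and compare (i) the spread of the
# late trajectory and (ii) whether its overall time-average settles; this is the
# contrast the abstract draws between synchronized and non-synchronized regimes.
for period in (2.0, 10.0, 50.0):
    traj = simulate(period)
    print(f"period={period:5.1f}  late spread={traj[-10_000:].std(axis=0)}  "
          f"time-average={traj.mean(axis=0)}")
```

The payoff oscillation, the projected-gradient learner, and the specific periods swept are all illustrative choices; the abstract's claim concerns the general relationship between the game's speed of change and the players' learning speed, not this particular parameterization.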
