Presentation information

Organized Session

[2Q4-OS-13a] OS-13 (1)

Wed. Jun 10, 2020 1:50 PM - 2:50 PM Room Q (jsai2020online-17)

Shinichi Fukuzumi (RIKEN), Osamu Sakura (The University of Tokyo), Yuma Matsuda (アイキュベータ LLC)

1:50 PM - 2:10 PM

[2Q4-OS-13a-01] Can AI Discriminate in a Morally Bad Way?

Consideration on the Case of COMPAS

〇Haruka Maeda¹˒² (1. Graduate School of the University of Tokyo, 2. RIKEN Center for Advanced Intelligence Project)

Keywords: discrimination, COMPAS, criminal risk assessment, AI, algorithm

The aim of this paper is to explain how algorithms can discriminate against humans in a morally bad way. Discrimination by algorithmic systems has become an issue addressed in many ethical guidelines for Artificial Intelligence. However, these guidelines say little about the nature of such discrimination or what makes it bad. In addition, existing theories of discrimination presuppose that the discriminator is a responsible individual subject. Given that machine learning (one type of algorithm) is known for its unpredictable behavior, the approach of conceiving of the algorithm itself as a subject can be useful. Hence, I analyze the case of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, a typical example of an unintentionally discriminatory automated program, using Hellman's account, which identifies the badness of discrimination in the actor's behavior. This analysis provides a way to assess the degree of bias in discriminatory decisions made by algorithms.
