1:50 PM - 2:10 PM
[2Q4-OS-13a-01] Can AI Discriminate in a Morally Bad Way?
Considerations on the Case of COMPAS
Keywords: discrimination, COMPAS, criminal risk assessment, AI, algorithm
The aim of this paper is to explain how algorithms can discriminate against humans in a morally bad way. Discrimination by algorithmic systems is addressed in many ethical guidelines for Artificial Intelligence, but these guidelines say little about the nature of such discrimination or what makes it bad. Moreover, existing theories of discrimination presuppose that the discriminating individual is a responsible subject. Since machine learning (a type of algorithm) is known for its unpredictable behavior, an approach that treats the algorithm itself as a subject is instructive. I therefore analyze the case of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a typical example of an unintentionally discriminatory automated program, using Hellman's account, which locates the badness of discrimination in the actor's behavior. This analysis offers a way to assess the degree of bias in discriminatory decisions made by algorithms.
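The abstract does not specify how the "degree of bias" in an algorithm's decisions would be measured. For concreteness, below is a minimal sketch of one group-fairness check prominent in the public debate around COMPAS: comparing false positive rates of a risk classifier across demographic groups, as in ProPublica's 2016 analysis. The function name, the data layout, the 1-10 decile score scale, and the "high risk" threshold are hypothetical illustrations, not the paper's method.

```python
from collections import defaultdict

def false_positive_rates(records, threshold=5):
    """Compute the false positive rate per group.

    records: iterable of (group, risk_score, reoffended) tuples.
    A defendant counts as a false positive if scored at or above
    the threshold ("high risk") but did not actually reoffend.
    The names, score scale, and threshold here are hypothetical.
    """
    fp = defaultdict(int)   # non-reoffenders flagged high risk
    neg = defaultdict(int)  # all non-reoffenders, per group
    for group, score, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if score >= threshold:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Toy data: (group, decile risk score, reoffended within two years)
sample = [
    ("A", 8, False), ("A", 3, False), ("A", 9, True), ("A", 7, False),
    ("B", 2, False), ("B", 6, True), ("B", 4, False), ("B", 3, False),
]
# A large gap between groups is the kind of disparity ProPublica
# reported for COMPAS, and one operational sense of "degree of bias".
print(false_positive_rates(sample))  # e.g. {'A': 0.667, 'B': 0.0}
```

Note that this is only one of several competing fairness criteria; COMPAS's vendor defended the tool on the grounds of equal predictive parity, and the two criteria generally cannot be satisfied simultaneously when base rates differ, which is part of why a philosophical account of what makes discrimination bad matters.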