1:40 PM - 2:00 PM
[2E4-OS-4a-01] Explicability as an AI Principle: Technology and Ethics in Cooperation
Keywords: AI Ethics, Explicability, Explainability, AI Strategy
This paper categorizes current approaches to AI ethics into four perspectives and briefly summarizes each: (1) case studies and technical trend surveys, (2) AI governance, (3) technologies for AI alignment, and (4) philosophy. In the second half, we focus on the fourth perspective, the philosophical approach, within the context of applied ethics. In particular, explicability may be the area of AI ethics in which scientists, engineers, and AI developers are expected to engage more actively than in other ethical issues. We propose four fundamental elements for improving AI intelligibility and interpretability: "I/O," "Constraints," "Objectives," and "Architecture." Furthermore, we discuss how the relationship between AI designers' objectives and users' purposes is fundamentally connected to the challenges of AI alignment.