6:20 PM - 6:40 PM
[1K5-OS-11b-04] What's wrong with treating a large language model as an agent?
[[Online]]
Keywords: AI ethics, large language models, LaMDA, agency, fictophilia
In June 2022, a Google engineer claimed that the company's Language Model for Dialogue Applications (LaMDA) was sentient and deserved to be treated as an agent. Google rejected the claim, and many supported the decision. There are several reasons to contest the ascription of agency to large language models (LLMs). (1) Intrinsic reasons: LLMs cannot be conscious or intentional. (2) Consequential reasons: treating LLMs as agents can divert public attention from more important issues. (3) Reasons concerning individual well-being: treating LLMs as agents can aggravate an individual’s social isolation. I examine each consideration in turn and argue that it is harder than one might think to decisively conclude that we should not treat LLMs as agents. Drawing on extant debates on the moral status of robots and on fictophilia (love for fictional characters), I will also specify the key issues for assessing the legitimacy of ascribing agency to LLMs.