10:20 AM - 10:40 AM
[3S1-GS-2-05] Analyzing the Complexity and Hierarchy of Latent Representations in LLM
Keywords: LLM, Representation Learning, Interpretability, Intrinsic Dimension, δ-hyperbolicity
Large Language Models (LLMs) are evolving rapidly and are used in a wide range of practical applications, yet many aspects of their operating principles remain unclear. In this study, we analyze the distribution of latent representations in LLMs to better understand their reasoning process. As analytical tools, we employ intrinsic dimension, which captures the essential dimensionality of a distribution, and δ-hyperbolicity, which measures the degree of hierarchical (tree-like) structure in the distribution. Our experimental results provide insights into the complexity and hierarchy of the reasoning process in LLMs, shedding light on how they internally handle the semantics of natural language. This study contributes not only to improving the interpretability of LLMs but also to informing their architectural design.
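To make the two measures named in the abstract concrete, here is a minimal numpy sketch of standard estimators for them: a TwoNN-style intrinsic-dimension estimate (ratio of second- to first-nearest-neighbor distances) and Gromov δ-hyperbolicity via the four-point condition with a fixed basepoint. This is illustrative only; the paper's actual estimators and implementation details may differ.

```python
import numpy as np

def two_nn_id(X):
    """TwoNN-style intrinsic-dimension estimate for points X of shape (n, d).

    Uses the MLE  d_hat = n / sum_i log(r2_i / r1_i), where r1, r2 are each
    point's first and second nearest-neighbor distances.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # exclude self-distance
    r = np.sort(D, axis=1)[:, :2]        # (r1, r2) per point
    mu = r[:, 1] / r[:, 0]
    return len(X) / np.log(mu).sum()

def delta_hyperbolicity(X, base=0):
    """Gromov delta of the point cloud X, four-point condition, fixed basepoint.

    delta = max_{i,j} [ (max-min product of the Gromov-product matrix with
    itself) - Gromov-product matrix ]; 0 means exactly tree-like.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Gromov products (i|j)_base = 0.5 * (d(i,base) + d(j,base) - d(i,j))
    G = 0.5 * (D[base][:, None] + D[base][None, :] - D)
    # tropical (max-min) matrix product: MM[i, j] = max_k min(G[i, k], G[k, j])
    MM = np.max(np.minimum(G[:, :, None], G[None, :, :]), axis=1)
    return float(np.max(MM - G))
```

Both functions build the full pairwise-distance matrix, and the max-min product broadcast is O(n³) in memory, so this sketch is only suitable for small samples of hidden states; a practical analysis of LLM representations would need a batched or k-NN-based variant. As a sanity check, points sampled on a line should give an intrinsic dimension near 1 and a δ near 0 (the real line is 0-hyperbolic).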