2:00 PM - 2:20 PM
[1G3-GS-1-04] Category Theory Perspective on Deep Learning that Preserves Mathematical Structures
Keywords: geometric deep learning, deep physics modeling, category theory, group theory
The main reason why deep learning works well on real-world data is that it is designed to preserve known mathematical structures of target systems, rather than that it is a universal approximator. For example, convolutional neural networks are designed to be translation invariant (i.e., symmetric to translation), which means that the extracted features do not depend on the position of the object in the image. The same goes for graph neural networks, which are permutation invariant. In recent years, neural networks with such symmetries have come to be called geometric deep learning, and they are being interpreted as natural transformations in category theory. I demonstrate that neural networks that learn the dynamics of physical phenomena, i.e., deep physical models, can also be interpreted as natural transformations. More broadly, I introduce deep learning designed on the basis of mathematical structures and discuss its interpretation from the viewpoint of category theory.
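The translation symmetry mentioned above can be checked concretely: for a convolution with circular padding, shifting the input and then convolving gives the same result as convolving and then shifting (equivariance). The following is a minimal sketch of that check in plain NumPy; the function name `circular_conv1d` and the example signal are illustrative choices, not part of the talk.

```python
import numpy as np

def circular_conv1d(x, k):
    """1-D circular cross-correlation: the padding choice under which
    translation equivariance holds exactly, with no boundary effects."""
    n = len(x)
    return np.array([sum(x[(i + j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

x = np.array([0., 1., 2., 3., 0., 0.])   # toy input signal
k = np.array([1., -1., 0.5])             # toy convolution kernel

# Shift the input by 2, then convolve ...
shift_then_conv = circular_conv1d(np.roll(x, 2), k)
# ... versus convolve first, then shift the feature map by 2.
conv_then_shift = np.roll(circular_conv1d(x, k), 2)

# Equivariance: the feature map moves together with the input.
assert np.allclose(shift_then_conv, conv_then_shift)
```

In the categorical reading sketched in the abstract, this commuting of "shift" with "convolve" is exactly the naturality square of a natural transformation: the convolution layer commutes with the group action on inputs and outputs.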