6:40 PM - 7:00 PM
[2E6-GS-3-05] Common Space Learning with Probability Distributions for Multi-Modal Knowledge Graph
[[Online]]
Keywords: knowledge graph, entity alignment, multi-modal learning
Knowledge graphs are knowledge representations that focus on relationships among objects. They have been used in question answering systems and information retrieval. As datasets grow larger and leverage multi-modal representations, it becomes increasingly important to complement missing or insufficient information in a single knowledge graph using other knowledge graphs that carry additional information such as images and attribute values. Entity alignment is the task of finding entities in different knowledge graphs that refer to the same object, and multi-modal entity alignment (MMEA) has been proposed for aligning entities across multi-modal knowledge graphs. However, MMEA does not adequately account for the granularity of each piece of information, since it represents each piece of information obtained from images, relations, and attribute values as a single point in a common space. In this study, we propose a new method that expresses the granularity of each piece of information as the spread of a probability distribution. The proposed method outperforms MMEA on the entity alignment task between two multi-modal knowledge graphs.
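The core idea, embedding each piece of information as a distribution whose spread encodes its granularity, can be sketched as follows. This is an illustrative toy only: the diagonal-Gaussian parameterization, the 2-Wasserstein distance, and all names here are assumptions, not the authors' actual formulation.

```python
import numpy as np

def w2_gaussian(mu1, var1, mu2, var2):
    """Squared 2-Wasserstein distance between two diagonal Gaussians:
    W2^2 = ||mu1 - mu2||^2 + ||sqrt(var1) - sqrt(var2)||^2."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2)

rng = np.random.default_rng(0)
dim = 4
# Each entity carries a mean (its location in the common space) and a
# variance (its spread, i.e. the granularity of the underlying information).
entity_a = {"mu": rng.normal(size=dim), "var": np.full(dim, 0.1)}   # fine-grained
entity_b = {"mu": entity_a["mu"] + 0.01, "var": np.full(dim, 2.0)}  # coarse-grained

# Close means but very different spreads still yield a large distance,
# so granularity influences which candidate entities get aligned.
d = w2_gaussian(entity_a["mu"], entity_a["var"], entity_b["mu"], entity_b["var"])
```

Alignment would then pair each entity in one graph with its nearest distribution in the other, rather than its nearest point embedding.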