JSAI2023

Presentation information

General Session » Poster session

[3Xin4] Poster session 1

Thu. Jun 8, 2023 1:30 PM - 3:10 PM Room X (Exhibition hall B)

[3Xin4-70] Encoder-Decoder Language Models for Graphs

〇Yui Uehara1, Tatsuya Ishigaki1, Yusuke Miyao2,1, Ichiro Kobayashi3,1, Hiroya Takamura1 (1. National Institute of Advanced Industrial Science and Technology (AIST), 2. The University of Tokyo, 3. Ochanomizu University)

Keywords: Pretrained Language Models, Graphs, Text Generation

Graph-to-text tasks, in which sentences are generated from graphs, have been considered challenging because of the complex structure of the input. Recently, however, large-scale pre-trained encoder-decoder language models such as T5 and BART have been shown to be highly effective for these tasks. Applying such pre-trained language models to graph-to-text nevertheless requires converting the input graph into a text sequence, and this conversion currently relies on heuristics devised individually for each task. We therefore propose an encoder-decoder language model for graph-to-text whose encoder handles graph structures directly. We evaluate the proposed model on existing graph-to-text datasets and discuss its advantages and remaining challenges. We also examine the effect of additional pre-training on a pseudo-dataset automatically generated from a large graph dataset.
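For context, the task-specific heuristics mentioned above typically linearize the graph's (head, relation, tail) triples into a flat token sequence that a model such as T5 can consume. The following is a minimal sketch of that baseline, not of the proposed graph-structured encoder; the <H>/<R>/<T> markers, the t5-small checkpoint, and the example triples are illustrative assumptions rather than details from the paper.

# Minimal sketch of the heuristic linearization baseline: flatten a
# graph's (head, relation, tail) triples into a text sequence and feed
# it to a pre-trained encoder-decoder LM. The <H>/<R>/<T> markers and
# the example graph are illustrative assumptions, not the authors' scheme.
from transformers import T5ForConditionalGeneration, T5Tokenizer

def linearize(triples):
    """Flatten (head, relation, tail) triples into one input string."""
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

triples = [
    ("Alan Turing", "birthPlace", "London"),
    ("Alan Turing", "field", "computer science"),
]

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer(linearize(triples), return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

In practice such markers are often registered as special tokens and the model fine-tuned on the target dataset; the proposed model instead replaces this linearization step with an encoder that consumes the graph structure itself.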
