9:20 AM - 9:40 AM
[2L1-GS-2-02] Adaptation to a question answering task using GPT-2
Keywords: Natural Language Processing, Pretrained Language Model, GPT-2
In recent years, natural language processing based on deep learning has advanced remarkably, driven by models such as BERT, developed by Google, and the GPT-x series, developed by OpenAI. Research now addresses not only simple tasks such as sentence classification but also generative tasks such as text generation and summarization. In this experiment, we pretrained a GPT-2 model and fine-tuned it for a question-answering task in order to verify whether GPT-2 can be applied to question-answering chatbots. For fine-tuning, we used FAQ data from a life insurance company. As a result, we obtained natural answers for about 80% of the test data and ideal answers for about 60%. We believe this mechanism can be used to build a question-answering system with an approach that differs from rule-based systems.
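The abstract does not give implementation details, but the fine-tuning step it describes can be sketched with the Hugging Face transformers library. The snippet below is a minimal illustration, assuming FAQ pairs stored in a JSON file; the file name `faq_pairs.json`, the "Question:/Answer:" prompt format, and the hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: fine-tune GPT-2 on FAQ question-answer pairs as a
# language-modeling task. Assumes Hugging Face transformers and PyTorch.
import json

import torch
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)


class FAQDataset(torch.utils.data.Dataset):
    """Wraps each question-answer pair as one language-modeling example."""

    def __init__(self, path, tokenizer, max_length=256):
        with open(path, encoding="utf-8") as f:
            pairs = json.load(f)  # hypothetical format: [{"question": ..., "answer": ...}, ...]
        texts = [
            f"Question: {p['question']}\nAnswer: {p['answer']}{tokenizer.eos_token}"
            for p in pairs
        ]
        self.encodings = tokenizer(
            texts, truncation=True, max_length=max_length, padding="max_length"
        )

    def __len__(self):
        return len(self.encodings["input_ids"])

    def __getitem__(self, idx):
        return {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}


tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

train_dataset = FAQDataset("faq_pairs.json", tokenizer)  # hypothetical file
# mlm=False gives causal language modeling: labels are shifted input_ids
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-faq",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=train_dataset,
    data_collator=collator,
)
trainer.train()
```

At inference time, a chatbot built this way would feed `"Question: ...\nAnswer:"` to `model.generate()` and return the generated continuation as the answer, which is presumably how candidate answers like those evaluated above would be produced.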