16:40 〜 17:00
[2N4-IS-2c-05] Performance Evaluation of Japanese BERT Model for Intent Classification Using a Chatbot
Keywords: Natural Language Processing, Pretrained Language Model, BERT
Recent advances in natural language processing based on deep learning have been remarkable. Pretrained language models such as BERT, developed by Google, and GPT, developed by OpenAI, have driven this progress.
In this experiment, we compared the performance of a Japanese BERT model, one of the latest natural language processing technologies, with Word2Vec, a conventional method.
We used the LiveDoor news corpus for the experiments. We also built a FAQ chatbot and compared BERT and Word2Vec on the accuracy of their answers to user questions about news articles.
In our experiments, BERT outperformed Word2Vec. We also obtained specific insights into the factors behind BERT's performance and were able to objectively evaluate the Japanese BERT model.
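The abstract does not describe the chatbot's matching mechanism; a common design, sketched below as an assumption, is to embed the user's question and each FAQ entry as vectors and return the answer whose question has the highest cosine similarity. The bag-of-words `embed` function here is only a toy stand-in for the Word2Vec or BERT sentence vectors the paper compares.

```python
import numpy as np

def embed(text, vocab):
    # Toy bag-of-words embedding; in the paper's setting this would be
    # a Word2Vec average or a BERT sentence vector instead.
    return np.array([1.0 if word in text else 0.0 for word in vocab])

def cosine(a, b):
    # Cosine similarity, guarding against zero vectors.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def answer(question, faq, vocab):
    # Return the answer of the FAQ entry whose question is most similar
    # to the user's question in embedding space.
    q_vec = embed(question, vocab)
    best = max(faq, key=lambda entry: cosine(q_vec, embed(entry["q"], vocab)))
    return best["a"]

# Hypothetical toy FAQ data for illustration.
vocab = ["price", "refund", "shipping"]
faq = [
    {"q": "What is the price?", "a": "100 yen"},
    {"q": "How do I get a refund?", "a": "Contact support"},
]
print(answer("Tell me about the refund policy", faq, vocab))  # → Contact support
```

Swapping `embed` for real Word2Vec or BERT vectors while keeping the same retrieval loop is what makes a side-by-side accuracy comparison like the one described here straightforward.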