2022 Annual Conference of the Japanese Society for Artificial Intelligence (36th)

Presentation Information

International Session


[3S3-IS-2e] Machine learning

Thursday, June 16, 2022, 13:30–14:50, Room S (Remote S)

Chair: Akinori Abe (Chiba University)

14:30 〜 14:50

[3S3-IS-2e-04] Robustifying Vision Transformer Without Retraining From Scratch Using Attention Based Test-Time Adaptation

〇Takeshi Kojima1, Yusuke Iwasawa1, Yutaka Matsuo1 (1. Graduate School of Engineering, The University of Tokyo)

Regular

Keywords: Vision Transformer, Test-Time Adaptation, Self-Attention

Vision Transformer (ViT) is becoming increasingly popular in the field of image processing. This study aims to improve robustness against unknown perturbations without retraining the ViT model from scratch. Since our approach does not alter the training phase, it avoids repeating the computationally heavy pre-training of ViT. Specifically, we use test-time adaptation, in which the model corrects its own predictions during inference. We first show that an existing test-time adaptation method (Tent), previously validated only for CNN models, is also applicable to ViT with proper parameter tuning and gradient clipping. However, we observed that Tent sometimes fails catastrophically, especially under severe perturbations. To stabilize the optimization, we propose a new loss function called Attent, which minimizes the distributional difference in attention entropy between the source and the target. Image classification experiments on CIFAR-10-C, CIFAR-100-C, and ImageNet-C show that both Tent and Attent are effective across a wide variety of corruptions. The results also show that combining Attent with Tent further improves classification accuracy on corrupted data.
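The Tent objective the abstract builds on (minimizing prediction entropy on unlabeled test data over a small set of affine parameters, with gradient clipping for stability) can be sketched on a toy linear classifier. This is a minimal illustration, not the paper's implementation: the feature dimensions, learning rate, and finite-difference gradients here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" classifier: 5-d features -> 3 classes (weights stay frozen).
W = rng.normal(size=(5, 3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_entropy(gamma, beta, X):
    """Mean prediction entropy; gamma/beta stand in for norm-layer affine params."""
    probs = softmax((X * gamma + beta) @ W)
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=-1).mean())

def tent_adapt(X, steps=50, lr=0.05, clip=1.0, eps=1e-4):
    """Tent-style adaptation: entropy minimization on an unlabeled test batch X."""
    dim = X.shape[1]
    params = np.concatenate([np.ones(dim), np.zeros(dim)])  # [gamma, beta]
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(params.size):
            d = np.zeros_like(params)
            d[i] = eps
            hi = mean_entropy(*np.split(params + d, 2), X)
            lo = mean_entropy(*np.split(params - d, 2), X)
            grad[i] = (hi - lo) / (2 * eps)  # central-difference gradient
        norm = np.linalg.norm(grad)
        if norm > clip:  # gradient clipping, as the abstract recommends for ViT
            grad *= clip / norm
        params = params - lr * grad
    return np.split(params, 2)

X_test = rng.normal(size=(32, 5))  # stands in for a shifted (corrupted) batch
before = mean_entropy(np.ones(5), np.zeros(5), X_test)
gamma, beta = tent_adapt(X_test)
after = mean_entropy(gamma, beta, X_test)
print(before > after)  # entropy decreases after adaptation
```

In the actual ViT setting, the adapted parameters would be the LayerNorm scale and shift inside each transformer block, updated by backpropagation rather than finite differences; only the shape of the objective is the same.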
