JSAI2023

Presentation information

International Session

[2U6-IS-1c] Knowledge engineering

Wed. Jun 7, 2023 5:30 PM - 6:50 PM Room U (Online)

Chair: Akinori Abe (Chiba University)

6:30 PM - 6:50 PM

[2U6-IS-1c-04] Adversarial Self-attention Misdirection

Improving vision transformers performance with adversarial pre-training

〇Luiz Henrique Mormille1, Masayasu Atsumi1 (1. Soka Univ.)

[[Online, Work-in-progress]]

Keywords: Vision Transformers, Adversarial Learning, Self-attention

In recent years, Transformers have achieved remarkable results in computer vision tasks, matching or even surpassing those of convolutional neural networks. However, to reach state-of-the-art results, vision transformers rely on large architectures and extensive pre-training on very large datasets. A main reason for this limitation is that vision transformers, whose core is global self-attention computation, inherently lack the inductive biases of convolutions, so training often converges to poorly generalizing local minima. This work presents a new method to pre-train vision transformers, denoted self-attention misdirection. In this pre-training method, an adversarial U-Net-like network pre-processes the input images, altering them with the goal of misdirecting the self-attention computation in the vision transformer. It uses style representations of image patches to generate inputs that are difficult for self-attention learning, leading the vision transformer to learn representations that generalize better on unseen data.
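To make the two ingredients of the abstract concrete — patch-level self-attention and a style representation (Gram matrix) of image patches — the following is a minimal numpy sketch. It is not the authors' implementation: the single attention head, the toy patch sizes, and the simple style-matching perturbation that stands in for the adversarial U-Net are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(emb, Wq, Wk, Wv):
    # single-head scaled dot-product self-attention over patch embeddings
    q, k, v = emb @ Wq, emb @ Wk, emb @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    attn = softmax(scores, axis=-1)        # (n_patches, n_patches), rows sum to 1
    return attn, attn @ v

def gram_style(patch):
    # style representation of one patch: Gram matrix of its channel features
    f = patch.reshape(patch.shape[0], -1)  # (channels, pixels)
    return f @ f.T / f.shape[1]

# toy "image": 4 patches of shape (channels=3, 4, 4), flattened to embeddings
patches = rng.normal(size=(4, 3, 4, 4))
emb = patches.reshape(4, -1)               # (4, 48)
d = emb.shape[1]
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))

attn, out = self_attention(emb, Wq, Wk, Wv)
styles = np.stack([gram_style(p) for p in patches])   # (4, 3, 3)

# stand-in for the adversarial U-Net: nudge patch 0 toward patch 1 so its
# content/style statistics change, disturbing the attention pattern
perturbed = patches.copy()
perturbed[0] += 0.5 * (patches[1] - patches[0])
attn_p, _ = self_attention(perturbed.reshape(4, -1), Wq, Wk, Wv)
shift = np.abs(attn_p - attn).sum()        # how much the attention map moved
```

In the actual method, a learned adversarial network (rather than this hand-written nudge) would generate the perturbation, trained to maximize the disruption of the transformer's self-attention, while the transformer is trained to perform well despite it.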
