Wu, Boxi; Gu, Jindong; Li, Zhifeng; Cai, Deng; He, Xiaofei and Liu, Wei (2022): Towards Efficient Adversarial Training on Vision Transformers. In: Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XIII. Lecture Notes in Computer Science, vol. 13673. Cham: Springer, pp. 307-325.

Full text not available on 'Open Access LMU'.

Abstract

The Vision Transformer (ViT) has received much attention as a powerful alternative to the Convolutional Neural Network (CNN). Recent work showed that, like CNNs, ViTs are also vulnerable to adversarial examples. An intuitive way to build robust ViTs is to apply adversarial training, since it has been shown to be one of the most effective ways to obtain robust CNNs. However, one major limitation of adversarial training is its heavy computational cost. The self-attention mechanism adopted by ViTs is a computationally intensive operation whose cost grows quadratically with the number of input patches, making adversarial training on ViTs even more time-consuming. In this work, we first comprehensively study fast adversarial training on a variety of vision transformers and illustrate the relationship between efficiency and robustness. Then, to expedite adversarial training on ViTs, we propose an efficient Attention Guided Adversarial Training mechanism. Specifically, exploiting the nature of self-attention, we actively remove certain patch embeddings in each layer with an attention-guided dropping strategy during adversarial training. The slimmed self-attention modules accelerate adversarial training on ViTs significantly. With only 65% of the fast adversarial training time, we match the state-of-the-art results on the challenging ImageNet benchmark.
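The abstract does not spell out the attention-guided dropping strategy, but the core idea can be sketched: use a layer's attention weights to rank patch tokens by importance and keep only the most-attended ones, so subsequent self-attention (quadratic in the token count) runs on a shorter sequence. The PyTorch sketch below is an illustrative assumption, not the paper's implementation; the function name attention_guided_drop, the CLS-based scoring, and the keep_ratio parameter are all hypothetical.

    import torch

    def attention_guided_drop(tokens, attn, keep_ratio=0.65):
        """Drop the least-attended patch embeddings, keeping the CLS token.

        tokens:     (B, N, D) token embeddings; index 0 is the CLS token.
        attn:       (B, H, N, N) self-attention weights from the current layer.
        keep_ratio: fraction of patch tokens to retain (illustrative value,
                    not taken from the paper).
        """
        B, N, D = tokens.shape
        # Average the CLS-to-patch attention over heads as an importance score.
        cls_attn = attn[:, :, 0, 1:].mean(dim=1)            # (B, N-1)
        num_keep = max(1, int(keep_ratio * (N - 1)))
        # Indices of the most-attended patches for each sample.
        keep_idx = cls_attn.topk(num_keep, dim=1).indices   # (B, num_keep)
        keep_idx, _ = keep_idx.sort(dim=1)                  # preserve spatial order
        patches = tokens[:, 1:, :]
        gathered = patches.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
        # Re-attach the CLS token; later layers now operate on fewer tokens,
        # so the quadratic self-attention cost shrinks accordingly.
        return torch.cat([tokens[:, :1, :], gathered], dim=1)

In a training loop, such a drop would sit between transformer blocks during both the adversarial attack step and the weight update, which is where the savings from shorter token sequences compound.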
