Proceedings Paper

Inpainting Transformer for Anomaly Detection

Journal

IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT II
Volume 13232, Pages 394-406

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-06430-2_33

Keywords

Anomaly detection; Self-attention; Transformer


Abstract

Anomaly detection in computer vision is the task of identifying images that deviate from a set of normal images. A common approach is to train deep convolutional autoencoders to inpaint covered parts of an image and compare the output with the original: trained on anomaly-free samples only, the model is assumed to be unable to reconstruct anomalous regions properly. For anomaly detection by inpainting, we suggest it is beneficial to incorporate information from potentially distant regions. In particular, we pose anomaly detection as a patch-inpainting problem and propose to solve it with a purely self-attention-based approach, discarding convolutions. The proposed Inpainting Transformer (InTra) is trained to inpaint covered patches in a large sequence of image patches, thereby integrating information across large regions of the input image. When trained from scratch, InTra achieves results on par with the current state of the art on the MVTec AD dataset for detection and segmentation, compared with other methods that use no extra training data.
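The inpainting-based detection idea can be sketched in a few lines: split the image into patches, cover each patch in turn, predict it from the remaining patches, and flag patches with large reconstruction error. The sketch below is a toy illustration of this scoring scheme only; the `inpaint_patch` heuristic (mean of the other patches) is a hypothetical stand-in for the paper's trained self-attention transformer, and all function names are ours, not the authors'.

```python
import numpy as np

def to_patches(img, k):
    """Split a grayscale image (H, W) into a grid of non-overlapping k x k patches."""
    h, w = img.shape
    rows, cols = h // k, w // k
    # Result has shape (rows, cols, k, k).
    return img[:rows * k, :cols * k].reshape(rows, k, cols, k).swapaxes(1, 2)

def inpaint_patch(patches, r, c):
    # Stand-in "model": predict the covered patch as the mean of all other
    # patches. InTra instead uses a self-attention transformer trained on
    # anomaly-free images to perform this inpainting step.
    rows, cols = patches.shape[:2]
    mask = np.ones((rows, cols), dtype=bool)
    mask[r, c] = False
    return patches[mask].mean(axis=0)

def anomaly_map(img, k=4):
    """Per-patch reconstruction error: inpaint each covered patch and
    compare with the original; large errors flag anomalous regions."""
    patches = to_patches(img, k)
    rows, cols = patches.shape[:2]
    errs = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            pred = inpaint_patch(patches, r, c)
            errs[r, c] = np.mean((patches[r, c] - pred) ** 2)
    return errs

# Toy example: a flat image with one bright "defect" patch.
img = np.zeros((16, 16))
img[4:8, 8:12] = 1.0  # anomalous region at patch row 1, column 2
errs = anomaly_map(img, k=4)
print(np.unravel_index(errs.argmax(), errs.shape))  # patch containing the defect
```

Because the stand-in predictor only sees the uncovered patches, the defect patch cannot be reconstructed from them and receives the highest error, which is exactly the assumption the paper makes about a model trained on anomaly-free data.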

