Proceedings Paper

TrackFormer: Multi-Object Tracking with Transformers

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.00864

This study proposes TrackFormer, an end-to-end trainable multi-object tracking approach based on an encoder-decoder Transformer architecture. TrackFormer jointly reasons about track initialization, identity, and spatio-temporal trajectories, and introduces a new tracking-by-attention paradigm. Through self- and encoder-decoder attention over global frame-level features, it omits any additional graph optimization or explicit modeling of motion and/or appearance.
The challenging task of multi-object tracking (MOT) requires simultaneous reasoning about track initialization, identity, and spatio-temporal trajectories. We formulate this task as a frame-to-frame set prediction problem and introduce TrackFormer, an end-to-end trainable MOT approach based on an encoder-decoder Transformer architecture. Our model achieves data association between frames via attention by evolving a set of track predictions through a video sequence. The Transformer decoder initializes new tracks from static object queries and autoregressively follows existing tracks in space and time with the conceptually new and identity-preserving track queries. Both query types benefit from self- and encoder-decoder attention on global frame-level features, thereby omitting any additional graph optimization or modeling of motion and/or appearance. TrackFormer introduces a new tracking-by-attention paradigm and, while simple in its design, is able to achieve state-of-the-art performance on the tasks of multi-object tracking (MOT17) and segmentation (MOTS20). The code is available at https://github.com/timmeinhardt/trackformer
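
To make the track-query mechanism concrete, the following is a minimal PyTorch sketch of the tracking-by-attention idea described in the abstract: identity-preserving track queries carried over from the previous frame are decoded jointly with static object queries against the current frame's encoder features. All names, dimensions, and the confidence-based promotion rule below are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of tracking by attention, assuming a DETR-style setup.
# Hypothetical module and parameter names; not the released TrackFormer code.
import torch
import torch.nn as nn


class TrackingByAttentionSketch(nn.Module):
    def __init__(self, d_model=256, num_object_queries=100, num_classes=1):
        super().__init__()
        # Static, learned object queries that propose new tracks in every frame.
        self.object_queries = nn.Parameter(torch.randn(num_object_queries, d_model))
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
        self.class_head = nn.Linear(d_model, num_classes + 1)  # +1 for "no object"
        self.box_head = nn.Linear(d_model, 4)                  # (cx, cy, w, h)

    def forward(self, frame_features, track_queries):
        """frame_features: (1, HW, d_model) encoder output for the current frame.
        track_queries:     (1, T, d_model) identity-preserving queries from frame t-1."""
        object_queries = self.object_queries.unsqueeze(0)             # (1, N, d)
        queries = torch.cat([track_queries, object_queries], dim=1)   # joint decoding
        # Self-attention among queries plus cross-attention to global frame
        # features stands in for explicit motion/appearance models or graphs.
        hidden = self.decoder(queries, frame_features)
        logits = self.class_head(hidden)
        boxes = self.box_head(hidden).sigmoid()
        # Illustrative rule: confident outputs become the track queries for the
        # next frame; low-confidence tracks are dropped as background.
        keep = logits.softmax(-1)[..., :-1].max(-1).values > 0.5
        next_track_queries = hidden[keep].unsqueeze(0)
        return logits, boxes, next_track_queries


# Example usage (random tensors stand in for real encoder features):
model = TrackingByAttentionSketch()
feats = torch.randn(1, 600, 256)    # flattened H*W feature tokens for one frame
tracks = torch.zeros(1, 0, 256)     # no active tracks in the first frame
logits, boxes, tracks = model(feats, tracks)
```

In a full tracker, the returned track queries would be fed back into the same module for the next frame, so data association is carried by attention rather than by a separate matching or graph-optimization step.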
