Article

FairMOT: On the Fairness of Detection and Re-identification in Multiple Object Tracking

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 129, Issue 11, Pages 3069-3087

Publisher

SPRINGER
DOI: 10.1007/s11263-021-01513-4

Keywords

FairMOT; Multi-object tracking; One-shot; Anchor-free; Real-time inference

Funding

  1. NSFC [61733007, 61876212]
  2. MSRA Collaborative Research Fund


Summary: Formulating multi-object tracking as multi-task learning of object detection and re-identification (re-ID) in a single network enables joint optimization of the two tasks, but the competition between them must be addressed. The proposed FairMOT method, based on the CenterNet architecture, achieves high accuracy for both detection and tracking through a set of detailed designs validated by empirical studies.
Multi-object tracking (MOT) is an important problem in computer vision with a wide range of applications. Formulating MOT as multi-task learning of object detection and re-ID in a single network is appealing since it allows joint optimization of the two tasks and enjoys high computational efficiency. However, we find that the two tasks tend to compete with each other, which needs to be carefully addressed. In particular, previous works usually treat re-ID as a secondary task whose accuracy is heavily affected by the primary detection task. As a result, the network is biased toward the primary detection task, which is not fair to the re-ID task. To solve the problem, we present a simple yet effective approach termed FairMOT, based on the anchor-free object detection architecture CenterNet. Note that it is not a naive combination of CenterNet and re-ID. Instead, we present a number of detailed designs which, as thorough empirical studies show, are critical to achieving good tracking results. The resulting approach achieves high accuracy for both detection and tracking and outperforms state-of-the-art methods by a large margin on several public datasets. The source code and pre-trained models are released at https://github.com/ifzhang/FairMOT.
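The core idea the abstract describes is that an anchor-free detector predicts a center-point heatmap, and the one-shot tracker reads an identity embedding at each detected center from a parallel embedding map, so detection and re-ID share one forward pass. Below is a minimal, hypothetical sketch of that decode step (the function name, the 3x3 local-maximum suppression, and the plain-list data layout are illustrative assumptions, not the paper's actual implementation, which also regresses box sizes and center offsets):

```python
import math

def extract_center_embeddings(heatmap, embed_map, thresh=0.5):
    """Gather re-ID embeddings at detected object centers (sketch).

    heatmap:   H x W list of center confidences (anchor-free, CenterNet-style)
    embed_map: H x W list of D-dimensional identity embedding vectors
    Returns (y, x, score, unit_embedding) per center above thresh.
    Simplified: the real decoder also applies max-pool NMS on GPU and
    regresses box size/offset for each kept center.
    """
    H, W = len(heatmap), len(heatmap[0])
    detections = []
    for y in range(H):
        for x in range(W):
            score = heatmap[y][x]
            if score < thresh:
                continue
            # keep only local maxima in a 3x3 neighbourhood (simple NMS)
            neighbourhood = [heatmap[yy][xx]
                             for yy in range(max(0, y - 1), min(H, y + 2))
                             for xx in range(max(0, x - 1), min(W, x + 2))]
            if score < max(neighbourhood):
                continue
            emb = embed_map[y][x]
            norm = math.sqrt(sum(v * v for v in emb)) or 1.0
            # L2-normalised embedding, ready for cosine-similarity matching
            detections.append((y, x, score, [v / norm for v in emb]))
    return detections
```

Because the embeddings are sampled exactly at the detected centers of the shared feature map, the re-ID branch is trained and evaluated at the same locations the detector optimizes, which is one way to read the "fairness" between the two tasks.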

