Article

Ranked List Loss for Deep Metric Learning

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3068449

Keywords

Measurement; Training; Shape; Image retrieval; Extraterrestrial measurements; Task analysis; Pattern analysis; Deep metric learning; discriminative representation learning; learning to rank; information retrieval

Funding

  1. AnyVision Industrial Research Funding


The objective of deep metric learning is to learn embeddings that capture semantic similarity and dissimilarity among data points. Existing methods suffer from slow convergence due to the large proportion of trivial pairs or triplets in their loss functions. Ranking-motivated structured losses have been proposed to address this, but they have their own limitations. In this work, a novel ranked list loss is proposed to overcome these limitations and achieve state-of-the-art performance on the fine-grained image retrieval task.
The objective of deep metric learning (DML) is to learn embeddings that can capture semantic similarity and dissimilarity information among data points. Existing pairwise or tripletwise loss functions used in DML are known to suffer from slow convergence due to a large proportion of trivial pairs or triplets as the model improves. To improve this, ranking-motivated structured losses have recently been proposed to incorporate multiple examples and exploit the structured information among them. They converge faster and achieve state-of-the-art performance. In this work, we unveil two limitations of existing ranking-motivated structured losses and propose a novel ranked list loss to solve both of them. First, given a query, only a fraction of data points is incorporated to build the similarity structure. Consequently, some useful examples are ignored and the structure is less informative. To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery. The learning setting can be interpreted as few-shot retrieval: given a mini-batch, every example is iteratively used as a query, and the remaining ones compose the gallery to search, i.e., the support set in the few-shot setting. These remaining examples are split into a positive set and a negative set. For every mini-batch, the learning objective of ranked list loss is to make the query closer to the positive set than to the negative set by a margin. Second, previous methods aim to pull positive pairs as close as possible in the embedding space. As a result, the intraclass data distribution tends to be extremely compressed. In contrast, we propose to learn a hypersphere for each class in order to preserve useful similarity structure inside it, which functions as regularisation. Extensive experiments demonstrate the superiority of our proposal in comparison with state-of-the-art methods on the fine-grained image retrieval task.
Our source code is available online: https://github.com/XinshaoAmosWang/Ranked-List-Loss-for-DML.
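The loss described in the abstract can be sketched in NumPy as follows. This is an illustrative reading of the abstract, not the authors' reference implementation (their code is at the GitHub link above); the parameter names `alpha`, `margin`, `temperature`, and `lam` are assumptions chosen for this sketch. Each example serves in turn as the query; same-class gallery points are pulled inside a hypersphere of radius `alpha - margin` (preserving intraclass structure rather than collapsing it), while negatives closer than `alpha` are pushed out, weighted by how strongly they violate the boundary.

```python
import numpy as np

def ranked_list_loss(embeddings, labels, alpha=1.2, margin=0.4,
                     temperature=10.0, lam=1.0):
    """Sketch of a ranked-list-style loss over one mini-batch.

    embeddings: (n, d) array; labels: (n,) integer class labels.
    Positives are pulled inside radius (alpha - margin); non-trivial
    negatives (distance < alpha) are pushed beyond alpha, with weights
    growing exponentially in the size of the violation.
    """
    n = len(labels)
    total = 0.0
    for i in range(n):
        # Euclidean distances from the query to every gallery point
        d = np.linalg.norm(embeddings - embeddings[i], axis=1)
        pos = (labels == labels[i]) & (np.arange(n) != i)
        neg = labels != labels[i]
        # Positive part: pull same-class points inside the hypersphere
        loss_p = np.maximum(0.0, d[pos] - (alpha - margin)).sum()
        # Negative part: mine violators (d < alpha) and weight them
        viol = np.maximum(0.0, alpha - d[neg])
        w = np.exp(temperature * viol) * (viol > 0)
        loss_n = (w * viol).sum() / max(w.sum(), 1e-12)
        total += loss_p + lam * loss_n
    return total / n
```

With well-separated classes the loss is zero; bringing different-class points closer than `alpha` makes it positive, which is the behaviour the margin-based objective above calls for.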

