Proceedings Paper

Recall@k Surrogate Loss with Large Batches and Similarity Mixup

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.00735

Keywords

-

Funding

  1. OP VVV [CZ.02.1.01/0.0/0.0/16_019/0000765]
  2. Grant Agency of the Czech Technical University in Prague [SGS20/171/OHK3/3T/13]
  3. Project StratDL within the COMET K1 center Software Competence Center Hagenberg
  4. Amazon Research Award
  5. Junior Star GACR [GM 21-28830M]

Abstract

This work focuses on learning deep visual representation models for retrieval by exploring the interplay between a new loss function, the batch size, and a new regularization approach. Direct optimization of an evaluation metric by gradient descent is not possible when the metric is non-differentiable, as is the case for recall in retrieval. This work proposes a differentiable surrogate loss for recall. Using an implementation that sidesteps the constraints of GPU memory, the method trains with a very large batch size, which is essential for metrics computed on the entire retrieval database. It is assisted by an efficient mixup regularization approach that operates on pairwise scalar similarities and further increases the batch size virtually. The suggested method achieves state-of-the-art performance on several image retrieval benchmarks when used for deep metric learning. For instance-level recognition, the method outperforms similar approaches that train with an approximation of average precision.
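
The abstract names two technical components: a differentiable surrogate that relaxes the ranking step inside recall, and a mixup variant applied directly to pairwise scalar similarities. The paper's implementation is not reproduced on this page, so the sketch below is a minimal PyTorch illustration under common assumptions (a sigmoid in place of the Heaviside step in the rank computation); the function names, the temperatures `tau1` and `tau2`, and the `k + 0.5` offset are illustrative choices, not the authors' code.

```python
import torch

def recall_at_k_surrogate(sims, is_pos, k=1, tau1=0.01, tau2=0.05):
    # sims:   (N,) similarities between one query and N database items
    # is_pos: (N,) boolean mask of database items sharing the query's label
    n = sims.numel()
    # Smooth rank of every item: 1 + how many items score above it, with the
    # non-differentiable Heaviside step replaced by a sigmoid of temperature tau1.
    diff = sims.unsqueeze(0) - sims.unsqueeze(1)       # diff[i, j] = s_j - s_i
    off_diag = 1.0 - torch.eye(n, device=sims.device)  # ignore self-comparisons
    ranks = 1.0 + (torch.sigmoid(diff / tau1) * off_diag).sum(dim=1)
    # Smooth indicator that the best-ranked positive falls within the top k;
    # the 0.5 offset centers the sigmoid between ranks k and k + 1.
    recall = torch.sigmoid((k + 0.5 - ranks[is_pos]) / tau2).max()
    return 1.0 - recall                                # loss to minimize

def simix(sims_u, sims_v, lam):
    # Similarity mixup: the dot product is linear, so every similarity to the
    # virtual embedding lam*u + (1-lam)*v is the same convex combination of
    # the similarities to u and to v -- no extra embedding is computed.
    return lam * sims_u + (1.0 - lam) * sims_v
```

Because the dot product is linear, mixing two embeddings with coefficient `lam` yields exactly the mixed similarities returned by `simix`, which is why the virtual batch enlargement is nearly free. A toy call, again purely illustrative:

```python
emb = torch.nn.functional.normalize(torch.randn(101, 128), dim=1)
q, db = emb[0], emb[1:]
labels = torch.randint(0, 10, (100,))
# assumes at least one database item shares the query's label
loss = recall_at_k_surrogate(db @ q, labels == labels[0], k=5)
```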

Authors

Yash Patel, Giorgos Tolias, Jiří Matas
