Article

Optimizing Partial Area Under the Top-k Curve: Theory and Practice

Journal

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2022.3199970

Keywords

Measurement; Semantics; Benchmark testing; Training; Loss measurement; Fasteners; Upper bound; Machine learning; label ambiguity; Top-k error; AUTKC optimization


Existing literature on top-$k$ optimization mainly focuses on the optimization method of the top-$k$ objective, but neglects the limitations of the metric itself. To address this issue, a novel metric named partial Area Under the Top-$k$ Curve (AUTKC) is proposed, which has better discrimination ability and does not allow irrelevant labels to appear in the top list. Experimental results on benchmark datasets validate the effectiveness of the proposed framework.
Top-$k$ error has become a popular metric for large-scale classification benchmarks due to the inevitable semantic ambiguity among classes. Existing literature on top-$k$ optimization generally focuses on the optimization method of the top-$k$ objective, while ignoring the limitations of the metric itself. In this paper, we point out that the top-$k$ objective lacks enough discrimination such that the induced predictions may give a totally irrelevant label a top rank. To fix this issue, we develop a novel metric named partial Area Under the Top-$k$ Curve (AUTKC). Theoretical analysis shows that AUTKC has a better discrimination ability, and its Bayes optimal score function could give a correct top-$K$ ranking with respect to the conditional probability. This shows that AUTKC does not allow irrelevant labels to appear in the top list. Furthermore, we present an empirical surrogate risk minimization framework to optimize the proposed metric. Theoretically, we present (1) a sufficient condition for Fisher consistency of the Bayes optimal score function; (2) a generalization upper bound which is insensitive to the number of classes under a simple hyperparameter setting. Finally, the experimental results on four benchmark datasets validate the effectiveness of our proposed framework.
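To make the metric concrete: a minimal sketch of an AUTKC-style evaluation, under the assumption that the partial area under the top-$k$ curve is approximated by averaging top-$k$ accuracy over $k = 1, \dots, K$. The function names (`top_k_accuracy`, `autkc`) are illustrative, not the authors' implementation; see the paper for the exact definition and the surrogate losses used for training.

```python
def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scored classes."""
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the k classes with the largest scores for this sample.
        topk = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        if label in topk:
            hits += 1
    return hits / len(labels)

def autkc(scores, labels, K):
    """Illustrative AUTKC: mean of top-k accuracies for k = 1..K,
    i.e., a discrete partial area under the top-k curve."""
    return sum(top_k_accuracy(scores, labels, k) for k in range(1, K + 1)) / K

# Toy example: two samples over three classes.
scores = [[0.5, 0.3, 0.2],   # true label 0, ranked 1st
          [0.1, 0.6, 0.3]]   # true label 2, ranked 2nd
labels = [0, 2]
print(top_k_accuracy(scores, labels, 1))  # 0.5
print(autkc(scores, labels, 2))           # (0.5 + 1.0) / 2 = 0.75
```

Unlike a single top-$K$ accuracy, which treats all ranks within the top $K$ equally, this average rewards placing the true label higher in the list, which is the discrimination property the abstract highlights.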

