Article

Large-Scale Nonlinear AUC Maximization via Triply Stochastic Gradients

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2020.3024987

Keywords

AUC maximization; random Fourier features; kernel methods

Funding

  1. National Key R&D Program of China [2017YFE0104100, 2016YFE0200400]


In this paper, we propose TSAM, a novel large-scale nonlinear AUC maximization method that combines random Fourier feature approximation with triply stochastic gradient descent. Experimental results demonstrate the scalability and computational efficiency of TSAM while maintaining good generalization performance.
Learning to improve AUC performance for imbalanced data is an important machine learning research problem. Most AUC maximization methods assume that the model function is linear in the original feature space. However, this assumption is not suitable for nonlinearly separable problems. Although some nonlinear AUC maximization methods exist, scaling up nonlinear AUC maximization remains an open question. To address this challenging problem, we propose in this paper a novel large-scale nonlinear AUC maximization method (named TSAM) based on triply stochastic gradient descent. Specifically, we first use random Fourier features to approximate the kernel function. We then use triply stochastic gradients w.r.t. the pairwise loss and the random features to iteratively update the solution. Finally, we prove that TSAM converges to the optimal solution at a rate of O(1/t) after t iterations. Experimental results on a variety of benchmark datasets not only confirm the scalability of TSAM, but also show a significant reduction in computational time compared with existing batch learning algorithms, while retaining similar generalization performance.
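The abstract names the algorithmic ingredients but does not reproduce the paper's pseudocode. Below is a minimal illustrative sketch, not the authors' implementation, of how the three sources of randomness can interact, written in the style of doubly stochastic functional gradients: each iteration samples a random positive example, a random negative example, and a fresh block of random Fourier features. The function names (make_feature, predict, tsam_sketch), the RBF kernel with bandwidth sigma, the pairwise squared loss, and all hyperparameter values are assumptions made for illustration.

```python
import numpy as np

def make_feature(seed, d, D, sigma):
    """Draw one block of D random Fourier features for an RBF kernel
    with bandwidth sigma, so that k(x, y) ~= phi(x) @ phi(y)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / sigma, size=(D, d))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)      # random phases
    return lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

def predict(x, alphas, d, D, sigma):
    """f(x) = sum_t alpha_t . phi_t(x); each feature block is regenerated
    from its iteration index, so only the coefficients are stored."""
    return sum(a @ make_feature(t, d, D, sigma)(x) for t, a in alphas.items())

def tsam_sketch(X_pos, X_neg, T=200, D=32, sigma=1.0, eta=0.5, lam=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    d = X_pos.shape[1]
    alphas = {}  # iteration index -> coefficient vector for that block
    for t in range(T):
        # Three sources of randomness per iteration ("triply stochastic"):
        xp = X_pos[rng.integers(len(X_pos))]  # 1) a random positive example
        xn = X_neg[rng.integers(len(X_neg))]  # 2) a random negative example
        phi = make_feature(t, d, D, sigma)    # 3) a fresh random feature block
        # Pairwise squared loss on the ranking margin f(x+) - f(x-):
        # l = (1 - margin)^2 / 2, so dl/dmargin = -(1 - margin).
        margin = (predict(xp, alphas, d, D, sigma)
                  - predict(xn, alphas, d, D, sigma))
        g = -(1.0 - margin)
        # Functional gradient step: shrink old blocks (L2 regularization)
        # and append a new coefficient block for this iteration's feature.
        for k in alphas:
            alphas[k] *= 1.0 - eta * lam
        alphas[t] = -eta * g * (phi(xp) - phi(xn))
    return alphas
```

Regenerating each feature block from its iteration index trades computation for memory: only the coefficient vectors need to be stored, at the cost of re-evaluating past random features at prediction time. How the actual method balances this trade-off is detailed in the paper itself.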
