Article

Efficient distance metric learning by adaptive sampling and mini-batch stochastic gradient descent (SGD)

Journal

MACHINE LEARNING
Volume 99, Issue 3, Pages 353-372

Publisher

SPRINGER
DOI: 10.1007/s10994-014-5456-x

Funding

  1. National Science Foundation (NSF) [IIS-1251031]
  2. Office of Naval Research (ONR) [N000141410631]

Abstract

Distance metric learning (DML) is an important task with applications in many domains. The high computational cost of DML arises from the large number of variables to be determined and the constraint that a distance metric must be a positive semi-definite (PSD) matrix. Although stochastic gradient descent (SGD) has been successfully applied to improve the efficiency of DML, it can still be computationally expensive: to ensure that the solution is a PSD matrix, it must project the updated distance metric onto the PSD cone at every iteration, an expensive operation. We address this challenge by developing two strategies within SGD, mini-batch and adaptive sampling, that effectively reduce the number of updates (i.e., projections onto the PSD cone). We also develop hybrid approaches that combine the strength of adaptive sampling with that of mini-batch online learning to further improve the computational efficiency of SGD for DML. We prove theoretical guarantees for both the adaptive sampling and mini-batch based approaches, and we conduct an extensive empirical study to verify the effectiveness of the proposed algorithms.
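
Both strategies are concrete enough to sketch. Below is a minimal Python/NumPy illustration, not the authors' exact algorithm: triplet-based DML with a hinge loss under mini-batch SGD, plus an adaptive-sampling variant using a smooth logistic loss. All function and parameter names (project_psd, minibatch_sgd_dml, adaptive_sgd_dml, lr, batch_size) are illustrative assumptions. Both variants demonstrate the point made above: each performs far fewer PSD projections than one-projection-per-example SGD.

import numpy as np

def project_psd(M):
    # Project a symmetric matrix onto the PSD cone by clipping negative
    # eigenvalues to zero; this O(d^3) eigendecomposition is the expensive
    # per-update step the paper aims to avoid.
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def sq_dist(M, a, b):
    # Squared Mahalanobis distance (a - b)^T M (a - b).
    d = a - b
    return d @ M @ d

def minibatch_sgd_dml(triplets, dim, lr=0.1, batch_size=10, epochs=5):
    # Mini-batch SGD with a hinge loss over triplets (x, x_pos, x_neg):
    # we want d_M(x, x_neg) - d_M(x, x_pos) >= 1. Averaging gradients over
    # a batch means one PSD projection per batch, not one per example.
    M = np.eye(dim)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        rng.shuffle(triplets)
        for start in range(0, len(triplets), batch_size):
            batch = triplets[start:start + batch_size]
            G = np.zeros((dim, dim))
            for x, xp, xn in batch:
                # Hinge gradient is nonzero only when the margin is violated.
                if 1.0 + sq_dist(M, x, xp) - sq_dist(M, x, xn) > 0:
                    dp, dn = x - xp, x - xn
                    G += np.outer(dp, dp) - np.outer(dn, dn)
            if G.any():
                M = project_psd(M - lr * G / len(batch))
    return M

def adaptive_sgd_dml(triplets, dim, lr=0.1, steps=1000):
    # Adaptive-sampling variant with the smooth logistic loss
    # log(1 + exp(1 + d_pos - d_neg)), whose derivative lies in (0, 1).
    # Updating only with probability equal to that derivative keeps the
    # gradient estimate unbiased, and every skipped step costs no projection.
    M = np.eye(dim)
    rng = np.random.default_rng(1)
    for _ in range(steps):
        x, xp, xn = triplets[rng.integers(len(triplets))]
        margin = 1.0 + sq_dist(M, x, xp) - sq_dist(M, x, xn)
        p = 1.0 / (1.0 + np.exp(-margin))  # loss derivative = sigmoid(margin)
        if rng.random() < p:
            dp, dn = x - xp, x - xn
            M = project_psd(M - lr * (np.outer(dp, dp) - np.outer(dn, dn)))
    return M

With batch size b, the mini-batch variant performs n/b projections per epoch instead of n; the adaptive variant skips steps in proportion to how weakly each sampled triplet violates the margin, which is where the claimed savings come from.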
