Article

Information-Theoretic Semi-Supervised Metric Learning via Entropy Regularization

Journal

Neural Computation
Volume 26, Issue 8, Pages 1717-1762

Publisher

MIT Press
DOI: 10.1162/NECO_a_00614

Funding

  1. MEXT
  2. JST PRESTO program
  3. MEXT KAKENHI [23120004]
  4. FIRST program
  5. Grants-in-Aid for Scientific Research (KAKEN) [26280054]

Abstract

We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given a similarity probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data, following the principle of entropy regularization. For metric learning, entropy regularization improves on manifold regularization by incorporating the dissimilarity information of unlabeled data into the unsupervised part, which allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH with trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH can be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance retains high discriminability even in noisy environments.
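To make the abstract's pieces concrete, below is a minimal sketch of entropy-regularized Mahalanobis metric learning solved by gradient projection, written under stated assumptions rather than as the authors' implementation: the logistic pairwise model p = sigmoid(eta - d_A^2(x_i, x_j)), the hyperparameters gamma, lam, eta, and all function names are illustrative. It minimizes a supervised negative log-likelihood on labeled pairs plus gamma times the prediction entropy on unlabeled pairs plus lam times trace(A), projecting A back onto the positive semidefinite cone after each step.

```python
# Hypothetical sketch of SERAPH-style entropy-regularized metric learning.
# The pairwise model, hyperparameters, and names are assumptions for illustration.
import numpy as np

def sigmoid(z):
    z = np.clip(z, -30.0, 30.0)  # avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-z))

def pair_stats(A, X, pairs, eta):
    """Similarity probability p = sigmoid(eta - d_A^2) for each pair, plus diffs."""
    diffs = X[pairs[:, 0]] - X[pairs[:, 1]]          # (m, d)
    d2 = np.einsum('md,de,me->m', diffs, A, diffs)   # squared Mahalanobis distances
    return sigmoid(eta - d2), diffs

def project_psd(A):
    """Project onto the PSD cone: the 'projection' step of gradient projection."""
    w, V = np.linalg.eigh((A + A.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def seraph_sketch(X, lab_pairs, y, unl_pairs,
                  gamma=1.0, lam=0.1, eta=1.0, lr=1e-3, iters=300):
    """Minimize NLL(labeled) + gamma * entropy(unlabeled) + lam * trace(A), A PSD."""
    d = X.shape[1]
    A = np.eye(d)
    eps = 1e-12
    for _ in range(iters):
        # Supervised part: logistic negative log-likelihood on labeled pairs
        # (y = 1 similar, y = 0 dissimilar); d(NLL)/d(d2) = y - p.
        p, dl = pair_stats(A, X, lab_pairs, eta)
        g_sup = np.einsum('m,md,me->de', y - p, dl, dl)
        # Unsupervised part: entropy of predictions on unlabeled pairs.
        # Minimizing it drives q toward 0 or 1; dH/d(d2) = logit(q) * q * (1 - q).
        q, du = pair_stats(A, X, unl_pairs, eta)
        dH = np.log((q + eps) / (1 - q + eps)) * q * (1 - q)
        g_unl = np.einsum('m,md,me->de', dH, du, du)
        # On the PSD cone the trace norm equals trace(A), so its gradient is the
        # identity; it biases A toward low rank, i.e. a low-dimensional projection.
        grad = g_sup + gamma * g_unl + lam * np.eye(d)
        A = project_psd(A - lr * grad)
    return A
```

After training, the eigenvectors of A with nonzero eigenvalues give the learned low-dimensional projection; restricting A to the PSD cone is also what lets the trace-norm penalty reduce to a simple trace term in the sketch above.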
