Article

Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization

Journal

IEEE Transactions on Information Theory
Volume 56, Issue 11, Pages 5847-5861

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TIT.2010.2068870

Keywords

Convex optimization; density ratio estimation; divergence estimation; Kullback-Leibler (KL) divergence; f-divergence; M-estimation; reproducing kernel Hilbert space (RKHS); surrogate loss functions

Funding

  1. National Science Foundation (NSF) [DMS-0605165, CCF-0545862, 0509559]
  2. NSF Directorate for Computer & Information Science & Engineering, Division of Computer and Network Systems [0509559]

Abstract

We develop and analyze M-estimation methods for divergence functionals and the likelihood ratios of two probability distributions. Our method is based on a nonasymptotic variational characterization of f-divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization. The resulting estimators are simple to implement, requiring only the solution of standard convex programs. We present an analysis of consistency and convergence for these estimators. Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. We derive an efficient optimization algorithm for computing our estimates, and illustrate their convergence behavior and practical viability by simulations.
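As a concrete illustration of the approach the abstract describes, the following minimal sketch (an assumption for illustration, not the authors' implementation) estimates the KL divergence from samples via its variational characterization, D(P||Q) = sup_{g>0} E_P[log g] - E_Q[g] + 1. Restricting the candidate likelihood ratio to a hypothetical exponential-linear family g(x) = exp(theta . phi(x)) makes the empirical risk convex in theta, so the estimate is obtained by solving a standard convex program; the feature map phi and the Gaussian test distributions are choices made here for the example.

```python
# Minimal sketch of a variational KL estimator:
#   D(P||Q) = sup_{g>0} E_P[log g] - E_Q[g] + 1,
# with g(x) = exp(theta . phi(x)) so the empirical risk is convex in theta.
import numpy as np
from scipy.optimize import minimize

def phi(x):
    # Hypothetical feature map phi(x) = (1, x, x^2), chosen for illustration.
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

def neg_objective(theta, phi_p, phi_q):
    # Negative empirical variational objective; convex in theta.
    log_g_on_p = phi_p @ theta                        # log g at P-samples
    g_on_q = np.exp(np.clip(phi_q @ theta, -50, 50))  # g at Q-samples (clipped for stability)
    return -(log_g_on_p.mean() - g_on_q.mean() + 1.0)

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=2000)  # samples from P = N(1, 1)
y = rng.normal(0.0, 1.0, size=2000)  # samples from Q = N(0, 1)

res = minimize(neg_objective, x0=np.zeros(3), args=(phi(x), phi(y)), method="BFGS")
theta_hat = res.x   # parameters of the fitted likelihood-ratio estimate
kl_hat = -res.fun   # plug-in estimate of the KL divergence
print(f"estimated KL: {kl_hat:.3f} (true value: 0.5)")
```

Because the true ratio dP/dQ = exp(x - 1/2) lies in this family (theta = (-0.5, 1, 0)), the estimate should land near the true divergence of 0.5 nats. The estimators analyzed in the paper replace this fixed feature map with a function class such as a reproducing kernel Hilbert space together with regularization.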

Authors

XuanLong Nguyen, Martin J. Wainwright, and Michael I. Jordan
