Journal
ELECTRONIC JOURNAL OF STATISTICS
Volume 2, Issue -, Pages 494-515
Publisher
INST MATHEMATICAL STATISTICS
DOI: 10.1214/08-EJS176
Keywords
Covariance matrix; High dimension low sample size; Large p small n; Lasso; Sparsity; Cholesky decomposition
Funding
- NSF [DMS-0605236, DMS-0505424, DMS-0805798, DMS-0505432, DMS-0705532]
- NSA [MSPF-04Y-120]
Abstract
The paper proposes a method for constructing a sparse estimator for the inverse covariance (concentration) matrix in high-dimensional settings. The estimator uses a penalized normal likelihood approach and forces sparsity by using a lasso-type penalty. We establish a rate of convergence in the Frobenius norm as both data dimension p and sample size n are allowed to grow, and show that the rate depends explicitly on how sparse the true concentration matrix is. We also show that a correlation-based version of the method exhibits better rates in the operator norm. We also derive a fast iterative algorithm for computing the estimator, which relies on the popular Cholesky decomposition of the inverse but produces a permutation-invariant estimator. The method is compared to other estimators on simulated data and on a real data example of tumor tissue classification using gene expression data.
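The lasso-penalized likelihood idea described in the abstract can be illustrated with a related l1-penalized estimator available in scikit-learn, `GraphicalLasso`, which minimizes the same type of objective, tr(S Omega) - log det(Omega) + alpha * ||Omega||_1. This is only a hedged sketch of the general technique: the paper's own estimator (SPICE) uses a different, Cholesky-based iterative algorithm, and the simulated tridiagonal model below is an illustrative assumption, not data from the paper.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Simulate n samples from a p-variate Gaussian whose true concentration
# matrix is sparse (tridiagonal) -- a common test case for sparse
# inverse covariance estimators. This setup is illustrative only.
rng = np.random.default_rng(0)
p, n = 20, 200
omega_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
sigma_true = np.linalg.inv(omega_true)
X = rng.multivariate_normal(np.zeros(p), sigma_true, size=n)

# l1-penalized Gaussian likelihood: larger alpha forces more
# off-diagonal entries of the estimated concentration matrix to zero.
model = GraphicalLasso(alpha=0.1).fit(X)
omega_hat = model.precision_

# Fraction of off-diagonal entries estimated as exactly zero
# (the sparsity that the lasso-type penalty induces).
off_diag = omega_hat[~np.eye(p, dtype=bool)]
sparsity = float(np.mean(off_diag == 0.0))
```

A correlation-based variant, as mentioned in the abstract, would apply the same penalized fit to the sample correlation matrix (standardized data) and rescale afterward.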