Article

CONVEX REGULARIZATION FOR HIGH-DIMENSIONAL MULTIRESPONSE TENSOR REGRESSION

Journal

ANNALS OF STATISTICS
Volume 47, Issue 3, Pages 1554-1584

Publisher

INST MATHEMATICAL STATISTICS
DOI: 10.1214/18-AOS1725

Keywords

Tensor regression; convex regularization; Gaussian width; intrinsic dimension; low-rank; sparsity

Funding

  1. NSF Grant [DMS-1407028]
  2. NSF FRG Grant [DMS-1265202]
  3. NIH Grant [1-U54AI117924-01]


In this paper, we present a general convex optimization approach for solving high-dimensional multiresponse tensor regression problems under low-dimensional structural assumptions. We consider convex and weakly decomposable regularizers, assuming that the underlying tensor lies in an unknown low-dimensional subspace. Within our framework, we derive general risk bounds for the resulting estimates under fairly general dependence structures among the covariates. Our framework leads to upper bounds in terms of two simple quantities: the Gaussian width of a convex set in tensor space and the intrinsic dimension of the low-dimensional tensor subspace. To the best of our knowledge, this is the first general framework that applies to multiple response problems. These general bounds provide useful upper bounds on rates of convergence for a number of fundamental statistical models of interest, including multiresponse regression, vector autoregressive models, low-rank tensor models and pairwise interaction models. Moreover, in many of these settings we prove that the resulting estimates are minimax optimal. We also provide a numerical study that both validates our theoretical guarantees and demonstrates the breadth of our framework.
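For readers unfamiliar with this class of estimators, the sketch below illustrates the general recipe described in the abstract in its simplest special case: a multiresponse (matrix-valued) regression fit by minimizing a least-squares loss plus a convex, low-rank-inducing regularizer, solved by proximal gradient descent. This is an illustrative sketch only, not the authors' implementation; the nuclear-norm penalty, function names, step-size rule and synthetic data are assumptions chosen for clarity.

```python
import numpy as np

def prox_nuclear(B, tau):
    """Proximal operator of the nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def multiresponse_regression(X, Y, lam, n_iter=500):
    """Minimize (1/2n)||Y - X B||_F^2 + lam * ||B||_*  over B (p x q)
    via proximal gradient descent (ISTA)."""
    n, p = X.shape
    q = Y.shape[1]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant of the smooth part
    B = np.zeros((p, q))
    for _ in range(n_iter):
        grad = X.T @ (X @ B - Y) / n
        B = prox_nuclear(B - step * grad, step * lam)
    return B

# Tiny synthetic check: recover a low-rank coefficient matrix from noisy responses.
rng = np.random.default_rng(0)
n, p, q, r = 200, 30, 20, 2
B_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
X = rng.standard_normal((n, p))
Y = X @ B_true + 0.5 * rng.standard_normal((n, q))
B_hat = multiresponse_regression(X, Y, lam=0.1)
print("relative error:", np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true))
```

The paper's framework generalizes this template to tensor-valued coefficients and other weakly decomposable regularizers (e.g., sparsity-inducing norms), with risk bounds governed by the Gaussian width and intrinsic dimension of the associated low-dimensional subspace.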
