Journal
SIAM JOURNAL ON COMPUTING
Volume 45, Issue 3, pp. 763-810
Publisher: SIAM PUBLICATIONS
DOI: 10.1137/140963698
Keywords
randomized algorithms; subspace embedding; robust regression; Cauchy transform
Funding
- NSF [IIS-1447283, IIS-1302231]
- ARO
- DARPA
We provide fast algorithms for overconstrained ℓ_p regression and related problems: for an n × d input matrix A and vector b ∈ R^n, in O(nd log n) time we reduce the problem min_{x ∈ R^d} ‖Ax − b‖_p to the same problem with input matrix Ã of dimension s × d and corresponding b̃ of dimension s × 1. Here, Ã and b̃ are a coreset for the problem, consisting of sampled and rescaled rows of A and b, and s is independent of n and polynomial in d. Our results improve on the best previous algorithms when n ≫ d for all p ∈ [1, ∞) except p = 2; in particular, they improve the O(nd^{1.376+}) running time of Sohler and Woodruff [Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, 2011, pp. 755-764] for p = 1, which uses asymptotically fast matrix multiplication, and the O(nd^5 log n) time of Dasgupta et al. [SIAM J. Comput., 38 (2009), pp. 2060-2078] for general p, which uses ellipsoidal rounding. We also provide a suite of improved results for finding well-conditioned bases via ellipsoidal rounding, illustrating tradeoffs between running time and conditioning quality, including a one-pass conditioning algorithm for general ℓ_p problems. To complement this theory, we provide a detailed empirical evaluation of implementations of our algorithms for p = 1, comparing them with several related algorithms. Among other things, our empirical results clearly show that, in the asymptotic regime, the theory is a very good guide to the practical performance of these algorithms.
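The sketch-and-solve reduction described above can be illustrated with a dense Cauchy sketch in place of the paper's structured construction (all dimensions here are illustrative assumptions, not the paper's parameters, and the dense sketch costs O(snd) rather than the paper's O(nd log n)): form ΠA and Πb, then solve the small ℓ_1 regression exactly as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, s = 2000, 5, 100  # illustrative sizes; in the paper s is poly(d), independent of n

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Dense Cauchy sketch; a fixed rescaling of Pi does not change the minimizer.
Pi = rng.standard_cauchy((s, n))
SA, Sb = Pi @ A, Pi @ b

def l1_regression(M, v):
    """Solve min_x ||Mx - v||_1 as an LP: min 1^T t s.t. -t <= Mx - v <= t."""
    m, k = M.shape
    c = np.concatenate([np.zeros(k), np.ones(m)])
    A_ub = np.block([[M, -np.eye(m)], [-M, -np.eye(m)]])
    b_ub = np.concatenate([v, -v])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (k + m))
    return res.x[:k]

x_sketch = l1_regression(SA, Sb)  # solve the small s x d reduced problem
x_full = l1_regression(A, b)      # full n x d problem, for comparison

# ratio >= 1 (up to solver tolerance); near 1 when the sketch preserves l1 norms well
ratio = np.abs(A @ x_sketch - b).sum() / np.abs(A @ x_full - b).sum()
```

Note that this sketch is not a coreset in the paper's sense (it mixes rows rather than sampling and rescaling them), but it exhibits the same reduction from an n-row problem to an s-row problem.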
Our algorithms use our faster constructions of well-conditioned bases for ℓ_p spaces and, for p = 1, a fast subspace embedding of independent interest that we call the Fast Cauchy transform: a distribution over matrices Π : R^n → R^{O(d log d)}, found obliviously to A, that approximately preserves ℓ_1 norms; that is, with large probability, simultaneously for all x, ‖Ax‖_1 ≈ ‖ΠAx‖_1, with distortion O(d^{2+η}) for an arbitrarily small constant η > 0; moreover, ΠA can be computed in O(nd log d) time. The techniques underlying our Fast Cauchy transform include Fast Johnson-Lindenstrauss transforms, low-coherence matrices, and rescaling by Cauchy random variables.
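The role of the Cauchy random variables rests on 1-stability: for a standard Cauchy vector c, the inner product c^T y is distributed as ‖y‖_1 times a standard Cauchy, so the median of the absolute entries of Πy recovers ‖y‖_1. A minimal sketch of this property (dense Π and a median estimator; the sketch size r is an illustrative assumption, and this is not the paper's structured transform):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 10_000, 301  # ambient dimension and sketch rows; illustrative only

y = rng.standard_normal(n)
Pi = rng.standard_cauchy((r, n))

# By 1-stability, each coordinate (Pi y)_i = c_i^T y ~ ||y||_1 * Cauchy(0, 1),
# and the median of |Cauchy(0, 1)| is 1, so the sample median estimates ||y||_1.
estimate = np.median(np.abs(Pi @ y))
true_norm = np.abs(y).sum()
```

The sum of |(Πy)_i| would not concentrate (Cauchy variables have no mean), which is why median-type estimators, and the careful distortion analysis in the paper, are needed for ℓ_1 embeddings.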