Article

PARALLEL AND COMMUNICATION AVOIDING LEAST ANGLE REGRESSION

Journal

SIAM JOURNAL ON SCIENTIFIC COMPUTING
Volume 43, Issue 2, Pages C154-C176

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/19M1305720

Keywords

communication avoiding; least angle regression; parallel methods

Abstract

This study parallelizes the LARS algorithm for linear regression via two variants, bLARS and tournament-bLARS (T-bLARS), which make different trade-offs between speedup and output accuracy while reducing arithmetic operations and communication. Numerical experiments demonstrate speedups of up to 4x over the traditional LARS algorithm without sacrificing solution quality.
We are interested in parallelizing the least angle regression (LARS) algorithm for fitting linear regression models to high-dimensional data. We consider two parallel and communication avoiding versions of the basic LARS algorithm. The two algorithms have different asymptotic costs and practical performance: one offers more speedup, the other produces more accurate output. The first is bLARS, a block version of the LARS algorithm, where we update b columns at each iteration. Assuming that the data are row-partitioned, bLARS reduces the number of arithmetic operations, latency, and bandwidth by a factor of b. The second is tournament-bLARS (T-bLARS), a tournament version of LARS in which processors compete by running several LARS computations in parallel to choose b new columns to be added to the solution. Assuming that the data are column-partitioned, T-bLARS reduces latency by a factor of b. Like LARS, our proposed methods generate a sequence of linear models. We present extensive numerical experiments that illustrate speedups of up to 4x compared to LARS without any compromise in solution quality.
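
The following sketch (not the authors' implementation) illustrates the two ingredients serially in Python, using scikit-learn's sklearn.linear_model.Lars as a stand-in for the baseline: a plain LARS fit, followed by one round of tournament-style column selection in which each hypothetical "processor" nominates its b locally best columns and a second selection over the pooled nominees picks the b winners. The synthetic data, the partition into P column blocks, and the use of Lars for the local runs are assumptions made for illustration; the paper's T-bLARS additionally organizes the reductions to avoid communication.

import numpy as np
from sklearn.linear_model import Lars

# Synthetic regression problem: n samples, d features, 8 true nonzeros.
rng = np.random.default_rng(0)
n, d, b, P = 200, 64, 4, 4   # samples, features, block size b, "processors" P
beta = np.zeros(d)
beta[rng.choice(d, 8, replace=False)] = 3.0 * rng.standard_normal(8)
X = rng.standard_normal((n, d))
y = X @ beta + 0.1 * rng.standard_normal(n)

# Baseline LARS: one column enters the active set per iteration.
base = Lars(n_nonzero_coefs=8).fit(X, y)
print("LARS active set:", base.active_)

# One round of tournament-style selection (serial simulation).
# Each hypothetical "processor" owns a block of columns and nominates its
# b locally best columns via a short local LARS run; a second LARS over
# the pooled nominees picks the b winners to add to the model.
col_blocks = np.array_split(np.arange(d), P)
nominees = []
for cols in col_blocks:
    local = Lars(n_nonzero_coefs=b).fit(X[:, cols], y)
    nominees.extend(cols[np.asarray(local.active_, dtype=int)])
final = Lars(n_nonzero_coefs=b).fit(X[:, nominees], y)
print("tournament picks:", [nominees[i] for i in final.active_])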
