Article; Proceedings Paper

Communication-efficient distributed multi-task learning with matrix sparsity regularization

Journal

MACHINE LEARNING
Volume 109, Issue 3, Pages 569-601

Publisher

SPRINGER
DOI: 10.1007/s10994-019-05847-6

Keywords

Distributed learning; Multi-task learning; Acceleration

Funding

  1. NTU Singapore Nanyang Assistant Professorship (NAP) [M4081532.020]
  2. Singapore MOE AcRF Tier-2 Grant [MOE2016-T2-2-060]

Abstract

This work focuses on distributed optimization for multi-task learning with matrix sparsity regularization. We propose a fast, communication-efficient distributed optimization method for solving this problem. With the proposed method, the training data of different tasks can be geo-distributed over different local machines, and the tasks can be learned jointly through the matrix sparsity regularization without the need to centralize the data. We theoretically prove that the proposed method enjoys a fast convergence rate for different types of loss functions in the distributed environment. To further reduce the communication cost of the distributed optimization procedure, we propose a data screening approach that safely filters out inactive features or variables. Finally, we conduct extensive experiments on both synthetic and real-world datasets to demonstrate the effectiveness of the proposed method.
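The abstract's central ingredient, joint learning through matrix sparsity regularization, is commonly instantiated as an ℓ2,1 penalty on the stacked task-weight matrix, which zeroes out entire feature rows shared across tasks. The sketch below is a minimal, centralized illustration of that objective solved by proximal gradient descent on synthetic data; it is not the paper's distributed, accelerated algorithm or its screening rule, and all names (`prox_l21`, `lam`, `step`) and the toy problem setup are assumptions for illustration only.

```python
import numpy as np

def prox_l21(W, tau):
    """Row-wise group soft-thresholding: the proximal operator of
    tau * ||W||_{2,1}. Feature rows whose l2 norm is <= tau are zeroed
    jointly across all task columns -- the source of shared sparsity."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale

# Hypothetical toy setup: m tasks sharing a small common feature support.
rng = np.random.default_rng(0)
d, m, n = 50, 4, 100
W_true = np.zeros((d, m))
W_true[:5] = rng.normal(size=(5, m))            # only 5 active features
Xs = [rng.normal(size=(n, d)) for _ in range(m)]
ys = [X @ W_true[:, j] for j, X in enumerate(Xs)]

# Proximal gradient on (1/2n) sum_j ||X_j w_j - y_j||^2 + lam * ||W||_{2,1}
lam, step = 0.1, 0.01
W = np.zeros((d, m))
for _ in range(500):
    G = np.column_stack([X.T @ (X @ W[:, j] - y) / n
                         for j, (X, y) in enumerate(zip(Xs, ys))])
    W = prox_l21(W - step * G, step * lam)

active = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6)
print(active)  # nonzero rows should concentrate on the true support
```

In a geo-distributed setting of the kind the abstract describes, each machine would hold one task's `(X_j, y_j)` and compute its gradient column locally, so only the d-by-m iterate (or its active rows, after screening) crosses the network per round; the row-wise prox is what couples the tasks.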
