Article

Exploration of block-wise dynamic sparseness

Journal

PATTERN RECOGNITION LETTERS
Volume 151, Issue -, Pages 187-192

Publisher

ELSEVIER
DOI: 10.1016/j.patrec.2021.08.013

Keywords

Neural network; Dynamic sparseness; Block-wise matrix multiplication

This paper introduces a new method for dynamic sparseness that combines sparsity with block-wise matrix-vector multiplications to improve efficiency. Unlike static sparseness, the method preserves the network's full capacity and outperforms static sparseness baselines on the task of language modeling.
Neural networks have achieved state-of-the-art performance across a wide variety of machine learning tasks, often with large and computation-heavy models. Inducing sparseness as a way to reduce the memory and computation footprint of these models has seen significant research attention in recent years. In this paper, we present a new method for dynamic sparseness, whereby part of the computations is omitted dynamically, based on the input. For efficiency, we combine the idea of dynamic sparseness with block-wise matrix-vector multiplications. In contrast to static sparseness, which permanently zeroes out selected positions in weight matrices, our method preserves the full network capabilities by potentially accessing any trained weights. Yet, matrix-vector multiplications are accelerated by omitting a pre-defined fraction of weight blocks from the matrix, based on the input. Experimental results on the task of language modeling, using recurrent and quasi-recurrent models, show that the proposed method can outperform static sparseness baselines. In addition, our method can reach similar language modeling perplexities as the dense baseline, at half the computational cost at inference time. (c) 2021 Published by Elsevier B.V.
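The abstract describes the core mechanism: weight matrices are partitioned into blocks, and for each input only a pre-defined fraction of blocks participates in the matrix-vector product, selected by an input-dependent gate. Below is a minimal NumPy sketch of that idea; the `block_sparse_matvec` function, the top-k selection, the random linear gate `G`, and all parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_sparse_matvec(W, x, gate_scores, keep_fraction=0.5, block_size=64):
    """Illustrative block-wise dynamic sparse matrix-vector product.

    W            : (out_dim, in_dim) dense weight matrix, viewed as a grid of
                   (block_size x block_size) blocks; all weights stay trained.
    x            : (in_dim,) input vector.
    gate_scores  : one relevance score per block for THIS input (supplied by
                   the caller; a small gating network would produce them).
    keep_fraction: fraction of blocks whose multiplication is executed;
                   the remaining blocks are skipped for this input only.
    """
    out_dim, in_dim = W.shape
    n_row, n_col = out_dim // block_size, in_dim // block_size

    # Keep only the top-scoring blocks for this particular input.
    n_keep = max(1, int(keep_fraction * n_row * n_col))
    flat = gate_scores.reshape(-1)
    keep_idx = np.argpartition(flat, -n_keep)[-n_keep:]
    mask = np.zeros(n_row * n_col, dtype=bool)
    mask[keep_idx] = True
    mask = mask.reshape(n_row, n_col)

    y = np.zeros(out_dim)
    for i in range(n_row):
        for j in range(n_col):
            if mask[i, j]:  # blocks with mask False are skipped entirely
                W_blk = W[i * block_size:(i + 1) * block_size,
                          j * block_size:(j + 1) * block_size]
                x_blk = x[j * block_size:(j + 1) * block_size]
                y[i * block_size:(i + 1) * block_size] += W_blk @ x_blk
    return y

# Toy usage: a hypothetical linear gate scores each weight block from the input.
rng = np.random.default_rng(0)
out_dim, in_dim, bs = 256, 256, 64
W = rng.standard_normal((out_dim, in_dim))
x = rng.standard_normal(in_dim)
G = rng.standard_normal(((out_dim // bs) * (in_dim // bs), in_dim))  # assumed gate weights
y = block_sparse_matvec(W, x, gate_scores=G @ x, keep_fraction=0.5, block_size=bs)
print(y.shape)  # (256,)
```

With keep_fraction=0.5, roughly half of the block-level multiply-accumulates are skipped per input, which is where a halving of inference cost would come from in principle; realized speedups depend on hardware and kernel support for block-sparse operations.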
