Proceedings Paper

Pruning In Time (PIT): A Lightweight Network Architecture Optimizer for Temporal Convolutional Networks

Publisher

IEEE
DOI: 10.1109/DAC18074.2021.9586187

Keywords

Neural Architecture Search; Temporal Convolutional Networks; Edge Computing; Deep Learning


Temporal Convolutional Networks (TCNs) are promising Deep Learning models for time-series processing tasks. One key feature of TCNs is time-dilated convolution, whose optimization requires extensive experimentation. We propose an automatic dilation optimizer, which tackles the problem as weight pruning on the time axis, learning dilation factors together with weights in a single training. Our method reduces the model size and inference latency on a real SoC hardware target by up to 7.4× and 3×, respectively, with no accuracy drop compared to a network without dilation. It also yields a rich set of Pareto-optimal TCNs starting from a single model, outperforming hand-designed solutions in both size and accuracy.
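The core idea of framing dilation search as time-axis pruning can be illustrated with a minimal sketch. This is not the authors' implementation; the gate variable `gamma`, the threshold, and the helper `prune_to_dilation` are illustrative assumptions: each tap of a temporal-convolution kernel gets a learnable gate, low-magnitude gates are pruned after training, and the spacing of the surviving taps is read off as the layer's effective dilation factor.

```python
# Hypothetical sketch of PIT's pruning-to-dilation step (names and
# threshold are assumptions, not the paper's actual API).

def prune_to_dilation(gamma, threshold=0.1):
    """Binarize per-tap gates and read off an effective dilation.

    gamma: learned gate magnitudes, one per tap of the kernel's
           full (undilated) receptive field along the time axis.
    Returns (keep_mask, dilation); dilation is None if the
    surviving taps are not regularly spaced.
    """
    keep = [abs(g) > threshold for g in gamma]   # pruning mask on the time axis
    idx = [i for i, k in enumerate(keep) if k]   # surviving tap positions
    if len(idx) < 2:
        return keep, 1                           # degenerate kernel: no spacing
    gaps = [b - a for a, b in zip(idx, idx[1:])]
    # Regular spacing -> a single dilation factor for a standard dilated conv.
    dilation = gaps[0] if all(g == gaps[0] for g in gaps) else None
    return keep, dilation

# Example: a length-9 receptive field whose trained gates kept every
# 4th tap, i.e. a 3-tap kernel with dilation 4.
gamma = [0.9, 0.02, 0.01, 0.03, 0.8, 0.0, 0.05, 0.01, 0.7]
mask, d = prune_to_dilation(gamma)   # d == 4, 3 taps survive
```

In the paper's framing, the gates are trained jointly with the convolution weights (with a size/latency regularizer), so pruning and dilation selection happen in the same single training run rather than as a separate search.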


