Article

Analyzing Upper Bounds on Mean Absolute Errors for Deep Neural Network-Based Vector-to-Vector Regression

Journal

IEEE Transactions on Signal Processing
Volume 68, Pages 3411-3422 (2020)

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSP.2020.2993164

Keywords

Upper bound; Estimation error; Neural networks; Optimization; Complexity theory; Signal processing algorithms; Deep neural network; Mean absolute error; Vector-to-vector regression; Non-convex optimization; Image de-noising; Speech enhancement

Funding

  1. National Natural Science Foundation of China [61671422]

Abstract

In this paper, we show that, in vector-to-vector regression using deep neural networks (DNNs), a generalized mean absolute error (MAE) loss between the predicted and expected feature vectors is upper bounded by the sum of an approximation error, an estimation error, and an optimization error. Leveraging error decomposition techniques from statistical learning theory and non-convex optimization theory, we derive upper bounds for each of the three aforementioned errors and impose the necessary constraints on the DNN models. Moreover, we assess our theoretical results through a set of image de-noising and speech enhancement experiments. The experimental results corroborate our proposed upper bounds on MAE for DNN-based vector-to-vector regression, and the bounds remain valid with and without the over-parametrization technique.
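The bound stated above can be sketched as a single inequality, in illustrative notation of our own choosing rather than the paper's symbols, where f_w-hat is the DNN returned by training, (x, y) is an input/target feature-vector pair, and the three epsilon terms are the approximation, estimation, and optimization errors:

\mathbb{E}\,\big\| f_{\hat{w}}(x) - y \big\|_{1} \;\le\; \epsilon_{\mathrm{approx}} + \epsilon_{\mathrm{est}} + \epsilon_{\mathrm{opt}}

On the experimental side, the quantity being compared against the bound is the empirical MAE over a test set. A minimal sketch of how such an MAE could be computed, assuming NumPy and placeholder arrays in place of an actual trained DNN's outputs and the clean reference features:

    import numpy as np

    # Placeholder data: N test examples with q-dimensional feature vectors.
    # In practice, `predicted` would hold the trained DNN's regression outputs
    # (e.g., enhanced spectral features) and `expected` the clean targets.
    N, q = 1000, 257
    rng = np.random.default_rng(0)
    predicted = rng.random((N, q))
    expected = rng.random((N, q))

    # Empirical MAE: mean absolute difference, averaged over the q output
    # dimensions and the N test vectors.
    mae = np.mean(np.abs(predicted - expected))
    print(f"empirical MAE = {mae:.4f}")

This empirical MAE is the quantity that the image de-noising and speech enhancement experiments check against the theoretical upper bound.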
