Article

No-reference video quality evaluation by a deep transfer CNN architecture

Journal

SIGNAL PROCESSING-IMAGE COMMUNICATION
Volume 83

Publisher

ELSEVIER
DOI: 10.1016/j.image.2020.115782

Keywords

Video quality; Feature extraction; VGG-net; VQA; Average pooling; Human perception

Funding

  1. National Key R&D Plan [2018YFB0605504]
  2. Fundamental Research Funds for the Central Universities [2019MS024]


Standard no-reference video quality assessment (NR-VQA) methods are designed for specific types of distortion: they quantify the visual quality of a distorted video without access to the reference video. In practice, their results often deviate from human subjective perception. To tackle this problem, we propose a 3D deep convolutional neural network (3D CNN) that evaluates video quality without a reference by generating spatial/temporal deep features over different video clips. The 3D CNN is designed by collaboratively and seamlessly integrating the features output by VGG-Net on individual video frames. To prevent the adopted VGG-Net from overfitting, its parameters are transferred from a deep architecture pre-trained on the ImageNet dataset. Extensive IQA/VQA experiments on the LIVE, TID, and CSIQ video quality databases demonstrate that the proposed IQA/VQA model performs competitively with conventional methods.
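The pipeline described in the abstract — per-frame deep features from a pre-trained VGG-Net, pooled over time into a clip-level descriptor that is regressed to a quality score — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimension, the pre-extracted "VGG" features, and the linear regression head (`w`, `b`) are all hypothetical stand-ins for the paper's learned 3D CNN layers.

```python
import numpy as np

def clip_quality_score(frame_features, w, b):
    """Illustrative sketch: average-pool per-frame deep features over time
    into one clip-level descriptor, then map it to a scalar quality score
    with a linear head (a stand-in for the paper's learned regression).

    frame_features: (num_frames, feat_dim) array of frame-level activations,
    e.g. VGG-Net features extracted from each frame of a video clip.
    """
    clip_descriptor = frame_features.mean(axis=0)  # temporal average pooling
    return float(clip_descriptor @ w + b)

# Toy usage: random stand-ins for VGG features of a 16-frame clip.
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 4096))   # hypothetical 4096-dim features
w = rng.standard_normal(4096) / 4096      # hypothetical regression weights
score = clip_quality_score(feats, w, b=50.0)
```

In the paper the transfer step (initializing VGG-Net from ImageNet weights) and the 3D convolutional integration of frame features replace the fixed pooling and linear head shown here; the sketch only conveys the frame-features-to-clip-score data flow.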

