Journal
SIGNAL PROCESSING-IMAGE COMMUNICATION
Volume 83
Publisher
ELSEVIER
DOI: 10.1016/j.image.2020.115782
Keywords
Video quality; Feature extraction; VGG-net; VQA; Average pooling; Human perception
Funding
- National Key R&D Plan [2018YFB0605504]
- Fundamental Research Funds for the Central Universities [2019MS024]
Standard no-reference video quality assessment (NR-VQA) methods are designed for a specific type of distortion; they quantify the visual quality of a distorted video without access to the reference video. In practice, their results deviate from human subjective perception. To tackle this problem, we propose a 3D deep convolutional neural network (3D CNN) that evaluates video quality without a reference by generating spatial/temporal deep features over different video clips. The 3D CNN is designed by collaboratively and seamlessly integrating the features output by VGG-Net on individual video frames. To prevent the adopted VGG-Net from overfitting, its parameters are transferred from a deep architecture learned on the ImageNet dataset. Extensive IQA/VQA experiments on the LIVE, TID, and CSIQ video quality databases demonstrate that the proposed IQA/VQA model performs competitively with conventional methods.