Journal
IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 31, Issue -, Pages 6139-6151
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2022.3205770
Keywords
Feature extraction; Distortion; Training; Task analysis; Distortion measurement; Predictive models; Image quality; Image quality assessment; no-reference; mutual learning; pseudo-reference feature
Funding
- National Natural Science Foundation of China [62022002]
- Shenzhen Virtual University Park
- Science Technology and Innovation Committee of Shenzhen Municipality [2021Szvup128]
- Hong Kong Research Grants Council General Research Fund (GRF) [11203220]
In this paper, we propose a no-reference (NR) image quality assessment (IQA) method via feature-level pseudo-reference (PR) hallucination. The proposed framework rests on the view that perceptually meaningful features can be exploited to characterize visual quality, and natural image statistical behaviors are further leveraged to deliver accurate predictions. Specifically, the PR features of distorted images are learned through a mutual learning scheme with the pristine reference as supervision, and the discriminative characteristics of the PR features are further ensured with triplet constraints. Given a distorted image for quality inference, feature-level disentanglement is performed with an invertible neural layer, yielding the PR and the corresponding distortion features for comparison in the final quality prediction. The effectiveness of the proposed method is demonstrated on four popular IQA databases, and superior performance in cross-database evaluation further reveals its high generalization capability. The implementation of our method is publicly available at https://github.com/Baoliang93/FPR.
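The invertible disentanglement described above can be illustrated with a minimal sketch. The following is an assumption about the general idea, not the authors' exact architecture (see the linked repository for the real implementation): an additive coupling layer, which is invertible by construction, splits a distorted-image feature into a pseudo-reference (PR) part and a distortion part, and the original feature can be recovered exactly from the two parts.

```python
import numpy as np

def coupling_forward(f, W, b):
    """Split feature f into a PR half and a distortion half.

    Additive coupling: the PR half passes through unchanged, while the
    distortion half is shifted by a learned function of the PR half
    (here a toy tanh-affine map standing in for a trained subnetwork).
    """
    f1, f2 = np.split(f, 2)
    pr = f1
    dist = f2 + np.tanh(W @ f1 + b)
    return pr, dist

def coupling_inverse(pr, dist, W, b):
    """Exact inversion: subtract the same shift to recover the input."""
    f1 = pr
    f2 = dist - np.tanh(W @ pr + b)
    return np.concatenate([f1, f2])

rng = np.random.default_rng(0)
d = 8
f = rng.standard_normal(2 * d)   # stand-in for a distorted-image feature
W = rng.standard_normal((d, d))  # hypothetical learned parameters
b = rng.standard_normal(d)

pr, dist = coupling_forward(f, W, b)
f_rec = coupling_inverse(pr, dist, W, b)
assert np.allclose(f, f_rec)  # invertibility holds exactly
```

Because the mapping is bijective, no information about the distorted image is lost in the split; the quality predictor can then compare the hallucinated PR features against the distortion features, which is the comparison the abstract refers to.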