Article

Visual prior-based cross-modal alignment network for radiology report generation

Journal

COMPUTERS IN BIOLOGY AND MEDICINE
Volume 166

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compbiomed.2023.107522

Keywords

Radiology report generation; Visual prior; Contrastive attention; Cross-modal alignment; Multi-head attention

Abstract
Automated radiology report generation is gaining popularity as a means to alleviate the workload of radiologists and prevent misdiagnosis and missed diagnoses. By imitating the working patterns of radiologists, previous report generation approaches have achieved remarkable performance. However, these approaches suffer from two significant problems: (1) lack of visual prior: medical observations in radiology images are interdependent and exhibit certain patterns, and the lack of such a visual prior can reduce accuracy in identifying abnormal regions; (2) lack of alignment between images and texts: the absence of annotations and alignments for regions of interest in the radiology images and reports can lead to inconsistent visual and textual features of the abnormal regions generated by the model. To address these issues, we propose a Visual Prior-based Cross-modal Alignment Network for radiology report generation. First, we propose a novel Contrastive Attention that compares the input image with normal images to extract difference information, namely the visual prior, which helps to identify abnormalities quickly. Then, to facilitate the alignment of images and texts, we propose a Cross-modal Alignment Network that leverages a cross-modal matrix, initialized with features generated by pre-trained models, to compute cross-modal responses for visual and textual features. Finally, a Visual Prior-guided Multi-Head Attention is proposed to incorporate the visual prior into the generation process. Extensive experimental results on two benchmark datasets, IU-Xray and MIMIC-CXR, illustrate that our proposed model outperforms state-of-the-art models on almost all metrics, achieving BLEU-4 scores of 0.188 and 0.116 and CIDEr scores of 0.409 and 0.240, respectively.
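The abstract describes three mechanisms: contrastive attention against normal images, a cross-modal alignment matrix, and visual prior-guided multi-head attention. The sketch below (PyTorch) illustrates the first and third ideas only, as one plausible reading of the abstract; all module names, tensor shapes, and the concatenation-based way the prior is injected are assumptions for illustration, not the authors' actual implementation.

# Minimal sketch of two ideas from the abstract: contrastive attention that
# extracts a "visual prior" by comparison with normal images, and a
# prior-guided multi-head attention for decoding. Shapes and module names
# are assumptions; the paper's architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveAttention(nn.Module):
    """Compare input-image features against a pool of normal-image features
    and return the difference information (the 'visual prior')."""
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, img_feats, normal_feats):
        # img_feats:    (B, N, D) patch features of the input image
        # normal_feats: (M, D)    pooled features of normal (healthy) images
        q = self.query(img_feats)                                # (B, N, D)
        k = self.key(normal_feats)                               # (M, D)
        attn = F.softmax(q @ k.t() / q.size(-1) ** 0.5, dim=-1)  # (B, N, M)
        normal_ctx = attn @ normal_feats                         # (B, N, D)
        # Visual prior = what differs from the attended "normal" appearance.
        return img_feats - normal_ctx

class PriorGuidedMHA(nn.Module):
    """Multi-head attention whose keys/values are augmented with the visual
    prior, so decoding attends to both raw features and difference cues."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_hidden, img_feats, visual_prior):
        memory = torch.cat([img_feats, visual_prior], dim=1)  # (B, 2N, D)
        out, _ = self.mha(text_hidden, memory, memory)
        return out

if __name__ == "__main__":
    B, N, M, D = 2, 49, 16, 512
    img = torch.randn(B, N, D)          # input-image patch features
    normals = torch.randn(M, D)         # pooled normal-image features
    prior = ContrastiveAttention(D)(img, normals)
    words = torch.randn(B, 20, D)       # decoder hidden states
    out = PriorGuidedMHA(D)(words, img, prior)
    print(prior.shape, out.shape)       # (2, 49, 512) (2, 20, 512)

Concatenating the prior into the attention memory is the simplest runnable stand-in; the paper's Visual Prior-guided Multi-Head Attention presumably integrates the prior into the attention computation itself.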
