4.7 Article

UTFNet: Uncertainty-Guided Trustworthy Fusion Network for RGB-Thermal Semantic Segmentation

Journal

IEEE Geoscience and Remote Sensing Letters

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LGRS.2023.3322452

Keywords

Dempster-Shafer theory (DST); RGB and thermal (RGB-T) semantic segmentation; trustworthy fusion; uncertainty estimation

In this study, a novel uncertainty-guided trustworthy fusion network (UTFNet) is proposed for RGB-T semantic segmentation. The uncertainty of each modality is estimated and used to guide the information fusion, resulting in improved accuracy, robustness, and trustworthiness of the segmentation model.
In real-world scenarios, the information quality provided by RGB and thermal (RGB-T) sensors often varies across samples. This variation degrades a semantic segmentation model's ability to exploit the complementary information of the RGB-T modalities, reducing both accuracy and fusion credibility. Dynamically estimating the uncertainty of each modality for each sample can help the model perceive such variation in information quality and guide a reliable fusion. With this in mind, we propose a novel uncertainty-guided trustworthy fusion network (UTFNet) for RGB-T semantic segmentation. Specifically, we design an uncertainty estimation and evidential fusion (UEEF) module to quantify the uncertainty of each modality and then use that uncertainty to guide the information fusion. In the UEEF module, we introduce the Dirichlet distribution, parameterized with the evidence from each modality, to model the distribution of the predicted probabilities, and then integrate the modalities with the Dempster-Shafer theory (DST). Moreover, an illumination evidence gathering (IEG) module and a multiscale evidence gathering (MEG) module are designed to gather more reliable evidence by considering illumination and target multiscale information, respectively. In the IEG module, we compute the illumination probability and model it as illumination evidence; the MEG module collects evidence for each modality across multiple scales. Both qualitative and quantitative results demonstrate the effectiveness of the proposed model in terms of accuracy, robustness, and trustworthiness. The code will be accessible at https://github.com/KustTeamWQW/UTFNet.
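The fusion described in the abstract follows the general evidential / subjective-logic recipe: each modality's per-class evidence parameterizes a Dirichlet distribution, is converted into belief masses plus an explicit uncertainty mass, and the two resulting opinions are merged with Dempster's combination rule. The sketch below illustrates that recipe under the standard formulation (alpha_k = e_k + 1, b_k = e_k / S, u = K / S, reduced DST combination); the function names and toy numbers are illustrative assumptions and are not taken from the UTFNet code.

```python
import numpy as np

def evidence_to_opinion(evidence):
    """Convert non-negative per-class evidence e_k into a subjective-logic opinion.

    Dirichlet parameters alpha_k = e_k + 1, Dirichlet strength S = sum(alpha_k),
    belief masses b_k = e_k / S, uncertainty mass u = K / S (so sum(b) + u = 1).
    """
    K = evidence.shape[-1]
    alpha = evidence + 1.0
    S = alpha.sum(axis=-1, keepdims=True)
    belief = evidence / S
    uncertainty = K / S
    return belief, uncertainty

def ds_combine(b1, u1, b2, u2):
    """Reduced Dempster-Shafer combination of two opinions.

    Conflict C = sum_{j != k} b1_j * b2_k; agreeing mass is re-normalized by (1 - C).
    """
    # Total cross-mass minus the same-class part gives the conflicting mass.
    C = (b1.sum(axis=-1, keepdims=True) * b2.sum(axis=-1, keepdims=True)
         - (b1 * b2).sum(axis=-1, keepdims=True))
    b = (b1 * b2 + b1 * u2 + b2 * u1) / (1.0 - C)
    u = (u1 * u2) / (1.0 - C)
    return b, u

# Toy example: per-class evidence from an RGB branch and a thermal branch.
rgb_evidence = np.array([[9.0, 1.0, 0.5]])      # confident about class 0
thermal_evidence = np.array([[0.2, 0.3, 0.1]])  # weak evidence -> high uncertainty

b_rgb, u_rgb = evidence_to_opinion(rgb_evidence)
b_th, u_th = evidence_to_opinion(thermal_evidence)
b_fused, u_fused = ds_combine(b_rgb, u_rgb, b_th, u_th)

# Recover the fused Dirichlet parameters and the expected class probabilities.
K = rgb_evidence.shape[-1]
S_fused = K / u_fused
alpha_fused = b_fused * S_fused + 1.0
prob_fused = alpha_fused / alpha_fused.sum(axis=-1, keepdims=True)
print("fused uncertainty:", u_fused.ravel())
print("fused probabilities:", prob_fused.ravel())
```

In this formulation a modality with weak evidence carries a large uncertainty mass, so its beliefs are automatically down-weighted in the fused opinion, which is the behavior the abstract refers to as uncertainty-guided fusion.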
