3.8 Proceedings Paper

Attention on Classification for Fire Segmentation

Publisher

IEEE
DOI: 10.1109/ICMLA52953.2021.00103

Keywords

fire detection; semantic segmentation; deep convolutional neural network; multitask learning

Funding

  1. VOAMAIS [PTDC/EEI-AUT/31172/2017, 02/SAICT/2017/31172]
  2. FIREFRONT [PCIF/SSI/0096/2017]
  3. LARSyS - FCT Project [UIDB/50009/2020]
  4. Fundação para a Ciência e a Tecnologia [PTDC/EEI-AUT/31172/2017] Funding Source: FCT

Abstract

The study introduces a Convolutional Neural Network (CNN) for joint classification and segmentation of fire in images, using spatial self-attention and channel attention to improve fire segmentation performance.
Detection and localization of fire in images and videos are important in tackling fire incidents. Although semantic segmentation methods can indicate the location of fire pixels in an image, their predictions are local and often fail to account for global information about the presence of fire in the image, which is implicit in the image-level labels. We propose a Convolutional Neural Network (CNN) for joint classification and segmentation of fire in images that improves fire segmentation performance. We use a spatial self-attention mechanism to capture long-range dependencies between pixels, and a new channel attention module that uses the classification probability as an attention weight. The network is jointly trained for both segmentation and classification, outperforming single-task image segmentation methods and previous methods proposed for fire segmentation.
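A minimal sketch of how the image-level classification probability might act as an attention weight that gates the segmentation features, assuming a PyTorch-style implementation. The module name `ClassGatedSegmentationHead`, the loss weight `alpha`, the single fire class, and the 1x1 segmentation head are illustrative assumptions, not the authors' exact architecture; the spatial self-attention branch is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassGatedSegmentationHead(nn.Module):
    """Joint classification / segmentation head (illustrative sketch).

    The fire probability from the classification branch gates the
    segmentation features, so the predicted mask is suppressed when the
    classifier sees no fire in the image.
    """

    def __init__(self, in_channels: int = 256):
        super().__init__()
        # Classification branch: global pooling -> single fire/no-fire logit.
        self.classifier = nn.Linear(in_channels, 1)
        # Segmentation branch: 1x1 conv producing a per-pixel fire logit.
        self.seg_conv = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) backbone features.
        pooled = feats.mean(dim=(2, 3))              # (B, C) global average pooling
        cls_logit = self.classifier(pooled)          # (B, 1) image-level logit
        p_fire = torch.sigmoid(cls_logit)            # (B, 1) fire probability

        # Use the classification probability as an attention weight on the features.
        gated = feats * p_fire.view(-1, 1, 1, 1)     # broadcast over C, H, W
        seg_logits = self.seg_conv(gated)            # (B, 1, H, W) per-pixel logits
        return cls_logit, seg_logits


def joint_loss(cls_logit, seg_logits, cls_target, seg_target, alpha: float = 0.5):
    """Weighted sum of image-level and pixel-level BCE losses (assumed form)."""
    loss_cls = F.binary_cross_entropy_with_logits(cls_logit, cls_target)
    loss_seg = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    return alpha * loss_cls + (1 - alpha) * loss_seg
```

Training both losses jointly is what lets the image-level label inject global "is there fire at all" information into the pixel-level prediction.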

