Article

Gas Detection and Identification Using Multimodal Artificial Intelligence Based Sensor Fusion

Journal

APPLIED SYSTEM INNOVATION
Volume 4, Issue 1, Pages: -

Publisher

MDPI
DOI: 10.3390/asi4010003

Keywords

convolutional neural network; early fusion; gas detection; long short-term memory; multimodal data

Funding

  1. Symbiosis International (Deemed University) [SIU/SCRI/MRPAPPROVAL//2018/1769]

Abstract

With rapid industrialization and technological advancement, innovative engineering technologies that are cost-effective, faster, and easier to implement are essential. One area of concern is the rising number of accidents caused by gas leaks at coal mines, chemical plants, home appliances, etc. In this paper we propose a novel approach to detecting and identifying gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, and thus evade normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in many real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two sensors: a 7-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied: the network architecture consists of a feature extraction module for each modality, the outputs of which are fused by a merge layer followed by a dense layer that produces a single output identifying the gas. We obtained a testing accuracy of 96% for the fused model, compared with accuracies of 82% for the individual model based on gas sensor data (LSTM) and 93% for the model based on thermal image data (CNN). The results demonstrate that fusing multiple sensors and modalities outperforms a single sensor.
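To make the described architecture concrete, the sketch below shows one plausible Keras implementation of the early-fusion network: an LSTM branch for the gas sensor time series, a small CNN branch for the thermal images, a concatenation (merge) layer, and a dense head with a four-way softmax. All input shapes, layer widths, and training settings are assumptions not given in the abstract, so this is an illustrative sketch rather than the authors' exact model.

import tensorflow as tf
from tensorflow.keras import layers, Model

# Gas-sensor branch: LSTM over a window of readings from the 7-sensor array.
# The window length (20 timesteps) is an assumption for illustration.
gas_in = layers.Input(shape=(20, 7), name="gas_series")
gas_feat = layers.LSTM(64)(gas_in)

# Thermal-image branch: small CNN feature extractor.
# A 64x64 single-channel input is assumed; the paper does not state the image size.
img_in = layers.Input(shape=(64, 64, 1), name="thermal_image")
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
img_feat = layers.Flatten()(x)

# Early fusion: concatenate the per-modality features (merge layer),
# then a dense layer and a 4-way softmax, one unit per gas class.
merged = layers.concatenate([gas_feat, img_feat])
h = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(4, activation="softmax", name="gas_class")(h)

model = Model(inputs=[gas_in, img_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()

In this setup the model would be trained by passing paired inputs, e.g. model.fit([gas_windows, thermal_images], one_hot_labels), so that both modalities contribute to a single classification decision.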
