Article

O-Net: Dangerous Goods Detection in Aviation Security Based on U-Net

Journal

IEEE ACCESS
Volume 8, Pages 206289-206302

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/ACCESS.2020.3037719

Keywords

X-ray imaging; Feature extraction; Image segmentation; Search problems; Convolution; Image recognition; Explosives; Artificial intelligence security system; aviation security; detection algorithm; image segmentation; U-Net; X-ray detection

Funding

  1. National Research Foundation of Korea (NRF) - Korea Government (Ministry of Science and ICT) [NRF-2020R1F1A1076812]

Aviation security X-ray equipment currently relies on primary screening, in which a screener must re-examine baggage or a person to identify a target object among overlapping objects. Advances in computer vision and deep learning can be applied to improve the accuracy of identifying the most dangerous goods, guns and knives, in X-ray images of baggage. Artificial intelligence-based aviation security X-ray screening enables high-speed detection of target objects while reducing both the overall search duration and the load on the screener. The overlapping problem was mitigated by feeding the network the raw RGB X-ray images together with grayscale conversions of the same images. An O-Net structure was designed as an improvement on U-Net through experiments with various learning rates and dense/depth-wise configurations: two encoders accommodate the different input image types, and two decoders maximize the output performance of the neural network. In addition, we propose U-Net-style segmentation, via the concept of a confidence score, to delineate target objects more clearly than the bounding-box (Bbox) output of You Only Look Once (YOLO). A comparative analysis against basic segmentation models, namely Fully Convolutional Networks (FCN), U-Net, and Segmentation Networks (SegNet), on the major segmentation performance indicators, pixel accuracy and mean intersection over union (m-IoU), showed that O-Net improved average pixel accuracy by 5.8%, 2.26%, and 5.01% and m-IoU by 43.1%, 9.84%, and 23.31%, respectively. Moreover, the accuracy of O-Net was 6.56% higher than that of U-Net, demonstrating the superiority of the O-Net architecture.
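The comparison above rests on two standard segmentation metrics, pixel accuracy and mean intersection over union (m-IoU). A minimal sketch of how these are typically computed on class-index masks follows; the toy arrays and class labels are illustrative assumptions, not data from the paper:

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == target).mean())

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union, averaged over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both masks: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Hypothetical 3x4 masks: 0 = background, 1 = gun, 2 = knife
target = np.array([[0, 0, 1, 1],
                   [0, 2, 2, 1],
                   [0, 2, 2, 0]])
pred   = np.array([[0, 0, 1, 1],
                   [0, 2, 1, 1],
                   [0, 2, 2, 0]])

print(pixel_accuracy(pred, target))            # 11 of 12 pixels match
print(mean_iou(pred, target, num_classes=3))
```

Pixel accuracy rewards overall agreement, while m-IoU penalizes per-class overlap errors more heavily, which is why the reported m-IoU gains are larger than the pixel-accuracy gains.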

