Journal
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
Volume 56, Issue 1, Pages 547-558
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2017.2751461
Keywords
Buried object detection; feature extraction; ground-penetrating radar; image classification; object detection; radar imaging
Funding
- U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate through Army Research Office [W909MY-11-R-0001]
Forward-looking ground-penetrating radar (FLGPR) has recently been investigated as a remote-sensing modality for buried target detection (e.g., landmines). In this context, raw FLGPR data are beamformed into images, and computerized algorithms are then applied to automatically detect subsurface buried targets. Most existing algorithms are supervised, meaning that they are trained to discriminate between labeled target and nontarget imagery, usually based on features extracted from the imagery. A large number of features have been proposed for this purpose; however, thus far it is unclear which are the most effective. The first goal of this paper is to provide a comprehensive comparison of detection performance using existing features on a large collection of FLGPR data. Fusion of the decisions resulting from processing each feature is also considered. The second goal of this paper is to investigate two modern feature learning approaches from the object recognition literature, the bag-of-visual-words and the Fisher vector, for FLGPR processing. The results indicate that the new feature learning approaches lead to the best performing FLGPR algorithm. The results also show that fusion between existing features and new features yields no additional performance improvements.
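As context for the abstract, the bag-of-visual-words encoding it mentions can be sketched as follows: local descriptors extracted from an image patch are assigned to the nearest codeword of a learned codebook, and the patch is represented by the normalized histogram of assignments. This is a minimal illustrative sketch, not the authors' implementation; the codebook and descriptors below are random placeholders standing in for quantities learned from FLGPR imagery.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Encode a set of local descriptors as an L1-normalized
    bag-of-visual-words histogram over a learned codebook."""
    # Assign each descriptor to its nearest codeword (Euclidean distance).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    assignments = d2.argmin(axis=1)
    # Count assignments per codeword and normalize to a distribution.
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))      # placeholder: 8 codewords, 16-D descriptors
descriptors = rng.normal(size=(50, 16))  # placeholder: 50 descriptors from one patch
h = bovw_histogram(descriptors, codebook)
print(h.shape, round(float(h.sum()), 6))  # (8,) 1.0
```

The Fisher vector generalizes this idea by replacing the hard histogram with gradient statistics of the descriptors under a Gaussian mixture model, which typically yields a richer, higher-dimensional encoding.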
Authors