Proceedings Paper

Drone segmentation and orientation detection using a SPAD array camera

Publisher

SPIE-INT SOC OPTICAL ENGINEERING
DOI: 10.1117/12.2618748

Keywords

Drone pose; Lidar; SPAD; CNN

Funding

  1. Defence Science Technologies Laboratory [Dstlx-1000147352, Dstlx-1000147844]
  2. EPSRC [EP/T00097X/1, EP/S026428/1]

Abstract

The development of single-photon avalanche diode (SPAD) arrays for time-of-flight imaging systems has enabled the application of 3D imaging to drone identification, orientation estimation, and segmentation. By combining the imaging capability of SPAD sensors with the classification capabilities of convolutional neural networks, drone pose in flight can be determined with a prediction accuracy of over 90% after training.
The recent development of single-photon avalanche diode (SPAD) arrays as imaging sensors with both picosecond binning capabilities and single-photon sensitivity has led to the rapid development of time-of-flight imaging systems. When used in conjunction with a synchronised light source, these sensors produce a 3D image. Here, we apply this 3D imaging ability to the problem of drone identification, orientation, and segmentation. The proliferation of semi-autonomous aerial multi-copters, i.e. drones, has raised concerns over the ability of existing aerial detection systems to accurately characterise such vehicles. Here, we fuse the 3D imaging of SPAD sensors with the classification capabilities of a bespoke convolutional neural network (CNN) into a system capable of determining drone pose in flight. To overcome the lack of publicly available training data, we generate a photo-realistic dataset to enable the training of our network. After training, we are able to predict the roll, pitch, and yaw of several different drone types with an accuracy greater than 90%.
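As context for the abstract's claim that picosecond time binning with a synchronised light source yields a 3D image, the sketch below shows the standard conversion from per-pixel photon-arrival histograms to a depth map. This is not code from the paper; the array size, bin count, and bin width are illustrative assumptions, and the paper's actual processing and CNN are not reproduced here.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def histograms_to_depth(hist, bin_width_s):
    """Convert per-pixel time-of-flight histograms to a depth map.

    hist: array of shape (H, W, n_bins) -- photon counts per time bin
    bin_width_s: temporal width of each histogram bin, in seconds
    Returns depth in metres, taking the peak bin as the return time.
    """
    peak_bin = np.argmax(hist, axis=-1)    # most likely return time per pixel
    t_round_trip = peak_bin * bin_width_s  # round-trip time of flight
    return C * t_round_trip / 2.0          # halve: light travels out and back

# Illustrative example: a 2x2 pixel "array" with a return in bin 100,
# assuming 50 ps bins (values chosen for illustration only)
hist = np.zeros((2, 2, 256))
hist[..., 100] = 10
depth = histograms_to_depth(hist, 50e-12)
# 100 bins x 50 ps = 5 ns round trip, i.e. roughly 0.75 m
```

In a real SPAD lidar pipeline the peak-picking step is typically preceded by background subtraction and sub-bin interpolation, but the depth relation d = c·Δt/2 above is the core of how a time-binned sensor produces the 3D images the paper feeds to its CNN.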

