Article

CoHOG: A Light-Weight, Compute-Efficient, and Training-Free Visual Place Recognition Technique for Changing Environments

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 5, Issue 2, Pages 1835-1842

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2020.2969917

Keywords

SLAM; visual place recognition; autonomous vehicle navigation; computer vision for automation

Funding

  1. UK Engineering and Physical Sciences Research Council [EP/R02572X/1, EP/P017487/1]
  2. National Centre for Nuclear Robotics Flexible Partnership Fund
  3. EPSRC [EP/R02572X/1, EP/P017487/1] Funding Source: UKRI

Abstract

This letter presents a novel, compute-efficient, and training-free approach based on the Histogram-of-Oriented-Gradients (HOG) descriptor for achieving state-of-the-art performance-per-compute-unit in Visual Place Recognition (VPR). The approach (namely CoHOG) is inspired by the convolutional scanning and region-based feature extraction employed by Convolutional Neural Networks (CNNs). By using image entropy to extract regions of interest (ROI) and performing regional-convolutional descriptor matching, the technique achieves successful place recognition in changing environments. Matching performance is reported on viewpoint- and appearance-variant public VPR datasets, at lower RAM commitment, zero training requirements, and up to 20 times lower feature encoding time compared with state-of-the-art neural networks. The letter also discusses CoHOG's image retrieval time and the effect of its parametric variation on place matching performance and encoding time.
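The pipeline the abstract describes — entropy-based ROI selection followed by region-wise HOG descriptor matching — can be sketched in plain NumPy. This is a simplified illustration under assumed parameters (cell size, bin counts, entropy threshold), not the authors' released implementation; all function names here are hypothetical.

```python
import numpy as np

def patch_entropy(patch, bins=16):
    """Shannon entropy of a grayscale patch's intensity histogram (values in [0, 1])."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def hog_descriptor(patch, n_bins=8):
    """L2-normalized histogram of gradient orientations for one patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    idx = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    desc = np.zeros(n_bins)
    np.add.at(desc, idx.ravel(), mag.ravel())        # magnitude-weighted voting
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

def describe(image, cell=8, keep_frac=0.5):
    """Split the image into cells, keep the highest-entropy cells as ROIs,
    and return (cell-coordinate, descriptor) pairs."""
    H, W = image.shape
    cells = []
    for r in range(0, H - cell + 1, cell):
        for c in range(0, W - cell + 1, cell):
            p = image[r:r + cell, c:c + cell]
            cells.append((patch_entropy(p), (r, c), hog_descriptor(p)))
    cells.sort(key=lambda t: -t[0])                  # most informative regions first
    keep = cells[:max(1, int(len(cells) * keep_frac))]
    return [(rc, d) for _, rc, d in keep]

def match_score(query, reference):
    """Convolution-like regional matching: every query ROI descriptor is
    compared against all reference ROIs and the best cosine similarity
    is kept; the mean over query ROIs is the image-level score."""
    ref_mat = np.stack([d for _, d in reference])
    best = [float((ref_mat @ qd).max()) for _, qd in query]
    return float(np.mean(best))
```

Matching every query region against all reference regions (rather than only the same grid position) is what gives the method its viewpoint tolerance: an ROI that shifted across the frame can still find its counterpart.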
