Article

Deep Hashing Learning for Visual and Semantic Retrieval of Remote Sensing Images

Journal

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
Volume 59, Issue 11, Pages 9661-9672

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TGRS.2020.3035676

Keywords

Feature extraction; Semantics; Remote sensing; Image retrieval; Visualization; Sensors; Deep learning; Classification; hashing learning; retrieval

Funding

  1. National Natural Science Foundation of China [61890962, 61520106001]
  2. Science and Technology Plan Project Fund of Hunan Province [CX2018B171, 2017RS3024, 2018TP1013]
  3. Science and Technology Talents Program of the Hunan Association for Science and Technology [2017TJ-Q09]


The article introduces a novel deep hashing convolutional neural network (DHCNN) for simultaneous image retrieval and classification, achieving state-of-the-art performance in both tasks.
Driven by the urgent demand for managing remote sensing big data, large-scale remote sensing image retrieval (RSIR) has attracted increasing attention in the remote sensing field. In general, existing retrieval methods can be regarded as visual-based retrieval approaches that search a database and return a set of images similar to a given query image. Although these retrieval methods have delivered good results, one question still needs to be addressed: can we also obtain accurate semantic labels of the returned similar images to further help analyze and process the imagery? To this end, in this article, we redefine the image retrieval problem as visual and semantic retrieval of images. Specifically, we propose a novel deep hashing convolutional neural network (DHCNN) that retrieves similar images and classifies their semantic labels simultaneously in a unified framework. In more detail, a convolutional neural network (CNN) is used to extract high-dimensional deep features. A hash layer is then inserted into the network to transform the deep features into compact hash codes. In addition, a fully connected layer with a softmax function is applied to the hash layer to generate the probability distribution over classes. Finally, a loss function is designed to jointly consider the label loss of each image and the similarity loss of image pairs. Experimental results on three remote sensing data sets demonstrate that the proposed method achieves state-of-the-art retrieval and classification performance.
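
As a rough illustration of the pipeline the abstract describes (CNN features, a hash layer producing compact codes, a softmax classification head, and a joint label-plus-similarity loss), the following is a minimal sketch in PyTorch. The ResNet-18 backbone, 64-bit code length, contrastive-style pairwise term, margin, and weight alpha are assumptions made for illustration, not the authors' exact design.

```python
# Minimal PyTorch sketch of the DHCNN idea described above.
# Backbone, code length, margin, and alpha are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class DHCNNSketch(nn.Module):
    def __init__(self, num_classes: int, hash_bits: int = 64):
        super().__init__()
        backbone = models.resnet18(weights=None)             # CNN feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                           # keep high-dimensional deep features
        self.backbone = backbone
        self.hash_layer = nn.Linear(feat_dim, hash_bits)      # deep features -> compact codes
        self.classifier = nn.Linear(hash_bits, num_classes)   # softmax head on the hash layer

    def forward(self, x):
        feats = self.backbone(x)
        codes = torch.tanh(self.hash_layer(feats))            # relaxed binary codes in (-1, 1)
        logits = self.classifier(codes)                       # class scores for the softmax
        return codes, logits


def joint_loss(codes, logits, labels, margin: float = 2.0, alpha: float = 0.5):
    """Label loss (cross-entropy) plus a pairwise similarity loss on the hash codes."""
    label_loss = F.cross_entropy(logits, labels)
    # Pairwise term over the mini-batch: pull codes of same-label images together,
    # push codes of different-label images at least `margin` apart.
    dists = torch.cdist(codes, codes)                         # Euclidean distances between codes
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    sim_loss = (same * dists.pow(2)
                + (1.0 - same) * F.relu(margin - dists).pow(2)).mean()
    return label_loss + alpha * sim_loss
```

At retrieval time, the relaxed codes would typically be binarized (e.g., with the sign function) and compared by Hamming distance against the database, while the softmax head supplies the semantic label of each returned image.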
