Article

An Unsupervised Convolutional Feature Fusion Network for Deep Representation of Remote Sensing Images

Journal

IEEE GEOSCIENCE AND REMOTE SENSING LETTERS
Volume 15, Issue 1, Pages 23-27

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/LGRS.2017.2767626

Keywords

Convolutional neural network (CNN); feature fusion network; sparsity; unsupervised deep learning

Funding

  1. National Natural Science Foundation of China [61671456, 61271439]
  2. Foundation for the Author of National Excellent Doctoral Dissertation of China [201243]
  3. Program for New Century Excellent Talents in University [NCET-13-0164]

Abstract

Unsupervised learning of a convolutional neural network (CNN) is a feasible way to represent and classify remote sensing images, for which labeling the observed data to prepare training samples is highly expensive and time consuming. In this letter, we propose an unsupervised convolutional feature fusion network that yields an easy-to-train yet effective CNN representation of remote sensing images. Its efficiency and effectiveness stem from two aspects. First, the proposed method trains a deep CNN by unsupervised learning of each CNN layer in a greedy layer-wise manner, which makes training relatively easy and efficient. Second, the feature fusion strategy in the proposed network exploits both the information within individual layers and the important interactions between different layers. As a result, the proposed network needs only a few layers to obtain results comparable to, or even better than, those of very deep networks. Experiments on unsupervised deep representation and classification of remote sensing images demonstrate the efficiency and effectiveness of the proposed method.
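
The two ingredients described above, greedy layer-wise unsupervised training and multi-layer feature fusion, can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: the PCA-based patch filter learning (in the spirit of PCANet), the ReLU nonlinearity, global average pooling, concatenation-based fusion, and all layer sizes and function names are illustrative assumptions chosen only to show the data flow.

Sketch (Python):

import numpy as np

def extract_patches(image, k):
    """Collect every k x k patch of a 2-D array as a row of a matrix."""
    h, w = image.shape
    return np.asarray([image[i:i + k, j:j + k].ravel()
                       for i in range(h - k + 1)
                       for j in range(w - k + 1)])

def learn_filters(maps, k=5, n_filters=4):
    """Unsupervised filter learning for one layer: leading principal
    components of mean-removed patches (PCA via SVD)."""
    patches = np.vstack([extract_patches(m, k) for m in maps])
    patches = patches - patches.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(patches, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, k, k)

def convolve_valid(image, filt):
    """'Valid' 2-D filtering implemented as a patch-matrix product."""
    k = filt.shape[0]
    h, w = image.shape
    return (extract_patches(image, k) @ filt.ravel()).reshape(h - k + 1, w - k + 1)

def layer_forward(maps, filters):
    """One layer: filter every input map with every filter, then ReLU."""
    return [np.maximum(convolve_valid(m, f), 0.0) for m in maps for f in filters]

def fused_descriptor(image, filter_stack):
    """Feature fusion: globally pool each layer's maps and concatenate
    the pooled responses of all layers into one descriptor."""
    maps, pooled = [image], []
    for filters in filter_stack:
        maps = layer_forward(maps, filters)
        pooled.append([m.mean() for m in maps])   # global average pooling per layer
    return np.concatenate(pooled)

# Toy usage: greedy layer-wise training on random "images", then fusion.
rng = np.random.default_rng(0)
images = [rng.standard_normal((32, 32)) for _ in range(4)]

layer1 = learn_filters(images, k=5, n_filters=4)            # trained on raw images
maps1 = layer_forward(images, layer1)
layer2 = learn_filters(maps1, k=5, n_filters=4)             # trained on layer-1 outputs

print(fused_descriptor(images[0], [layer1, layer2]).shape)  # (4 + 16,) = (20,)

The point of the sketch is the data flow rather than the specific operators: each layer's filters are learned without labels from the previous layer's outputs, and the final descriptor keeps pooled responses from every layer instead of the last layer alone, which is what allows a shallow fused network to remain competitive.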

