Article

Deep object recognition across domains based on adaptive extreme learning machine

Journal

NEUROCOMPUTING
Volume 239, Pages 194-203

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2017.02.016

Keywords

Deep learning; Image classification; Support vector machine; Extreme learning machine; Object recognition

Funding

  1. National Natural Science Foundation of China [61401048]

Abstract

Deep learning with convolutional neural networks (CNNs) has proven highly effective for feature extraction and representation of images. For image classification problems, this work explores the capability of the extreme learning machine (ELM) on high-level deep features of images. Additionally, motivated by the biological learning mechanism of ELM, an adaptive extreme learning machine (AELM) method is proposed for handling cross-task (cross-domain) learning problems while preserving ELM's randomized nature and high efficiency. The proposed AELM extends ELM from single-task to cross-task learning by introducing a new error term and a graph-Laplacian-based manifold regularization term into the objective function. We discuss nearest-neighbor, support vector machine, and extreme learning machine classifiers for image classification under deep convolutional activation feature representations. Specifically, we adopt four benchmark object recognition datasets drawn from multiple sources with domain bias to evaluate the different classifiers. The deep features of the object datasets are obtained from a CNN with five convolutional layers and three fully-connected layers, well trained on ImageNet. Experiments demonstrate that the proposed AELM is competitive and effective in both single-domain and cross-domain recognition tasks. (C) 2017 Elsevier B.V. All rights reserved.
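As a rough illustration of the feature pipeline the abstract describes, the sketch below extracts 4096-dimensional activations from the second fully-connected layer of a pretrained AlexNet. It uses torchvision's AlexNet as a stand-in for the paper's five-conv/three-FC ImageNet network; the authors' exact model and layer choice are assumptions here, as is the function name `extract_fc7` (the preprocessing constants are the standard ImageNet values).

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for AlexNet-family networks.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

@torch.no_grad()
def extract_fc7(path):
    """Return the 4096-d activation of the second fully-connected layer."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    f = model.features(x)                  # five convolutional layers
    f = torch.flatten(model.avgpool(f), 1)
    return model.classifier[:6](f)         # stop after fc7 and its ReLU

The AELM objective itself (the new error term plus the Laplacian-graph manifold regularizer) is not spelled out in the abstract, so the following is only a generic Laplacian-regularized ELM solved in closed form: minimize ||H*beta - T||^2 + lam_l2*||beta||^2 + lam_lap*tr(beta' H' L H beta). The hyperparameter names and the k-NN/RBF graph construction are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def knn_rbf_laplacian(X, k=5, sigma=1.0):
    """Graph Laplacian L = D - W from a k-NN graph with RBF edge weights."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                 # k nearest, excluding self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)                                # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W

def train_laplacian_elm(X, T, n_hidden=1000, lam_l2=1e-2, lam_lap=1e-3, seed=0):
    """X: (n, d) deep features; T: (n, c) one-hot targets."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], n_hidden))    # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                             # hidden-layer outputs
    L = knn_rbf_laplacian(X)
    A = H.T @ H + lam_l2 * np.eye(n_hidden) + lam_lap * (H.T @ L @ H)
    beta = np.linalg.solve(A, H.T @ T)                    # closed-form output weights
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    return np.argmax(np.tanh(X @ W_in + b) @ beta, axis=1)

Because the output weights come from a single linear system rather than iterative training, this retains the randomization and efficiency the abstract attributes to ELM; the manifold term pulls the decision values of graph-neighboring samples together, which is the usual motivation for such a regularizer in cross-domain settings.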
