Article

Feature learning for stacked ELM via low-rank matrix factorization

Journal

NEUROCOMPUTING
Volume 448, Pages 82-93

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.03.110

Keywords

Feature learning; Low-rank matrix factorization; Hidden feature layer; ELM-AE

This paper proposes an improved ELM-AE architecture that uses low-rank matrix factorization to learn optimal low-dimensional features. This allows the hidden-layer dimension to be set arbitrarily and enhances the nonlinear representation ability of the learned features.
Abstract

The extreme-learning-machine-based auto-encoder (ELM-AE) is regarded as a useful architecture with fast learning speed and general approximation ability, and stacked ELMs are used to build efficient and effective deep learning networks. However, features learned by conventional ELM-AEs suffer from weak nonlinear representation ability and from random factors in the feature projection. This paper therefore proposes an improved ELM-AE architecture that utilizes low-rank matrix factorization to learn optimal low-dimensional features. Two advantages are obtained over conventional ELM-AEs. First, the dimensionality of the hidden layer in the ELM-AE can be set arbitrarily; for example, a higher-dimensional hidden layer reduces the random effect in feature learning and strengthens the representation ability of the features. Second, the nonlinear ability of the features is enhanced, since they are learned directly from the nonlinear outputs of the hidden layer. Finally, comparison experiments on numerical and image datasets verify the superior performance of the proposed ELM-AE.
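To make the contrast in the abstract concrete, below is a minimal NumPy sketch, not the authors' implementation: a conventional ELM-AE whose features are a linear projection of the input by the transposed output weights, versus features taken from a rank-k factorization of the nonlinear hidden-layer outputs. The function names, the sigmoid activation, and the use of truncated SVD as the low-rank factorization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_outputs(X, n_hidden, rng):
    """Nonlinear hidden-layer outputs H = g(XW + b) with random,
    untrained input weights W and biases b (the ELM ingredient)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid activation

def conventional_elm_ae_features(X, n_hidden, lam=1e-3):
    """Conventional ELM-AE sketch: solve ridge-regularized output
    weights beta that reconstruct X from H, then project X by beta^T.
    The random W is the 'random factor' the abstract criticizes, and
    the resulting features are only a linear map of X."""
    H = hidden_outputs(X, n_hidden, rng)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ X)
    return X @ beta.T  # (n_samples, n_hidden) features

def lowrank_hidden_features(X, n_hidden, k):
    """Low-rank alternative in the spirit of the paper: factorize the
    nonlinear hidden outputs directly, H ~ U_k S_k V_k^T (truncated
    SVD assumed as the rank-k factorization), and keep U_k S_k as
    k-dimensional features. n_hidden can be set freely, e.g. larger
    than the input dimension, while k stays small."""
    H = hidden_outputs(X, n_hidden, rng)
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :k] * s[:k]  # (n_samples, k) features

# Toy usage: 200 samples of 30-dim data.
X = rng.standard_normal((200, 30))
F_conv = conventional_elm_ae_features(X, n_hidden=30)
F_low = lowrank_hidden_features(X, n_hidden=100, k=10)
print(F_conv.shape, F_low.shape)  # (200, 30) (200, 10)
```

Note how the second function decouples the hidden width (100) from the feature dimension (10), mirroring the paper's claim that the hidden layer can be over-sized to dampen randomness while the features stay compact.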
