Journal
PATTERN ANALYSIS AND APPLICATIONS
Volume 22, Issue 3, Pages 1221-1231
Publisher
SPRINGER
DOI: 10.1007/s10044-018-0697-0
Keywords
Random projection; Neural networks; High-dimensional data; Sparse data
Funding
- Polish National Science Centre [DEC-2013/09/B/ST6/01549]
- HPC Infrastructure for Grand Challenges of Science and Engineering Project - European Regional Development Fund under the Innovative Economy Operational Programme
- PL-Grid Infrastructure
Training deep neural networks (DNNs) on high-dimensional data with no spatial structure poses a major computational problem. It implies a network architecture with a huge input layer, which greatly increases the number of weights, often making the training infeasible. One solution to this problem is to reduce the dimensionality of the input space to a manageable size, and then train a deep network on the lower-dimensional representation. Here, we focus on performing the dimensionality reduction step by randomly projecting the input data into a lower-dimensional space. Conceptually, this is equivalent to adding a random projection (RP) layer in front of the network. We study two variants of RP layers: one where the weights are fixed, and one where they are fine-tuned during network training. We evaluate the performance of DNNs with input layers constructed using several recently proposed RP schemes, namely the Gaussian, Achlioptas', Li's, subsampled randomized Hadamard transform (SRHT) and Count Sketch-based constructions. Our results demonstrate that DNNs with an RP layer achieve competitive performance on high-dimensional real-world datasets. In particular, we show that SRHT and Count Sketch-based projections provide the best balance between the projection time and the network performance.
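To illustrate the idea, here is a minimal NumPy sketch of three of the RP schemes named in the abstract (Gaussian, Achlioptas' and Count Sketch), applied as a fixed projection in front of a network. The scaling constants follow the standard formulations of these schemes; the dimensions, variable names and batch setup are illustrative assumptions, not the paper's exact experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 10_000, 128           # input dimensionality, projected dimensionality
n = 32                       # a batch of samples
X = rng.random((n, d))       # stand-in for high-dimensional input data

# Gaussian RP: i.i.d. N(0, 1/k) entries; preserves pairwise distances with
# high probability (Johnson-Lindenstrauss lemma).
R_gauss = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
X_gauss = X @ R_gauss        # (n, k) low-dimensional representation

# Achlioptas' sparse RP: entries in {+1, 0, -1} with probabilities
# {1/6, 2/3, 1/6}, scaled by sqrt(3/k).
R_ach = rng.choice([1.0, 0.0, -1.0], p=[1/6, 2/3, 1/6], size=(d, k))
X_ach = X @ (R_ach * np.sqrt(3.0 / k))

# Count Sketch: hash each input feature to one of k buckets with a random
# sign, so projecting costs O(nnz(X)) rather than a dense matrix multiply.
h = rng.integers(0, k, size=d)        # bucket index per feature
s = rng.choice([-1.0, 1.0], size=d)   # random sign per feature
X_cs = np.zeros((n, k))
np.add.at(X_cs.T, h, (X * s).T)       # scatter-add signed features into buckets

print(X_gauss.shape, X_ach.shape, X_cs.shape)
```

In the fixed-weights variant described above, any of these (n, k) representations would then be fed into the DNN in place of the raw (n, d) input; in the fine-tuned variant, the projection matrix itself would become a trainable layer.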
Authors