Journal
2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)
Volume -, Issue -, Pages 2290-2297
Publisher
IEEE
Funding
- Australian Research Fellowship from the Australian Research Council [DP1093425]
- Future Fellowship from the Australian Research Council [FT110101098]
We show that simple linear classification of pairwise products of convolutional features achieves near state-of-the-art performance on some standard labelled image databases. Specifically, we found test classification error rates on the MNIST handwritten digits image database of under 0.5%, and achieved under 19% and under 44% error rates on the CIFAR-10 and CIFAR-100 RGB image databases. Since the number of weights in such a classifier grows with the square of the number of features, we discuss how such a pairwise-products classifier can be implemented in a single-hidden-layer feedforward network (SLFN) architecture whose hidden unit function is the simple quadratic nonlinearity: we call this a Quadratic Neural Network (QNN). We compare this method to setting the input weights of a QNN randomly, and find that optimal performance can be achieved provided the hidden layer is sufficiently large. This analysis provides insight into why 'extreme learning machines' can achieve classification performance equal to or better than that obtained with backpropagation training.
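The two approaches described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the data is random noise standing in for convolutional features, the regularisation constant and layer sizes are arbitrary, and a ridge-regularised least-squares readout is used as the linear classifier. The first part forms all pairwise products x_i * x_j explicitly; the second uses an ELM-style QNN whose hidden units apply the quadratic nonlinearity to random input projections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for convolutional features (sizes are illustrative).
n_samples, n_features, n_classes = 200, 8, 3
X = rng.standard_normal((n_samples, n_features))
y = rng.integers(0, n_classes, n_samples)
T = np.eye(n_classes)[y]          # one-hot targets
lam = 1e-3                        # ridge regularisation (assumed)

# --- Explicit pairwise-products classifier ---
# Expand each feature vector into all products x_i * x_j with i <= j,
# then fit a linear least-squares readout on the expanded features.
# The expansion has n_features * (n_features + 1) / 2 columns, so the
# number of weights grows with the square of the number of features.
iu = np.triu_indices(n_features)
X_pairs = (X[:, :, None] * X[:, None, :])[:, iu[0], iu[1]]
W_pairs = np.linalg.solve(X_pairs.T @ X_pairs + lam * np.eye(X_pairs.shape[1]),
                          X_pairs.T @ T)
pred_pairs = (X_pairs @ W_pairs).argmax(axis=1)

# --- QNN with random input weights (ELM-style) ---
# Hidden unit function is the simple quadratic nonlinearity h = (x @ W_in)**2.
# With a sufficiently large hidden layer, these features span the same
# space as the explicit pairwise products above.
n_hidden = 200
W_in = rng.standard_normal((n_features, n_hidden))
H = (X @ W_in) ** 2
W_qnn = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
pred_qnn = (H @ W_qnn).argmax(axis=1)

print(X_pairs.shape, pred_pairs.shape, pred_qnn.shape)
```

With 8 input features the explicit expansion has 36 pairwise-product columns; the QNN avoids materialising that expansion for large feature counts, since only the hidden-to-output weights are trained.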