Journal
NEURAL COMPUTATION
Volume 23, Issue 8, Pages 1935-1943
Publisher
MIT PRESS
DOI: 10.1162/NECO_a_00152
Funding
- DARPA [SyNAPSE HR0011-09-C-0002]
- Swartz Foundation
- Gatsby Foundation
- Rothschild Fellowship
- Brainpower for Israel foundation
- Swiss National Science Foundation [PBSKP3-133357]
Abstract
The perceptron is a simple supervised learning algorithm for training a linear classifier, and it has been analyzed and used extensively. The classifier separates the data into two groups using a decision hyperplane, with the margin between the data and the hyperplane determining the classifier's ability to generalize and its robustness to input noise. Exact results for the maximal size of the separating margin are known for specific input distributions, and bounds exist for arbitrary distributions, but both rely on lengthy statistical mechanics calculations carried out in the limit of infinite input size. Here we present a short analysis of perceptron classification using singular value decomposition. We provide a simple derivation of a lower bound on the margin and an explicit formula for the perceptron weights that converges to the optimal result for large separating margins.
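As background to the abstract, the following is a minimal sketch of the classical perceptron setting it describes: training a linear separator on labeled data and measuring the resulting margin, i.e. the minimal distance from any data point to the decision hyperplane. This is a generic illustration of the concepts, not the paper's SVD-based derivation; the toy data set and function names are assumptions for the example.

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Classical Rosenblatt perceptron: on each mistake, update w <- w + y_i x_i.

    X: (n_samples, n_features) inputs; y: labels in {+1, -1}.
    Stops early once the data are perfectly separated.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:  # misclassified (or on the hyperplane)
                w += yi * xi
                errors += 1
        if errors == 0:
            break
    return w

def margin(X, y, w):
    """Smallest signed distance y_i <w, x_i> / ||w|| over all data points.

    Positive iff every point is correctly classified; larger values mean
    greater robustness to input noise, as the abstract discusses.
    """
    return np.min(y * (X @ w)) / np.linalg.norm(w)

# Tiny linearly separable example in 2D (illustrative data, labels +1 / -1)
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
m = margin(X, y, w)
```

Note that the plain perceptron only finds *some* separating hyperplane; the paper's contribution concerns bounds on, and weights approaching, the *maximal* margin, which this sketch does not compute.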