Article

Learning viewpoint invariant perceptual representations from cluttered images

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2005.105

Keywords

computational models of vision; neural nets

Funding

  1. Engineering and Physical Sciences Research Council [GR/S81339/01] Funding Source: researchfish


In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.
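The "standard method" the abstract refers to — forming temporal associations across image sequences — is commonly realized as a Hebbian rule with a temporal trace on the output activity (a Földiák-style trace rule). The sketch below is a minimal illustration of that standard mechanism under assumed parameter names (`eta`, `delta`) and a toy one-hot encoding; it is not the paper's proposed modification, which addresses the cluttered, multi-object setting.

```python
import numpy as np

def trace_learn(sequences, n_inputs, eta=0.1, delta=0.8):
    """Hebbian learning with a temporal trace on the output activity.

    sequences : list of sequences, each a list of input vectors showing
                successive views of one transforming object
    delta     : trace decay in [0, 1); delta = 0 recovers plain Hebbian learning
    """
    w = np.full(n_inputs, 0.1)             # small uniform initial weights (assumed)
    for seq in sequences:
        y_trace = 0.0                      # reset the trace between objects
        for x in seq:
            y = max(float(w @ x), 0.0)     # rectified linear activation
            y_trace = (1.0 - delta) * y + delta * y_trace
            w += eta * y_trace * x         # bind current view to recent activity
    return w

# Toy demo: three one-hot "views" of one object, presented in isolation.
views = [np.eye(4)[i] for i in range(3)]
w = trace_learn([views] * 20, n_inputs=4)
# Weights for the three presented views strengthen; the unseen input stays at 0.1.
```

Because the trace carries activity from earlier views into each update, successive views of a transforming object become bound to a common response — the source of viewpoint invariance. In cluttered input, however, the trace would mix activity from co-occurring objects, which is the limitation the paper's modification is designed to overcome.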

