Article

From simple innate biases to complex visual concepts

Publisher

National Academy of Sciences
DOI: 10.1073/pnas.1207690109

Keywords

cognitive development; hand detection; unsupervised learning; visual cognition

Funding

  1. European Research Council (ERC)


Abstract

Early in development, infants learn to solve visual problems that are highly challenging for current computational methods. We present a model that deals with two fundamental problems in which the gap between computational difficulty and infant learning is particularly striking: learning to recognize hands and learning to recognize gaze direction. The model is shown a stream of natural videos and learns without any supervision to detect human hands by appearance and by context, as well as direction of gaze, in complex natural scenes. The algorithm is guided by an empirically motivated innate mechanism: the detection of "mover events" in dynamic images, that is, events in which a moving image region causes a stationary region to move or change after contact. Mover events provide an internal teaching signal, which is shown to be more effective than alternative cues and sufficient for the efficient acquisition of hand and gaze representations. The implications go beyond the specific tasks, by showing how domain-specific proto-concepts can guide the system to acquire meaningful concepts, which are significant to the observer but statistically inconspicuous in the sensory input.
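The mover-event cue described in the abstract can be pictured with a small sketch, not taken from the paper: using plain frame differencing, flag frames in which a region that was already moving comes into contact with a region that was static in the preceding interval but changes after the contact. The threshold values and the helper names (motion_mask, detect_mover_events, MOTION_THRESH, MIN_REGION_PX) are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: a crude frame-differencing detector for "mover
# events" (a moving region contacting a previously stationary region that
# then changes). Thresholds and names are assumptions, not the authors' method.
import numpy as np
from scipy import ndimage

MOTION_THRESH = 15.0   # assumed per-pixel change threshold (grayscale units)
MIN_REGION_PX = 50     # assumed minimum size for a candidate target region


def motion_mask(prev_frame, frame):
    """Pixels whose intensity changed between two consecutive grayscale frames."""
    return np.abs(frame.astype(float) - prev_frame.astype(float)) > MOTION_THRESH


def detect_mover_events(frames):
    """Return (t, mover_mask, target_region) triples for frames in which a
    moving region touches a region that was static before contact and
    changes after it. `frames` is a list of 2D grayscale arrays."""
    events = []
    for t in range(2, len(frames)):
        moving_now = motion_mask(frames[t - 1], frames[t])
        moving_before = motion_mask(frames[t - 2], frames[t - 1])

        mover_mask = moving_now & moving_before        # moving in both intervals
        target_mask = moving_now & ~moving_before      # static before, changing now

        # A mover event: some newly changing region is in contact with a mover.
        contact = ndimage.binary_dilation(mover_mask)
        target_labels, n_targets = ndimage.label(target_mask)
        for k in range(1, n_targets + 1):
            region = target_labels == k
            if region.sum() >= MIN_REGION_PX and (contact & region).any():
                events.append((t, mover_mask, region))
                break
    return events

In the paper itself the mover-event signal is extracted from natural video and used as the internal teaching signal for learning hand and gaze representations; the sketch above only conveys the contact-then-change structure of the cue.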

