Journal
IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 18, Issue 7, Pages 1512-1523
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2009.2019809
Keywords
Color naming; image annotation; image retrieval; probabilistic latent semantic analysis
Funding
- Marie Curie European Reintegration Grant
- European funded CLASS
- Spanish Ministry of Science [CSD2007-00018]
- Ramon y Cajal Program
Abstract
Color names are required in real-world applications such as image retrieval and image annotation. Traditionally, they are learned from a collection of labeled color chips. These color chips are labeled with color names by human test subjects within a well-defined experimental setup. However, naming colors in real-world images differs significantly from this experimental setting. In this paper, we investigate how color names learned from color chips compare to color names learned from real-world images. To avoid hand labeling real-world images with color names, we use Google Image search to collect a data set. Due to the limitations of Google Image search, this data set contains a substantial quantity of wrongly labeled data. We propose several variants of the probabilistic latent semantic analysis (PLSA) model to learn color names from this noisy data. Experimental results show that color names learned from real-world images significantly outperform color names learned from labeled color chips for both image retrieval and image annotation.
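The abstract does not spell out the paper's adapted PLSA variants, but the underlying model it builds on is standard PLSA fitted with EM. As a hedged illustration only (not the authors' method): treat each image as a "document", discretized color-histogram bins as "words", and color names as latent topics. A minimal sketch, assuming a plain count matrix and basic EM updates:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit a basic PLSA model with EM.

    counts   : (n_docs, n_words) word-count matrix, e.g. images x
               discretized color-histogram bins (hypothetical setup).
    n_topics : number of latent topics, here standing in for color names.

    Returns P(w|z) of shape (n_topics, n_words) and P(z|d) of shape
    (n_docs, n_topics). Generic PLSA only; the paper's noise-handling
    variants are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random initialization of the two conditional distributions.
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) ∝ P(z|d) P(w|z),
        # normalized over the topic axis.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: reweight by observed counts n(d,w).
        weighted = counts[:, None, :] * joint
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_w_z, p_z_d

# Tiny synthetic example: three "images" over three color bins.
counts = np.array([[10.0, 0.0, 2.0],
                   [0.0, 8.0, 1.0],
                   [9.0, 1.0, 0.0]])
p_w_z, p_z_d = plsa(counts, n_topics=2, n_iter=30)
```

After fitting, each row of `p_z_d` gives the mixture of latent color names assigned to an image, which is the quantity one would threshold or rank for annotation and retrieval.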