4.8 Article

Large Scale Visual Food Recognition

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2023.3237871

Keywords

Image recognition; Visualization; Task analysis; Benchmark testing; Representation learning; Training; Semantics; Food dataset; food recognition; large-scale datasets; fine-grained recognition

Abstract

Food recognition is important for food choice and intake, and can support various food-oriented vision tasks. In this paper, we introduce Food2K, the largest food recognition dataset, and propose a deep progressive region enhancement network for food recognition. Experimental results on Food2K demonstrate the effectiveness of the proposed method and its better generalization ability across various tasks. Food2K can be further explored for more food-relevant tasks and serve as a benchmark for large-scale fine-grained visual recognition.
Food recognition plays an important role in food choice and intake, which is essential to the health and well-being of humans. It is thus of importance to the computer vision community, and can further support many food-oriented vision and multimodal tasks, e.g., food detection and segmentation, and cross-modal recipe retrieval and generation. Unfortunately, while generic visual recognition has advanced remarkably thanks to released large-scale datasets, progress in the food domain still largely lags behind. In this paper, we introduce Food2K, the largest food recognition dataset, with 2,000 categories and over 1 million images. Compared with existing food recognition datasets, Food2K surpasses them in both categories and images by one order of magnitude, and thus establishes a new challenging benchmark for developing advanced models for food visual representation learning. Furthermore, we propose a deep progressive region enhancement network for food recognition, which mainly consists of two components, namely progressive local feature learning and region feature enhancement. The former adopts improved progressive training to learn diverse and complementary local features, while the latter utilizes self-attention to incorporate richer context at multiple scales into local features for further enhancement. Extensive experiments on Food2K demonstrate the effectiveness of our proposed method. More importantly, we have verified the better generalization ability of models trained on Food2K across various tasks, including food image recognition, food image retrieval, cross-modal recipe retrieval, and food detection and segmentation. Food2K can be further explored to benefit more food-relevant tasks, including emerging and more complex ones (e.g., nutritional understanding of food), and models trained on Food2K are expected to serve as backbones that improve the performance of more food-relevant tasks. We also hope Food2K can serve as a large-scale fine-grained visual recognition benchmark and contribute to the development of large-scale fine-grained visual analysis.
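The abstract only names the two components of the proposed network. The snippet below is a minimal, hypothetical sketch (in PyTorch) of how the region feature enhancement idea might look: self-attention over region tokens pooled from multiple backbone scales, injecting richer multi-scale context into each local feature. Class names, dimensions, and the fusion strategy are assumptions made for illustration and are not the authors' implementation.

```python
import torch
import torch.nn as nn


class RegionFeatureEnhancement(nn.Module):
    """Illustrative sketch only: enhance multi-scale local features with self-attention.

    Assumed interface: a list of backbone feature maps from different stages,
    e.g. with channel dims (512, 1024, 2048). Not the paper's PRENet code.
    """

    def __init__(self, dims=(512, 1024, 2048), embed_dim=512, num_heads=8):
        super().__init__()
        # Project each scale to a common embedding dimension.
        self.projections = nn.ModuleList(nn.Conv2d(d, embed_dim, kernel_size=1) for d in dims)
        # Self-attention lets every local region attend to regions from all scales,
        # incorporating richer multi-scale context into each local feature.
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, feature_maps):
        # feature_maps: list of tensors [B, C_i, H_i, W_i] from different network stages.
        tokens = []
        for proj, fmap in zip(self.projections, feature_maps):
            x = proj(fmap)                               # [B, E, H_i, W_i]
            tokens.append(x.flatten(2).transpose(1, 2))  # [B, H_i*W_i, E] region tokens
        tokens = torch.cat(tokens, dim=1)                # all regions from all scales
        enhanced, _ = self.attention(tokens, tokens, tokens)
        return self.norm(tokens + enhanced)              # residual connection, [B, N, E]


if __name__ == "__main__":
    # Toy usage with random tensors standing in for backbone stage outputs.
    maps = [torch.randn(2, 512, 28, 28), torch.randn(2, 1024, 14, 14), torch.randn(2, 2048, 7, 7)]
    out = RegionFeatureEnhancement()(maps)
    print(out.shape)  # torch.Size([2, 1029, 512])
```

The progressive local feature learning component described in the abstract (improved progressive training of diverse, complementary local features) would sit upstream of such a module and is not sketched here.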
