Proceedings Paper

Pixel-level Class-Agnostic Object Detection using Texture Quantization

Publisher

IEEE
DOI: 10.1109/SIBGRAPI55357.2022.9991762

Keywords

-

Funding

  1. National Council for Scientific and Technological Development - CNPq [309953/2019-7]
  2. Minas Gerais Research Foundation - FAPEMIG [PPM-00540-17]
  3. Coordination for the Improvement of Higher Education Personnel (CAPES) (Programa de Cooperação Acadêmica em Segurança Pública e Ciências Forenses) [88881.516265/2020-01]

Abstract

Object detection is a widely explored topic in computer vision research, largely because it is a prerequisite for almost any system that performs visual scene understanding or interpretation. Significant advances over the last 40 years have allowed the field to evolve from early template-matching techniques to modern deep detectors capable of detecting thousands of object classes with reasonable performance. Nonetheless, as approaches have improved, more challenging topics related to object detection have been proposed. Classic object detectors must be trained on every class that might appear in the testing phase; this is a problem in real-world scenarios, where the full domain of possible objects cannot be known in advance. Hence, the task of class-agnostic object detection has emerged, which detects objects without determining their classes. In this paper, we address this task using a convolutional network and texture gray-level quantization. Our results show that our model improves on the best baseline by 2.1 percentage points (p.p.) on objects that were not annotated in the training phase.
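The abstract does not spell out the quantization step, but a common way to realize gray-level texture quantization is to bin 8-bit intensities into a small number of uniform levels before extracting texture cues. The sketch below illustrates that idea under this assumption; the function name quantize_gray_levels and the uniform-binning scheme are illustrative choices, not the authors' implementation.

    import numpy as np

    def quantize_gray_levels(image, levels=8):
        """Quantize an 8-bit grayscale image into `levels` uniform bins."""
        # Bin edges spanning the full 0-255 intensity range.
        edges = np.linspace(0, 256, levels + 1)
        # Map each pixel to a bin index in 0..levels-1.
        return np.digitize(image, edges[1:-1]).astype(np.uint8)

    # Example: a random 4x4 grayscale patch reduced to 4 gray levels.
    patch = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
    print(quantize_gray_levels(patch, levels=4))

Collapsing 256 intensities into a handful of levels is the usual motivation for this step in texture analysis: it makes texture statistics (e.g., co-occurrence counts) denser and less sensitive to intensity noise.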

