4.3 Article

A Triplet-loss Dilated Residual Network for High-Resolution Representation Learning in Image Retrieval

Publisher

SPRINGER
DOI: 10.1007/s11265-023-01865-9

Keywords

Image retrieval; Localization; Dilated residual convolutional networks; Triplet-based neural networks


Content-based image retrieval is the process of retrieving a subset of images based on visual contents, and this paper presents a simple and efficient image retrieval system that offers acceptable accuracy. The proposed method uses a dilated residual convolutional neural network with triplet loss to extract high-resolution representations. To enhance robustness, candidate regions of interest are obtained from each feature map and Generalized-Mean pooling is applied. Experimental results on challenging datasets show high accuracy.
Content-based image retrieval is the process of retrieving a subset of images from an extensive image gallery based on visual contents, such as color, shape, spatial relations, and texture. In some applications, such as localization, image retrieval is employed as the initial step; in such cases, the accuracy of the top-retrieved images significantly affects the overall system accuracy. The current paper introduces a simple yet efficient image retrieval system with fewer trainable parameters, which offers acceptable accuracy in the top-retrieved images. The proposed method benefits from a dilated residual convolutional neural network with triplet loss. Experimental evaluations show that this model can extract richer information (i.e., high-resolution representations) by enlarging the receptive field, thus improving image retrieval accuracy without increasing the depth or complexity of the model. To enhance the robustness of the extracted representations, the current research obtains candidate regions of interest from each feature map and applies Generalized-Mean pooling to these regions. As the choice of triplets in a triplet-based network affects model training, we employ an online triplet mining method. We test the performance of the proposed method under various configurations on two challenging image-retrieval datasets, Revisited Paris6k (RPar) and UKBench. The experimental results show accuracies of 94.54 and 80.23 (mean precision at rank 10) in the RPar medium and hard modes, respectively, and 3.86 (recall at rank 4) on the UKBench dataset.
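The abstract does not come with code, but the building blocks it names (dilated residual convolutions, Generalized-Mean pooling, and a triplet loss with online mining) are standard enough to sketch. The snippet below is a minimal, illustrative PyTorch-style sketch under those assumptions; the names DilatedResidualBlock, gem_pool, triplet_step, and batch_hard_triplets are hypothetical and are not taken from the authors' implementation, which additionally pools over candidate regions of interest (omitted here).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedResidualBlock(nn.Module):
    """Hypothetical residual block whose 3x3 convolutions use dilation to
    enlarge the receptive field without downsampling the feature map."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # identity shortcut keeps spatial resolution

def gem_pool(x, p=3.0, eps=1e-6):
    """Generalized-Mean (GeM) pooling over a (B, C, H, W) feature map.
    p = 1 recovers average pooling; large p approaches max pooling."""
    return F.avg_pool2d(x.clamp(min=eps).pow(p),
                        (x.size(-2), x.size(-1))).pow(1.0 / p).flatten(1)

def triplet_step(backbone, anchor_img, positive_img, negative_img, margin=0.1):
    """Triplet margin loss on L2-normalized GeM descriptors."""
    emb = lambda img: F.normalize(gem_pool(backbone(img)), dim=1)
    return F.triplet_margin_loss(emb(anchor_img), emb(positive_img),
                                 emb(negative_img), margin=margin)

def batch_hard_triplets(embeddings, labels, margin=0.1):
    """Online (batch-hard) triplet mining: for each anchor in the batch, take
    the farthest positive and the closest negative, then apply the margin."""
    dist = torch.cdist(embeddings, embeddings)               # (B, B) pairwise L2
    same = labels.unsqueeze(0) == labels.unsqueeze(1)        # positive-pair mask
    pos = (dist * same.float()).max(dim=1).values            # hardest positive
    neg = dist.masked_fill(same, float('inf')).min(dim=1).values  # hardest negative
    return F.relu(pos - neg + margin).mean()
```

In this sketch, backbone is any convolutional feature extractor returning a (B, C, H, W) map (for example, a stack of DilatedResidualBlock layers on top of a standard stem), and batch_hard_triplets assumes each image in the batch carries an integer label identifying its landmark or object so that positives and negatives can be mined online within the batch.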


Reviews

Overall rating: 4.3 (insufficient ratings)

Secondary ratings
Novelty: -
Significance: -
Scientific rigor: -

Recommendations

No data available