Article

Multi-modal deep network for RGB-D segmentation of clothes

Journal

ELECTRONICS LETTERS
卷 56, 期 9, 页码 432-434

Publisher

WILEY
DOI: 10.1049/el.2019.4150

Keywords

image fusion; learning (artificial intelligence); image segmentation; image colour analysis; synthetic data; real-world data; multimodal deep network; RGB-D segmentation; clothes; deep learning; semantic segmentation; synthetic dataset; different clothing styles; semantic classes; data generation pipeline; depth images; ground-truth label maps; novel multimodal encoder-decoder convolutional network; depth modalities; multimodal features; trained fusion modules; multiscale atrous convolutions

Funding

  1. [BRGRD24]

Abstract

In this Letter, the authors propose a deep-learning-based method for semantic segmentation of clothes from RGB-D images of people. First, they present a synthetic dataset containing more than 50,000 RGB-D samples of characters in different clothing styles, featuring various poses and environments, for a total of nine semantic classes. The proposed data generation pipeline allows fast production of RGB images, depth images, and ground-truth label maps. Second, a novel multi-modal encoder-decoder convolutional network is proposed which operates on RGB and depth modalities. Multi-modal features are merged using trained fusion modules that apply multi-scale atrous convolutions in the fusion process. The method is numerically evaluated on synthetic data and visually assessed on real-world data. The experiments demonstrate the efficiency of the proposed model over existing methods.
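The fusion strategy described in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the element-wise merge, the fixed averaging kernel, and the function names (`atrous_conv2d`, `fuse`) are illustrative assumptions; only the core idea of combining RGB and depth features through convolutions at several dilation rates comes from the text.

```python
import numpy as np

def atrous_conv2d(x, kernel, rate):
    """Single-channel 2D atrous (dilated) convolution with 'same' padding.

    A dilation rate r inserts r-1 zeros between kernel taps, enlarging the
    receptive field without adding parameters.
    """
    kh, kw = kernel.shape
    # Effective kernel size after dilation.
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1
    ph, pw = eh // 2, ew // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            # Each tap reads the input shifted by i*rate, j*rate.
            out += kernel[i, j] * xp[i * rate:i * rate + x.shape[0],
                                     j * rate:j * rate + x.shape[1]]
    return out

def fuse(rgb_feat, depth_feat, rates=(1, 2, 4)):
    """Hypothetical fusion module: merge the two modalities element-wise,
    then average responses of atrous convolutions at multiple rates."""
    merged = rgb_feat + depth_feat
    k = np.full((3, 3), 1.0 / 9.0)  # fixed 3x3 averaging kernel, for illustration
    scales = [atrous_conv2d(merged, k, r) for r in rates]
    return np.mean(scales, axis=0)
```

In a trained network the merge and the per-rate kernels would be learned, and the multi-scale responses would typically be concatenated channel-wise rather than averaged; the sketch only shows how dilation rates let a fusion module mix local and wider context at a fixed parameter cost.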
