4.6 Article

Joint learning of visual and spatial features for edit propagation from a single image

Journal

VISUAL COMPUTER
Volume 36, Issue 3, Pages 469-482

Publisher

SPRINGER
DOI: 10.1007/s00371-019-01633-6

Keywords

Image editing; Edit propagation; Deep neural network; Fully connected conditional random field

Funding

  1. National Natural Science Foundation of P. R. China [61402053, 61602059, 61772087, 61802031]
  2. Scientific Research Fund of Education Department of Hunan Province [16C0046, 16A008]

Abstract

In this paper, we regard edit propagation as a multi-class classification problem and use a deep neural network (DNN) to solve it. We design a shallow, fully convolutional DNN that can be trained end-to-end. To achieve this, our method feeds the DNN combinations of low-level visual features, extracted from the input image, and spatial features, computed by transforming user interactions; the network thus performs a joint learning of visual and spatial features. We then train the DNN on many such combinations to build a DNN-based pixel-level classifier. Our DNN also supports patch-by-patch training and whole-image estimation, which speed up learning and inference. Finally, we improve the classification accuracy of the DNN by employing a fully connected conditional random field. Experimental results show that our method responds well to user interactions and generates more precise results than state-of-the-art edit propagation approaches. Furthermore, we demonstrate our method on various applications.
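The input construction the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: here the visual features are simply per-pixel RGB values, and the spatial features are normalized distances to each class of user scribble (one plausible way to "transform user interactions" into dense features). A shallow fully convolutional network would then classify each pixel from the concatenated feature map.

```python
import numpy as np

def spatial_features(strokes, shape):
    """For each stroke class, compute the per-pixel normalized Euclidean
    distance to the nearest scribbled pixel. This turns sparse user
    interactions into dense spatial feature channels (an assumed,
    simplified transform)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys, xs], axis=-1).reshape(-1, 2).astype(float)
    feats = []
    for pts in strokes:  # pts: (n, 2) array of scribbled (y, x) pixels
        d = np.sqrt(((coords[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
        d = d / d.max() if d.max() > 0 else d
        feats.append(d.reshape(h, w))
    return np.stack(feats, axis=-1)  # (h, w, num_classes)

def build_input(image, strokes):
    """Concatenate low-level visual features (here just normalized RGB)
    with the spatial features, yielding the (h, w, 3 + num_classes)
    tensor that a pixel-level classifier such as a shallow fully
    convolutional DNN would consume."""
    vis = image.astype(float) / 255.0
    spa = spatial_features(strokes, image.shape[:2])
    return np.concatenate([vis, spa], axis=-1)

# Usage: a 4x4 image with two stroke classes at opposite corners.
img = np.zeros((4, 4, 3), dtype=np.uint8)
strokes = [np.array([[0, 0]]), np.array([[3, 3]])]
x = build_input(img, strokes)  # shape (4, 4, 5)
```

Pixels lying on a scribble get a distance of 0 for that class, so the classifier receives a strong spatial cue exactly where the user edited; the fully connected CRF mentioned in the abstract would then refine the per-pixel predictions.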

