Proceedings Paper

Variational Networks: Connecting Variational Methods and Deep Learning

Journal

PATTERN RECOGNITION (GCPR 2017)
Volume 10496, Issue -, Pages 281-293

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-319-66709-6_23

Keywords

-

Funding

  1. Austrian Science Fund (FWF) under the START project BIVISION [Y729]
  2. European Research Council under the Horizon 2020 program, ERC starting grant HOMOVIS [640156]

Abstract

In this paper, we introduce variational networks (VNs) for image reconstruction. VNs are fully learned models based on the framework of incremental proximal gradient methods. They provide a natural transition between classical variational methods and state-of-the-art residual neural networks. Due to their incremental nature, VNs are very efficient, but only approximately minimize the underlying variational model. Surprisingly, our numerical experiments on image reconstruction problems show that giving up exact minimization leads to a consistent performance increase, particularly for convex models.
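The unrolled structure the abstract describes, in which each VN stage performs one incremental (proximal) gradient step on a variational model and is therefore itself a residual layer, can be sketched in a few lines. The sketch below is illustrative only: it assumes a quadratic denoising data term lam/2 * ||x - z||^2, a small bank of convolution filters K_i with a tanh nonlinearity standing in for the learned potential derivatives phi_i', and parameters shared across steps; none of these specific choices are taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def vn_step(x, z, filters, act_scales, lam):
    """One unrolled VN stage: a gradient step on a toy variational model.

    Computes x - (sum_i K_i^T phi_i'(K_i x) + lam * (x - z)), i.e. the
    regularizer gradient plus the gradient of the quadratic data term
    lam/2 * ||x - z||^2. Structurally this is a residual layer: output
    equals input minus a learned correction. phi_i' is a toy scaled tanh
    here; in a trained VN it would be a learned activation function.
    """
    grad = lam * (x - z)                                # data-term gradient
    for K, s in zip(filters, act_scales):
        r = convolve(x, K, mode="wrap")                 # K_i x (periodic boundary)
        grad += convolve(s * np.tanh(r),                # K_i^T phi_i'(K_i x);
                         K[::-1, ::-1], mode="wrap")    # adjoint = flipped kernel
    return x - grad

# Toy usage: a few unrolled stages on a random "noisy" image.
rng = np.random.default_rng(0)
z = rng.random((64, 64))                                # observed noisy image
filters = [0.05 * rng.standard_normal((5, 5)) for _ in range(8)]
act_scales = [0.1] * len(filters)

x = z.copy()
for _ in range(10):                                     # T unrolled stages
    x = vn_step(x, z, filters, act_scales, lam=1.0)
```

In a trained VN each stage would carry its own learned filters, activations, and step size, trained end to end; giving up exact minimization of the variational model then corresponds to running this fixed, small number of stages rather than iterating to convergence.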

