Article

Transfer learning in medical image segmentation: New insights from analysis of the dynamics of model parameters and learned representations

Journal

ARTIFICIAL INTELLIGENCE IN MEDICINE
Volume 116, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.artmed.2021.102078

Keywords

Medical image segmentation; Fully convolutional neural networks; Deep learning; Transfer learning

Funding

  1. National Institutes of Health (NIH) [R01 EB018988, R01 NS106030, R01 NS079788]


Abstract

We present a critical assessment of the role of transfer learning in training fully convolutional networks (FCNs) for medical image segmentation. We first show that although transfer learning reduces the training time on the target task, improvements in segmentation accuracy are highly task/data-dependent. Large improvements are observed only when the segmentation task is more challenging and the target training data is smaller. We shed light on these observations by investigating the impact of transfer learning on the evolution of model parameters and learned representations. We observe that convolutional filters change little during training and still look random at convergence. We further show that quite accurate FCNs can be built by freezing the encoder section of the network at random values and only training the decoder section. At least for medical image segmentation, this finding challenges the common belief that the encoder section needs to learn data/task-specific representations. We examine the evolution of FCN representations to gain a deeper insight into the effects of transfer learning on the training dynamics. Our analysis shows that although FCNs trained via transfer learning learn different representations than FCNs trained with random initialization, the variability among FCNs trained via transfer learning can be as high as that among FCNs trained with random initialization. Moreover, feature reuse is not restricted to the early encoder layers; rather, it can be more significant in deeper layers. These findings offer new insights and suggest alternative ways of training FCNs for medical image segmentation.
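One of the abstract's central findings is that accurate FCNs can be trained with the encoder frozen at its random initialization, updating only the decoder. A minimal PyTorch sketch of that training setup is below; the tiny encoder/decoder architecture, layer sizes, and optimizer settings are illustrative assumptions, not the paper's actual network.

```python
# Hedged sketch: freeze a randomly initialized encoder and train only
# the decoder of a small FCN-style segmentation network.
# The architecture here is a toy stand-in, not the paper's FCN.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyFCN()

# Freeze the encoder at its random initial values.
for p in model.encoder.parameters():
    p.requires_grad = False

# Optimize only the (trainable) decoder parameters.
opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One dummy training step on random data (2 images, 32x32, 2 classes).
x = torch.randn(2, 1, 32, 32)
y = torch.randint(0, 2, (2, 32, 32))
loss = nn.CrossEntropyLoss()(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```

After the backward pass, only decoder parameters receive gradients; the encoder stays exactly at its random initialization, which is the training regime the abstract reports as surprisingly effective for medical image segmentation.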

