Article

Deep active contours using locally controlled distance vector flow

Journal

SIGNAL IMAGE AND VIDEO PROCESSING
Volume 16, Issue 7, Pages 1773-1781

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s11760-022-02134-1

Keywords

Image segmentation; Active contours models; Convolutional neural networks; Distance transform; Capture range


The active contour model (ACM) is widely used in computer vision and image processing, and recent studies combine it with CNNs to address its limitations. This study proposes a fully automatic image segmentation method that resolves manual initialization, insufficient capture range, and convergence issues, achieving state-of-the-art results on several datasets.
The active contour model (ACM) has been used extensively in computer vision and image processing. In recent studies, convolutional neural networks (CNNs) have been combined with the ACM, replacing the user in the process of contour evolution and image segmentation, to eliminate limitations associated with the ACM's dependence on energy functional parameters and initialization. However, prior studies did not aim for automatic initialization, which this article addresses. Beyond requiring manual initialization, current methods are highly sensitive to the initial location and fail to delineate borders accurately. We propose a fully automatic image segmentation method that addresses the problems of manual initialization, insufficient capture range, and poor convergence to boundaries, in addition to the assignment of energy functional parameters. We train two CNNs: one generates the ACM weighting parameters, and the other generates a ground truth mask from which a distance transform (DT) and an initialization circle are extracted. The DT is used to form a vector field pointing from each pixel of the image towards the closest ground truth boundary point, with vector magnitudes equal to the Euclidean distance between each pixel and that boundary point. We evaluate our method on four publicly available datasets: two building instance segmentation datasets (Vaihingen and Bing huts) and two mammography image datasets (INBreast and DDSM-BCRP). Our approach achieves state-of-the-art results in mean Intersection over Union (mIoU), Dice similarity coefficient, and Boundary F-score (BoundF), with values of 92.33%, 92.44%, and 86.57% for the Vaihingen dataset and 87.12%, 86.86%, and 66.91% for the Bing huts dataset. We obtained Dice similarity coefficient values of 94.23% and 90.89% for INBreast and DDSM-BCRP, respectively.
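The distance-transform vector field described in the abstract lends itself to a short illustration. The sketch below is a minimal reconstruction using SciPy's distance_transform_edt, not the authors' implementation; the function name boundary_vector_field and the toy square mask are hypothetical.

```python
# Minimal sketch of a DT-based vector field: each pixel gets a vector pointing
# to the closest ground-truth boundary pixel, with magnitude equal to the
# Euclidean distance to that pixel. Illustrative only, not the paper's code.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def boundary_vector_field(gt_mask: np.ndarray):
    """Return (vectors, magnitudes) for a binary ground-truth mask.

    vectors has shape (2, H, W): the (row, col) displacement from each pixel
    to its nearest boundary pixel. magnitudes has shape (H, W).
    """
    gt_mask = gt_mask.astype(bool)
    # Boundary = mask pixels removed by a one-pixel erosion.
    boundary = gt_mask & ~binary_erosion(gt_mask)
    # EDT measures distance to the nearest zero pixel, so zero out the boundary.
    distances, indices = distance_transform_edt(~boundary, return_indices=True)
    # indices holds the (row, col) of the nearest boundary pixel for every pixel.
    grid = np.indices(gt_mask.shape)
    vectors = indices - grid  # displacement toward the nearest boundary point
    return vectors, distances

# Hypothetical example: a filled square; every pixel's vector points to the
# square's outline, and its magnitude is the distance to that outline.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
vecs, mags = boundary_vector_field(mask)
assert np.allclose(np.linalg.norm(vecs, axis=0), mags)
```

The final assertion checks the property stated in the abstract: the vector magnitudes coincide with the Euclidean distance to the closest boundary point.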
