Article

PixISegNet: pixel-level iris segmentation network using convolutional encoder-decoder with stacked hourglass bottleneck

Journal

IET BIOMETRICS
Volume 9, Issue 1, Pages 11-24

Publisher

WILEY
DOI: 10.1049/iet-bmt.2019.0025

Keywords

image segmentation; iris recognition; feature extraction; image coding; decoding; convolutional neural nets; entropy codes; image classification; image restoration; image matching; siamese matching network; salient iris features; pixel-level iris segmentation network; stacked hourglass bottleneck; nonregular reflections; deep convolutional neural network; stacked hourglass network; cross-entropy loss; pixel-to-pixel classification loss; deep convolutional NN; encoder-decoder; image segmentation performance; content loss optimisation; hyper-parameterisation; Iris-DenseNet framework; iris image data sets; iris ROI image segmentation algorithm; biometric segmentation research; multiscale-multiorientation training; PixISegNet; train-once-test-all strategy; TOTA strategy; CASIA V3.0 Interval data sets; IIT-D data sets; UBIRIS-V2 data sets


In this paper, the authors present a new iris ROI segmentation algorithm using a deep convolutional neural network (NN) to achieve state-of-the-art segmentation performance on well-known iris image data sets. The model surpasses the performance of the state-of-the-art Iris-DenseNet framework by applying several strategies, including multi-scale/multi-orientation training, training the model from scratch, and proper hyper-parameterisation of crucial parameters. The proposed PixISegNet consists of an autoencoder that primarily uses long and short skip connections, with a stacked hourglass network between the encoder and decoder. The stacked hourglass network repeatedly scales features down and up, which helps extract features at multiple scales and segment the iris robustly even under occlusion. The proposed model is optimised with a combination of cross-entropy loss and content loss. The content loss operates on high-level features, at a different level of abstraction, and thus complements the cross-entropy loss, which penalises pixel-to-pixel classification errors. Additionally, the robustness of the proposed network is checked by rotating images through various angles, changing the aspect ratio, blurring and changing the contrast. Experimental results on iris images with various characteristics demonstrate the superiority of the proposed method over the state-of-the-art iris segmentation methods considered in this study. To demonstrate network generalisation, the authors deploy a very stringent train-once-test-all (TOTA) strategy. The proposed method achieves $E_1$ scores of 0.00672, 0.00916 and 0.00117 on the UBIRIS-V2, IIT-D and CASIA V3.0 Interval data sets, respectively. Moreover, when such a deep convolutional segmentation NN is included in an end-to-end iris recognition system with a siamese-based matching network, it will augment the performance of the siamese network.
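
The combination of a pixel-to-pixel cross-entropy term with a feature-level content loss described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation, not the authors' code: the frozen VGG16 feature extractor, the layer cut-off and the weighting factor content_weight are illustrative assumptions that are not specified in the abstract.

# Minimal sketch of a combined cross-entropy + content loss.
# Assumptions (not from the paper): PyTorch, a frozen VGG16 feature
# extractor truncated after the third conv block, and content_weight.
import torch
import torch.nn as nn
import torchvision.models as models


class ContentLoss(nn.Module):
    """Compares high-level features of predicted and ground-truth masks."""

    def __init__(self):
        super().__init__()
        # Fixed feature extractor operating at a higher level of abstraction.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.extractor = vgg
        self.mse = nn.MSELoss()

    def forward(self, pred_mask, gt_mask):
        # Replicate single-channel masks to three channels for the VGG input.
        pred_feat = self.extractor(pred_mask.repeat(1, 3, 1, 1))
        gt_feat = self.extractor(gt_mask.repeat(1, 3, 1, 1))
        return self.mse(pred_feat, gt_feat)


class SegmentationLoss(nn.Module):
    """Pixel-to-pixel cross-entropy plus feature-level content loss."""

    def __init__(self, content_weight=0.1):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()   # per-pixel iris / non-iris classification
        self.content = ContentLoss()      # complements the pixel-wise term
        self.content_weight = content_weight

    def forward(self, logits, target):
        # logits: (N, 2, H, W) class scores; target: (N, H, W) integer labels.
        ce_loss = self.ce(logits, target)
        iris_prob = torch.softmax(logits, dim=1)[:, 1:2]          # (N, 1, H, W)
        content_loss = self.content(iris_prob, target.unsqueeze(1).float())
        return ce_loss + self.content_weight * content_loss

In training, such a combined loss would be applied to the decoder output of the encoder-decoder network; the relative weighting of the two terms is a tunable hyper-parameter.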

