Article

Deep learning-based correction of defocused fringe patterns for high-speed 3D measurement

Journal

ADVANCED ENGINEERING INFORMATICS
Volume 58

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.aei.2023.102221

Keywords

Fringe projection profilometry; Defocus fringe pattern correction; Multi-stage feature extraction; Global attention mechanism; Generative adversarial network; Transformer module


This paper presents a multi-stage generative adversarial network with a self-attention mechanism to correct defocus fringe patterns and transform them into more ideal sinusoidal fringe patterns, thereby improving the accuracy of high-speed 3D measurements.
Digital fringe projection profilometry often faces a trade-off between measurement accuracy and efficiency. Defocus technology is commonly employed to address this challenge and improve the efficiency of high-speed three-dimensional (3D) measurement. This technology projects 1-bit binary fringe patterns instead of traditional 8-bit sinusoidal patterns, but measuring 3D shapes with both high speed and high accuracy remains difficult due to defocus errors. These errors are introduced by the manual adjustment of the lens focal length and degrade both fringe pattern quality and measurement accuracy. To overcome this limitation, we propose a multi-stage generative adversarial network with a self-attention mechanism that corrects inaccurate fringe patterns and transforms them into more ideal sinusoidal fringe patterns. Our generation network comprises a multi-stage feature extraction network with a self-attention mechanism and an encoder-decoder network. The multi-stage network integrates residual and transformer modules to mine global feature information. The self-attention mechanism accurately locates the key areas requiring correction, and the encoder-decoder network generates rectified sinusoidal fringe patterns by combining the feature information with the attended areas. A discriminator network then judges whether the generator's output is realistic enough to pass for a genuine sinusoidal fringe pattern. In our experiments, we considered different fringe widths and measured objects of various types and colors. The results show that our proposed method improves the quality of defocus fringe patterns and the accuracy of subsequent 3D reconstruction compared to existing direct defocus methods.
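The defocusing idea underlying the abstract, that a blurred 1-bit binary fringe approximates an 8-bit sinusoidal fringe, can be illustrated with a common simplification: modeling projector defocus as Gaussian blur. This is a minimal sketch of that background concept, not the authors' correction network; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def binary_fringe(n_pixels, period):
    """1-bit square-wave fringe: 1 for the first half of each period, else 0."""
    x = np.arange(n_pixels)
    return ((x % period) < period / 2).astype(float)

def gaussian_defocus(pattern, sigma):
    """Approximate projector defocus as blur with a normalized Gaussian kernel."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Wrap-around padding keeps the periodic fringe seamless at the edges.
    padded = np.concatenate([pattern[-radius:], pattern, pattern[:radius]])
    return np.convolve(padded, kernel, mode="same")[radius:-radius]

period = 32
fringe = binary_fringe(256, period)           # 1-bit pattern actually projected
blurred = gaussian_defocus(fringe, sigma=6.0)  # what the defocused lens produces

# Ideal 8-bit-style sinusoid with the same period and phase for comparison:
# the square wave is high on [0, period/2), so its fundamental peaks at 7.5.
x = np.arange(256)
ideal = 0.5 + 0.5 * np.cos(2 * np.pi * (x - 7.5) / period)
```

The blurred pattern is much closer to the ideal sinusoid than the raw binary pattern, but a residual error remains (the Gaussian attenuates the fundamental and leaves weak harmonics), which is the kind of defocus error the paper's network is trained to correct.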

