Journal
IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING
Volume 8, Pages 838-850
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCI.2022.3209936
Keywords
Low-count image reconstruction; non-convex optimization; Poisson phase retrieval
Funding
- USPHS [HG006139, GM53275]
- NSF [IIS 1838179]
- NIH [R01 EB022075]
Abstract
This paper proposes novel phase retrieval algorithms for maximum likelihood (ML) estimation from measurements following independent Poisson distributions in very low-count regimes, e.g., 0.25 photons per pixel. Specifically, we propose a modified Wirtinger flow (WF) algorithm using a step size based on the observed Fisher information. This approach eliminates all parameter tuning except the number of iterations. We also propose a novel curvature for majorize-minimize (MM) algorithms with a quadratic majorizer. We show theoretically that our proposed curvature is sharper than the curvature derived from the supremum of the second derivative of the Poisson ML cost function. We compare the proposed algorithms (WF, MM) with existing optimization methods, including WF using other step-size schemes, quasi-Newton methods, and alternating direction method of multipliers (ADMM) algorithms, under a variety of experimental settings. Simulation experiments with a random Gaussian matrix, a canonical discrete Fourier transform (DFT) matrix, a masked DFT matrix, and an empirical transmission matrix demonstrate the following. 1) As expected, algorithms based on the Poisson ML model consistently produce higher-quality reconstructions than algorithms derived from Gaussian noise ML models when applied to low-count data. 2) In unregularized cases, our proposed WF algorithm with a Fisher information step size converges faster than other WF variants, e.g., WF with an empirical step size, backtracking line search, or the optimal step size for the Gaussian noise model; it also converges faster than the quasi-Newton method. 3) In regularized cases, our proposed WF algorithm converges faster than WF with backtracking line search, quasi-Newton, MM, and ADMM.
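The modified Wirtinger flow described in the abstract can be illustrated with a minimal NumPy sketch. The Poisson negative log-likelihood, its Wirtinger gradient, and a Fisher-information-weighted quadratic-model step size follow the abstract's description, but the specific curvature formula, the background term `b`, and the step-halving safeguard are assumptions made for this sketch, not the paper's exact method.

```python
import numpy as np

# Illustrative Wirtinger-flow-style gradient descent for Poisson ML phase retrieval:
#   y_i ~ Poisson(|a_i^H x|^2 + b),  minimize f(x) = sum_i (v_i - y_i * log(v_i)),
# where v_i = |a_i^H x|^2 + b. The step size comes from a quadratic model whose
# curvature uses a Fisher-information-style weight 1/v_i (an assumed form).

rng = np.random.default_rng(0)
m, n = 200, 16
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = 0.1  # assumed known background count rate
y = rng.poisson(np.abs(A @ x_true) ** 2 + b).astype(float)

def nll(x):
    """Poisson negative log-likelihood (dropping the constant log(y!) term)."""
    v = np.abs(A @ x) ** 2 + b
    return float(np.sum(v - y * np.log(v)))

def grad(x):
    """Wirtinger gradient of the Poisson negative log-likelihood."""
    Ax = A @ x
    v = np.abs(Ax) ** 2 + b
    return A.conj().T @ ((1.0 - y / v) * Ax)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # random initialization
costs = [nll(x)]
for _ in range(100):
    g = grad(x)
    Ax, d = A @ x, A @ g
    v = np.abs(Ax) ** 2 + b
    dv = -2.0 * np.real(np.conj(Ax) * d)   # d/dmu of v_i along -g, at mu = 0
    curv = np.sum(dv ** 2 / v)             # Fisher-weighted curvature (assumed form)
    if curv <= 0:
        break
    mu = 2.0 * np.linalg.norm(g) ** 2 / curv  # quadratic-model step, no tuning
    # Step-halving safeguard (not part of the paper's method) to keep this
    # sketch monotone even far from a stationary point.
    while mu > 1e-12 and nll(x - mu * g) > costs[-1]:
        mu /= 2.0
    x = x - mu * g
    costs.append(nll(x))
```

Because the step size is computed in closed form from the current iterate, the only remaining user choice is the iteration count, which matches the abstract's claim that the Fisher-information step eliminates parameter tuning.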