Article

Visualizing the dynamic change of Ocular Response Analyzer waveform using Variational Autoencoder in association with the peripapillary retinal arteries angle

Journal

SCIENTIFIC REPORTS
Volume 10, Issue 1, Pages -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41598-020-63601-8

Funding

  1. Japan Science and Technology Agency (JST)-CREST [17K11418]
  2. Ministry of Education, Culture, Sports, Science and Technology of Japan

Abstract

The aim of the current study was to identify possible new Ocular Response Analyzer (ORA) waveform parameters related to changes of retinal structure/deformation, as measured by the peripapillary retinal arteries angle (PRAA), using a generative deep-learning method, the variational autoencoder (VAE). Fifty-four eyes of 52 subjects were enrolled. The PRAA was calculated from fundus photographs and was used to train a VAE model. By analyzing the ORA waveform reconstructed (noise filtered) with the VAE, a novel ORA waveform parameter (Monot1-2) was introduced, representing the change in monotonicity between the first and second applanation peaks of the waveform. The variables most closely related to the PRAA were identified from a set of 41 variables, including age, axial length (AL), keratometry, ORA corneal hysteresis, ORA corneal resistance factor, 35 well-established ORA waveform parameters, and Monot1-2, using a model selection method based on the second-order bias-corrected Akaike information criterion. The optimal model for the PRAA comprised AL and six ORA waveform parameters, including Monot1-2, and was significantly better than the corresponding model without Monot1-2 (p=0.0031, ANOVA). The study suggests the value of a generative deep-learning approach for discovering new parameters that may have clinical relevance.
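For readers unfamiliar with the reconstruction step, the sketch below shows a minimal variational autoencoder for denoising 1D waveforms, written in PyTorch. It is not the authors' implementation: the waveform length, layer sizes, and latent dimension are illustrative assumptions, and only the general encode-reparameterise-decode structure reflects the technique named in the abstract. For reference, the second-order bias-corrected criterion used for model selection is the standard AICc = AIC + 2k(k+1)/(n - k - 1), where k is the number of fitted parameters and n the sample size.

import torch
import torch.nn as nn
import torch.nn.functional as F

class WaveformVAE(nn.Module):
    """Minimal VAE for 1D waveforms (illustrative sizes, not the paper's model)."""
    def __init__(self, n_samples=200, latent_dim=8):
        super().__init__()
        # Encoder maps a waveform to the mean and log-variance of a latent Gaussian.
        self.encoder = nn.Sequential(nn.Linear(n_samples, 64), nn.ReLU())
        self.fc_mu = nn.Linear(64, latent_dim)
        self.fc_logvar = nn.Linear(64, latent_dim)
        # Decoder maps a latent sample back to a reconstructed (denoised) waveform.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_samples)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterisation trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

# Example usage with placeholder data: a batch of 16 waveforms of 200 samples each.
model = WaveformVAE()
waveforms = torch.randn(16, 200)
reconstructed, mu, logvar = model(waveforms)
loss = vae_loss(reconstructed, waveforms, mu, logvar)

In such a setup, passing an observed waveform through the trained model and keeping the decoder output yields the noise-filtered reconstruction, from which a derived shape parameter such as Monot1-2 could then be computed.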
