4.7 Article

Visual interpretation of [18F]Florbetaben PET supported by deep learning-based estimation of amyloid burden

Publisher

SPRINGER
DOI: 10.1007/s00259-020-05044-x

Keywords

Alzheimer's disease; Amyloid PET; [F-18]Florbetaben; PET; Visual quantification; Deep learning

Funding

  1. National Research Foundation of Korea - Korea Government [NRF-2019K1A3A1A14065446, NRF-2019R1F1A1061412]
  2. National Research Foundation of Korea [2019K1A3A1A14065446] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

This study analyzed the impact of a deep learning-based one-step amyloid burden estimation system on inter-reader agreement and reading confidence in routine clinical amyloid PET interpretation. The results showed that the deep learning system improved inter-reader agreement and increased confidence in visual interpretation.

Purpose: Amyloid PET, which is widely used for noninvasive assessment of cortical amyloid burden, is visually interpreted in the clinical setting. We analyzed whether a fast, easy-to-use visual interpretation support system based on deep learning end-to-end estimation of amyloid burden improves inter-reader agreement and the confidence of visual reading.

Methods: A total of 121 clinical routine [F-18]Florbetaben PET images were collected for a randomized blind-reader study. The amyloid PET images were visually interpreted by three experts independently, each blinded to other information. At the first reading session, the readers interpreted the images qualitatively without quantification. After an interval of more than 2 weeks, the readers interpreted the images again with the quantification results provided by the deep learning system. The qualitative assessment was based on a 3-point BAPL score (1: no amyloid load, 2: minor amyloid load, 3: significant amyloid load). The confidence of each reading was rated on a 3-point score (0: ambiguous, 1: probable, 2: definite).

Results: Inter-reader agreement for visual reading on the 3-point BAPL scale, calculated as Fleiss' kappa, was 0.46 without and 0.76 with the deep learning system. The confidence score of visual reading also improved when the deep learning output was provided (1.27 +/- 0.078 for the visual-reading-only session vs. 1.66 +/- 0.63 for the session with the deep learning system).

Conclusion: Our results highlight the impact of a deep learning-based one-step amyloid burden estimation system on inter-reader agreement and reading confidence when applied to clinical routine amyloid PET reading.
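
The agreement statistic reported above is Fleiss' kappa for multiple readers. As a minimal illustrative sketch (not the authors' code, and using hypothetical ratings rather than the study data), the following Python computes Fleiss' kappa for three readers scoring scans on the 3-point BAPL scale:

```python
# Minimal sketch of Fleiss' kappa for inter-reader agreement on a 3-point scale.
# The example ratings are hypothetical and do not come from the paper.
import numpy as np

def fleiss_kappa(ratings: np.ndarray, n_categories: int = 3) -> float:
    """ratings: (n_subjects, n_raters) array of category labels 1..n_categories."""
    n_subjects, n_raters = ratings.shape
    # counts[i, j] = number of raters assigning subject i to category j+1
    counts = np.stack(
        [(ratings == c).sum(axis=1) for c in range(1, n_categories + 1)], axis=1
    )
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)          # category proportions
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()                # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical BAPL scores (1/2/3) from three readers for five scans
example = np.array([
    [1, 1, 1],
    [3, 3, 3],
    [2, 1, 2],
    [3, 2, 3],
    [1, 1, 2],
])
print(f"Fleiss' kappa: {fleiss_kappa(example):.2f}")
```

A kappa of 1 indicates perfect agreement and values near 0 indicate agreement no better than chance; on the commonly used Landis and Koch benchmarks, the study's 0.46 and 0.76 correspond to moderate and substantial agreement, respectively.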

