Journal
NATURE MEDICINE
Volume 28, Issue 1, Pages 154-+
Publisher
NATURE PORTFOLIO
DOI: 10.1038/s41591-021-01620-2
Keywords
-
Funding
- Dutch Cancer Society [KUN 2015-7970]
- Netherlands Organization for Scientific Research [016.186.152]
- Swedish Research Council [2019-01466, 2020-00692]
- Swedish Cancer Society (CAN) [2018/741]
- Swedish eScience Research Center
- Ake Wiberg Foundation
- Prostatacancerforbundet
- Academy of Finland [341967, 335976]
- Cancer Foundation Finland
- Google LLC
- MICCAI board challenge working group
- Verily Life Sciences
- EIT Health
- Karolinska Institutet
- MICCAI 2020 satellite event team
- ERAPerMed [334782]
- Vinnova [2020-00692]
The PANDA challenge is the largest histopathology competition to date, aiming to catalyze the development of reproducible AI algorithms for Gleason grading in prostate cancer. The submitted algorithms achieved pathologist-level performance and their diversity and generalization were validated through cross-continental cohorts.
Through a community-driven competition, the PANDA challenge provides a curated, diverse dataset and a catalog of models for prostate cancer pathology, and represents a blueprint for evaluating AI algorithms in digital pathology.

Artificial intelligence (AI) has shown promise for diagnosing prostate cancer in biopsies. However, results have been limited to individual studies, lacking validation in multinational settings. Competitions have been shown to accelerate medical imaging innovation, but their impact is hindered by a lack of reproducibility and independent validation. With this in mind, we organized the PANDA challenge, the largest histopathology competition to date, joined by 1,290 developers, to catalyze the development of reproducible AI algorithms for Gleason grading using 10,616 digitized prostate biopsies. We validated that a diverse set of submitted algorithms reached pathologist-level performance on independent cross-continental cohorts, fully blinded to the algorithm developers. On United States and European external validation sets, the algorithms achieved agreements of 0.862 (quadratically weighted kappa; 95% confidence interval (CI), 0.840-0.884) and 0.868 (95% CI, 0.835-0.900) with expert uropathologists. Successful generalization across different patient populations, laboratories and reference standards, achieved by a variety of algorithmic approaches, warrants evaluating AI-based Gleason grading in prospective clinical trials.
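The agreement figures above are quadratically weighted Cohen's kappa values, which penalize disagreements between two graders by the squared distance between ordinal labels. A minimal sketch of this metric (an illustration, not the authors' evaluation code; class labels and the toy data are assumptions) could look like:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratically weighted Cohen's kappa between two raters.

    Suited to ordinal labels such as ISUP grade groups, because a
    disagreement of two grades is penalized four times as heavily
    as a disagreement of one grade.
    """
    # Observed confusion matrix between the two raters
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Expected matrix under chance agreement (outer product of marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    # Quadratic penalty: squared grade distance, scaled to [0, 1]
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Toy example: six biopsies, grades 0-5, two near-miss disagreements
truth = [0, 1, 2, 3, 4, 5]
preds = [0, 1, 2, 3, 5, 4]
print(round(quadratic_weighted_kappa(truth, preds, 6), 3))  # → 0.943
```

A kappa of 1.0 indicates perfect agreement and 0.0 agreement no better than chance, so the reported values of 0.862 and 0.868 sit in the range typically observed between expert uropathologists themselves.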