4.7 Article: Data Paper

Crowdsourced MRI quality metrics and expert quality annotations for training of humans and machines

Journal

Scientific Data
Volume 6

Publisher

Nature Publishing Group
DOI: 10.1038/s41597-019-0035-4

Funding

  1. Laura and John Arnold Foundation
  2. NIH [NBIB R01EB020740]
  3. NIMH [R24MH114705, R24MH117179, ZICMH002884, ZICMH002960]
  4. NINDS [U01NS103780]

The neuroimaging community is steering towards increasingly large sample sizes, which are highly heterogeneous because they can only be acquired by multi-site consortia. The visual assessment of every imaging scan is a necessary quality control step, yet arduous and time-consuming. A sizeable body of evidence shows that images of low quality are a source of variability that may be comparable to the effect size under study. We present the MRIQC Web-API, an open crowdsourced database that collects image quality metrics extracted from MR images and corresponding manual assessments by experts. The database is rapidly growing, and currently contains over 100,000 records of image quality metrics of functional and anatomical MRIs of the human brain, and over 200 expert ratings. The resource is designed for researchers to share image quality metrics and annotations that can readily be reused in training human experts and machine learning algorithms. The ultimate goal of the database is to allow the development of fully automated quality control tools that outperform expert ratings in identifying subpar images.
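As a hedged illustration of the reuse the abstract describes, the sketch below joins image quality metrics (IQMs) with expert accept/reject annotations to assemble a labelled set for training a quality classifier. The record fields (`md5sum`, `snr`, `fwhm_avg`), the sample values, and the join key are illustrative assumptions, not the database's actual schema.

```python
# Sketch: pairing crowdsourced IQM records with expert ratings to
# build a labelled training set. Field names and values are
# illustrative assumptions, not the MRIQC Web-API schema.

# Hypothetical IQM records, as JSON-decoded dicts
records = [
    {"md5sum": "a1", "snr": 14.2, "fwhm_avg": 2.9},
    {"md5sum": "b2", "snr": 6.1,  "fwhm_avg": 4.4},
    {"md5sum": "c3", "snr": 11.8, "fwhm_avg": 3.1},
]

# Hypothetical expert annotations keyed by image checksum:
# 1 = accept, 0 = reject (subpar image)
ratings = {"a1": 1, "b2": 0, "c3": 1}

def labelled_training_set(records, ratings):
    """Join IQM records with expert labels; skip unrated images."""
    X, y = [], []
    for rec in records:
        label = ratings.get(rec["md5sum"])
        if label is None:
            continue  # no expert annotation for this image
        X.append([rec["snr"], rec["fwhm_avg"]])
        y.append(label)
    return X, y

X, y = labelled_training_set(records, ratings)
```

The resulting feature matrix `X` and label vector `y` could then feed any off-the-shelf classifier; the join-by-checksum step reflects the paper's design of keeping automated metrics and manual assessments in separate but linkable records.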
