Article

Objective comparison of particle tracking methods

Journal

NATURE METHODS
Volume 11, Issue 3, Pages 281-289

Publisher

NATURE RESEARCH
DOI: 10.1038/nmeth.2808

Funding

  1. Dutch Technology Foundation (STW) [10443]
  2. Agence Nationale de la Recherche (FranceBioImaging) [ANR-10-INBS-04-06, ANR-10-LABX-62-IBEID]
  3. Programme C'Nano Region IDF (France)
  4. US National Institutes of Health (NIH) [MH064070, MH071739]
  5. Spanish Ministry of Economy and Competitiveness [DPI2012-38090-C03-02]
  6. Czech Ministry of Education [1.07/2.3.00/30.0009]
  7. Swiss National Science Foundation (SNF) [CRSII3-132396/1]
  8. Swiss Federal Commission for Technology and Innovation (CTI) [9325.2-PFLS-LS]
  9. US National Institute of Neurological Disorders and Stroke [R01NS076709]
  10. German Federal Ministry of Education and Research (FORSYS project ViroQuant)
  11. European Commission (FP7 Project SysPatho)
  12. Fundamental Research Funds for the Central Universities
  13. Excellent Young Faculty Award (Zijin Plan) at Zhejiang University
  14. Ministry of Education (MOE) Key Laboratory of Biomedical Engineering
  15. Swedish Research Council (VR) [621-2011-5884]
  16. Inria
  17. PICT-IBiSA Institut Curie-CNRS
  18. French Life Imaging Microscopy Network (GDR) [2588-CNRS]
  19. European Commission (FP7 ICT Project MEMI)
  20. Dutch Technology Foundation (STW VICI) [10379]

Abstract

Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Because manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to notable practical conclusions for users and developers.
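The methods compared in the competition differ mainly in how they detect particles in each frame and then link those detections over time. As a purely illustrative sketch, and not any of the participating methods, a minimal greedy nearest-neighbour linker (the simplest baseline form of the linking step, with a hypothetical gating radius `max_dist`) might look like:

```python
import math

def link_frames(prev, curr, max_dist=5.0):
    """Greedily link detections in `prev` to detections in `curr`.

    Each detection is an (x, y) coordinate tuple. Candidate links are
    considered in order of increasing Euclidean distance, and a link is
    accepted only if both detections are still unmatched and the
    distance is within the gating radius `max_dist`.
    Returns a list of (prev_index, curr_index) pairs.
    """
    # Enumerate all candidate links, sorted by distance (greedy matching).
    candidates = sorted(
        (math.dist(p, c), i, j)
        for i, p in enumerate(prev)
        for j, c in enumerate(curr)
    )
    pairs = []
    linked_prev, linked_curr = set(), set()
    for d, i, j in candidates:
        if d > max_dist:
            break  # remaining candidates are farther still
        if i in linked_prev or j in linked_curr:
            continue  # one endpoint is already linked
        pairs.append((i, j))
        linked_prev.add(i)
        linked_curr.add(j)
    return pairs
```

Real trackers in the comparison go well beyond this baseline, for example by modelling particle motion, handling appearance and disappearance, and resolving ambiguous links globally rather than greedily, which is where the performance differences between scenarios arise.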
