Article

DropTrack: Automatic droplet tracking with YOLOv5 and DeepSORT for microfluidic applications

Journal

PHYSICS OF FLUIDS
Volume 34, Issue 8

Publisher

AIP Publishing
DOI: 10.1063/5.0097597

Funding

  1. European Research Council [FP/2014-2020]
  2. ERC Grant [739964]
  3. National Science Center within Sonata Bis program [2019/34/E/ST8/00411]
  4. PMW program of the Minister of Science and Higher Education in the years 2020-2024 [5005/H2020-MSCA-COFUND/2019/2]
  5. European Union [847413]

Deep neural networks are rapidly emerging as data analysis tools, often outperforming the conventional techniques used in complex microfluidic systems. One fundamental analysis frequently desired in microfluidic experiments is counting and tracking the droplets. Specifically, droplet tracking in dense emulsions is challenging due to inherently small droplets moving in tightly packed configurations. Sometimes, the individual droplets in these dense clusters are hard to resolve, even for a human observer. Here, two cutting-edge deep learning-based algorithms for object detection [you only look once (YOLO)] and object tracking (DeepSORT) are combined into a single image analysis tool, DropTrack, to track droplets in microfluidic experiments. DropTrack analyzes input microfluidic experimental videos, extracts droplets' trajectories, and infers other observables of interest, such as droplet numbers. Training an object detector network for droplet recognition with manually annotated images is a labor-intensive task and a persistent bottleneck. In this work, this problem is partly resolved by training many object detector networks (YOLOv5) with several hybrid datasets containing real and synthetic images. We present an analysis of a double emulsion experiment as a case study to measure DropTrack's performance. For our test case, the YOLO network trained on a combination of 40% real images and 60% synthetic images yields the best accuracy in droplet detection and droplet counting in real experimental videos. This strategy also reduces labor-intensive image annotation work by 60%. DropTrack's performance is measured in terms of the mean average precision of droplet detection, the mean squared error in counting the droplets, and the image analysis speed for inferring droplets' trajectories. The fastest configuration of DropTrack can detect and track droplets at approximately 30 frames per second, well within the standards for real-time image analysis.
Published under an exclusive license by AIP Publishing.
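The pipeline described in the abstract is tracking-by-detection: YOLOv5 proposes bounding boxes in each frame, and DeepSORT associates those boxes across frames into per-droplet trajectories. As a minimal illustration of the association idea only (not the authors' implementation — DeepSORT additionally uses Kalman-filter motion prediction and appearance embeddings; all function names below are hypothetical), the per-frame linking step can be caricatured by a greedy IoU matcher:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_match(tracks, detections, iou_min=0.3):
    """Assign each new detection to the existing track whose last box
    overlaps it most (above iou_min); unmatched detections start new
    tracks. tracks: {track_id: box}. Returns {track_id: box}."""
    assigned = {}
    free = dict(tracks)                       # tracks still available this frame
    next_id = max(tracks) + 1 if tracks else 0
    for det in detections:
        best_id, best_iou = None, iou_min
        for tid, box in free.items():
            score = iou(box, det)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:                   # no overlap: a new droplet appeared
            best_id, next_id = next_id, next_id + 1
        else:                                 # matched: consume this track
            free.pop(best_id)
        assigned[best_id] = det
    return assigned
```

Running this frame by frame and appending each `assigned` box to a per-ID history yields the droplet trajectories; DeepSORT's motion and appearance cues make the real association far more robust in dense, tightly packed emulsions, where pure IoU overlap is ambiguous.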
