Proceedings Paper

TURNIP: TIME-SERIES U-NET WITH RECURRENCE FOR NIR IMAGING PPG

Publisher

IEEE
DOI: 10.1109/ICIP42928.2021.9506663

Keywords

Human monitoring; vital signs; remote PPG; imaging PPG; deep learning


Summary

Imaging photoplethysmography (iPPG) estimates a person's pulse waveform by processing video of their face. For situations with insufficient visible-spectrum illumination, the paper proposes a modular framework with a novel time-series U-net architecture for heartbeat signal estimation. The proposed method outperforms existing models on a challenging dataset of monochromatic NIR videos taken under different conditions.

Abstract

Imaging photoplethysmography (iPPG) is the process of estimating the waveform of a person's pulse by processing a video of their face to detect minute color or intensity changes in the skin. Typically, iPPG methods use three-channel RGB video to address challenges due to motion. In situations such as driving, however, illumination in the visible spectrum is often quickly varying (e.g., daytime driving through shadows of trees and buildings) or insufficient (e.g., night driving). In such cases, a practical alternative is to use active illumination and bandpass filtering with a monochromatic near-infrared (NIR) light source and camera. Unlike learning-based iPPG solutions designed for multi-channel RGB, previous work in single-channel NIR iPPG has been based on hand-crafted models (with only a few manually tuned parameters) that exploit the sparsity of the PPG signal in the frequency domain. In contrast, we propose a modular framework for iPPG estimation of the heartbeat signal, in which the first module extracts a time-series signal from monochromatic NIR face video. The second module consists of a novel time-series U-net architecture in which a GRU (gated recurrent unit) network has been added to the passthrough layers. We test our approach on the challenging MR-NIRP Car Dataset, which consists of monochromatic NIR videos taken in both stationary and driving conditions. Our model's iPPG estimation performance on NIR video outperforms both the state-of-the-art model-based method and a recent end-to-end deep learning method that we adapted to monochromatic video.
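The abstract's key architectural idea is a GRU applied along the time axis of the U-net's passthrough (skip) connections, so that encoder features are temporally smoothed before reaching the decoder. The sketch below illustrates that idea only; it is a hypothetical NumPy illustration, not the authors' implementation, and all names, dimensions, and initialization choices are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (illustrative sketch, not the paper's code)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_dim)
        shape = (hidden_dim, input_dim + hidden_dim)
        # One weight matrix each for the update (z), reset (r),
        # and candidate (h~) computations, acting on [x; h].
        self.Wz = rng.uniform(-scale, scale, shape)
        self.Wr = rng.uniform(-scale, scale, shape)
        self.Wh = rng.uniform(-scale, scale, shape)
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                # update gate
        r = sigmoid(self.Wr @ xh)                # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * h_tilde       # interpolate old/new state

def gru_passthrough(features, cell):
    """Run a GRU over a U-net skip connection's time-series features.

    features: array of shape (T, C) -- one feature vector per time step
              from an encoder level of the time-series U-net.
    Returns the recurrently filtered (T, H) features that would be
    concatenated into the corresponding decoder level.
    """
    h = np.zeros(cell.hidden_dim)
    outputs = []
    for t in range(features.shape[0]):
        h = cell.step(features[t], h)
        outputs.append(h)
    return np.stack(outputs)
```

In this sketch, replacing an identity skip connection with `gru_passthrough` gives the decoder features that carry temporal context, which is the stated motivation for adding recurrence to the passthrough layers.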

