Proceedings Paper

TURNIP: TIME-SERIES U-NET WITH RECURRENCE FOR NIR IMAGING PPG

Publisher

IEEE
DOI: 10.1109/ICIP42928.2021.9506663

Keywords

Human monitoring; vital signs; remote PPG; imaging PPG; deep learning

Summary

Imaging photoplethysmography (iPPG) estimates a person's pulse waveform by processing a video of their face. For situations with insufficient visible-spectrum illumination, the paper proposes a modular framework built around a novel time-series U-net architecture for heartbeat signal estimation. The proposed method outperforms existing models on challenging datasets of monochromatic NIR videos taken in different conditions.
Abstract

Imaging photoplethysmography (iPPG) is the process of estimating the waveform of a person's pulse by processing a video of their face to detect minute color or intensity changes in the skin. Typically, iPPG methods use three-channel RGB video to address challenges due to motion. In situations such as driving, however, illumination in the visible spectrum is often quickly varying (e.g., daytime driving through shadows of trees and buildings) or insufficient (e.g., night driving). In such cases, a practical alternative is to use active illumination and bandpass filtering from a monochromatic near-infrared (NIR) light source and camera. Contrary to learning-based iPPG solutions designed for multi-channel RGB, previous work in single-channel NIR iPPG has been based on hand-crafted models (with only a few manually tuned parameters), exploiting the sparsity of the PPG signal in the frequency domain. In contrast, we propose a modular framework for iPPG estimation of the heartbeat signal, in which the first module extracts a time-series signal from monochromatic NIR face video. The second module consists of a novel time-series U-net architecture in which a GRU (gated recurrent unit) network has been added to the passthrough layers. We test our approach on the challenging MR-NIRP Car Dataset, which consists of monochromatic NIR videos taken in both stationary and driving conditions. Our model's iPPG estimation performance on NIR video outperforms both the state-of-the-art model-based method and a recent end-to-end deep learning method that we adapted to monochromatic video.
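The abstract's key architectural idea, a 1-D (time-series) U-net whose skip ("passthrough") connections pass through a GRU, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the layer counts, channel widths, and kernel sizes are assumptions, and only the GRU-on-the-skip-path structure is taken from the abstract.

```python
# Hypothetical sketch of a TURNIP-style time-series U-net with a GRU
# on the skip (passthrough) path. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TimeSeriesUNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(1, ch, 7, padding=3), nn.ReLU())
        self.down = nn.MaxPool1d(2)
        self.enc2 = nn.Sequential(nn.Conv1d(ch, 2 * ch, 7, padding=3), nn.ReLU())
        # Recurrence added to the passthrough layer: the skip features
        # are run through a GRU before being concatenated in the decoder.
        self.skip_gru = nn.GRU(ch, ch, batch_first=True)
        self.up = nn.Upsample(scale_factor=2, mode="linear", align_corners=False)
        self.dec1 = nn.Sequential(nn.Conv1d(3 * ch, ch, 7, padding=3), nn.ReLU())
        self.out = nn.Conv1d(ch, 1, 1)

    def forward(self, x):                  # x: (batch, 1, T), T even
        s1 = self.enc1(x)                  # (B, ch, T) encoder features
        b = self.enc2(self.down(s1))       # (B, 2ch, T/2) bottleneck
        g, _ = self.skip_gru(s1.transpose(1, 2))  # GRU over time on skip path
        u = self.up(b)                     # upsample back to length T
        y = self.dec1(torch.cat([u, g.transpose(1, 2)], dim=1))
        return self.out(y)                 # estimated PPG waveform, (B, 1, T)

wave = TimeSeriesUNet()(torch.randn(2, 1, 128))
print(tuple(wave.shape))  # → (2, 1, 128)
```

The input time series here would come from the framework's first module (signal extraction from the NIR face video); the network maps it to a waveform of the same length, which is the usual fully-convolutional U-net pattern adapted to 1-D signals.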
