Article

American Society of Biomechanics Early Career Achievement Award 2020: Toward portable and modular biomechanics labs: How video and IMU fusion will change gait analysis

Journal

JOURNAL OF BIOMECHANICS
Volume 129, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.jbiomech.2021.110650

Keywords

Markerless motion tracking; Computer vision; Inertial measurement units; Wearables; Deep Learning

Abstract

The field of biomechanics is at a turning point, with marker-based motion capture set to be replaced by portable and inexpensive hardware, rapidly improving markerless tracking algorithms, and open datasets that will turn these new technologies into field-wide team projects. Despite progress, several challenges inhibit both inertial and vision-based motion tracking from reaching the high accuracies that many biomechanics applications require. Their complementary strengths, however, could be harnessed toward better solutions than those offered by either modality alone. The drift from inertial measurement units (IMUs) could be corrected by video data, while occlusions in videos could be corrected by inertial data. To expedite progress in this direction, we have collected the CMU Panoptic Dataset 2.0, which contains 86 subjects captured with 140 VGA cameras, 31 HD cameras, and 15 IMUs, performing on average 6.5 min of activities, including range of motion activities and tasks of daily living. To estimate ground-truth kinematics, we imposed simultaneous consistency with the video and IMU data. Three-dimensional joint centers were first computed by geometrically triangulating proposals from a convolutional neural network applied to each video independently. A statistical meshed model parametrized in terms of body shape and pose was then fit through a top-down optimization approach that enforced consistency with both the video-based joint centers and IMU data. As proof of concept, we used this dataset to benchmark pose estimation from a sparse set of sensors, showing that incorporation of complementary modalities is a promising frontier that can be further strengthened through physics-informed frameworks.
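The geometric triangulation step mentioned in the abstract can be illustrated with a minimal sketch, assuming calibrated cameras and per-view 2D joint-center proposals from a convolutional network; this is not the authors' implementation, and the names triangulate_joint, points_2d, and proj_mats are illustrative. The sketch shows the standard direct linear transform and omits the confidence weighting and outlier handling a full pipeline would likely need.

```python
import numpy as np

def triangulate_joint(points_2d, proj_mats):
    """Triangulate one 3D joint center from 2D detections in N calibrated views.

    points_2d : (N, 2) array of pixel coordinates, one detection per camera.
    proj_mats : (N, 3, 4) array of camera projection matrices P = K [R | t].
    Returns the 3D joint center (3,) in world coordinates.
    """
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        # Each view adds two linear constraints on the homogeneous 3D point X:
        #   u * (P[2] @ X) = P[0] @ X   and   v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)           # (2N, 4) homogeneous system A @ X = 0
    _, _, vt = np.linalg.svd(A)  # least-squares solution is the right singular
    X_h = vt[-1]                 # vector with the smallest singular value
    return X_h[:3] / X_h[3]      # dehomogenize to world coordinates
```

In the pipeline the abstract describes, joint centers triangulated this way then serve as targets for the top-down fit of the statistical body model, whose objective additionally enforces consistency between the model's segment orientations and the IMU data.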

