Article

Learning-Based Visual-Strain Fusion for Eye-in-Hand Continuum Robot Pose Estimation and Control

Journal

IEEE TRANSACTIONS ON ROBOTICS
Volume 39, Issue 3, Pages 2448-2467

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TRO.2023.3240556

Keywords

Robot sensing systems; Robots; Sensors; Cameras; Pose estimation; Robot vision systems; Robot kinematics; Camera pose estimation; fiber Bragg grating (FBG); hybrid control; online learning; visual-strain fusion


Abstract

Image processing has significantly extended the practical value of the eye-in-hand camera, enabling and promoting its applications for quantitative measurement. However, fully vision-based pose estimation methods sometimes encounter difficulties in handling cases with deficient features. In this article, we fuse visual information with the sparse strain data collected from a single-core fiber inscribed with fiber Bragg gratings (FBGs) to facilitate continuum robot pose estimation. An improved extreme learning machine algorithm with selective training data updates is implemented to establish and refine the FBG-empowered (F-emp) pose estimator online. The integration of F-emp pose estimation can improve sensing robustness by reducing the number of times that visual tracking is lost given moving visual obstacles and varying lighting. In particular, this integration solves pose estimation failures under full occlusion of the tracked features or complete darkness. Utilizing the fused pose feedback, a hybrid controller incorporating kinematics and data-driven algorithms is proposed to accomplish fast convergence with high accuracy. The online-learning error compensator can improve the target tracking performance with a 52.3%-90.1% error reduction compared with constant-curvature model-based control, without requiring fine model-parameter tuning and prior data acquisition.
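The abstract's F-emp estimator is refined online with an improved extreme learning machine (ELM). As a rough illustration only (not the authors' implementation, and omitting their selective training-data updates and FBG-specific features), the sketch below shows the standard online-sequential ELM recursion that such an approach typically builds on: random, fixed hidden-layer weights, with output weights updated by recursive least squares as new samples arrive. All class and variable names here are assumptions for illustration.

```python
import numpy as np

class OnlineELM:
    """Minimal online-sequential ELM sketch (OS-ELM style).

    Hidden-layer weights are random and fixed; only the output
    weights `beta` are updated recursively as new data arrive.
    """

    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.beta = np.zeros((n_hidden, n_outputs))
        self.P = None  # running inverse of the hidden-activation Gram matrix

    def _hidden(self, X):
        # Random nonlinear feature map (tanh activation)
        return np.tanh(X @ self.W + self.b)

    def fit_initial(self, X, Y):
        # Batch least-squares solution on the initial data chunk
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ Y

    def update(self, X, Y):
        # Recursive least-squares update for a new data chunk
        H = self._hidden(X)
        PHt = self.P @ H.T
        K = PHt @ np.linalg.inv(np.eye(H.shape[0]) + H @ PHt)
        self.P = self.P - K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (Y - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

In the paper's setting, the inputs would be features such as FBG strain readings and the outputs the robot pose; the recursion lets the estimator be refined during operation without retraining from scratch.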
