Article

Robotic Cross-Platform Sensor Fusion and Augmented Visualization for Large Indoor Space Reality Capture

Journal

Journal of Computing in Civil Engineering

Publisher

ASCE (American Society of Civil Engineers)
DOI: 10.1061/(ASCE)CP.1943-5487.0001047

Keywords

Quadrupedal robot; Reality capture; Simultaneous localization and mapping (SLAM); Augmented reality

Funding

  1. National Science Foundation (NSF) [2033592]
  2. National Institute of Standards and Technology (NIST) [70NANB21H045]
  3. Innovation and Technology Ecosystems
  4. Directorate for Technology, Innovation and Partnerships, National Science Foundation [2033592]

Abstract

Advances in sensors, robotics, and artificial intelligence have enabled methods such as simultaneous localization and mapping (SLAM), semantic segmentation, and point cloud registration to assist the reality capture process. Completely investigating an unknown indoor space, obtaining both a general spatial comprehension and a detailed scene reconstruction for a digital twin model, requires deeper insight into the characteristics of different ranging sensors, as well as corresponding techniques to combine data from distinct systems. This paper discusses the necessity and workflow of utilizing two distinct types of scanning sensors, a depth camera and a light detection and ranging (LiDAR) sensor, paired with a quadrupedal ground robot to obtain spatial data of a large, complex indoor space. A digital twin model was built in real time with two SLAM methods and then consolidated using the geometric feature extraction method of fast point feature histograms (FPFH) and fast global registration. Finally, the reconstructed scene was streamed to a HoloLens 2 headset to create the illusion of seeing through walls. Results showed that both the depth camera and the LiDAR could handle large-space reality capture with the required coverage and fidelity, including textural information. The proposed workflow and analytical pipeline therefore provide a hierarchical data fusion strategy that integrates the advantages of distinct sensing methods to carry out a complete indoor investigation, and validate the feasibility of robot-assisted reality capture in larger spaces.
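The core of the consolidation step described above is rigid registration: finding the rotation and translation that align one sensor's point cloud with another's. The paper's pipeline uses FPFH features with fast global registration (typically run through a library such as Open3D) to find correspondences automatically; as a minimal, self-contained sketch of the underlying alignment problem, the code below instead applies the classical Kabsch algorithm, which solves the rigid alignment in closed form when one-to-one correspondences are already known. This is a simplified stand-in, not the authors' implementation.

```python
import numpy as np

def kabsch_align(source, target):
    """Closed-form rigid alignment (Kabsch algorithm) between two
    (N, 3) point sets with known one-to-one correspondences.
    Returns rotation R and translation t so that R @ source[i] + t
    approximately equals target[i]."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the two centered point sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: recover a known 30-degree rotation plus a shift
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(0)
src = rng.random((50, 3))                 # stand-in "depth camera" cloud
tgt = src @ R_true.T + t_true             # same cloud in the "LiDAR" frame

R_est, t_est = kabsch_align(src, tgt)
print(np.allclose(R_est, R_true, atol=1e-8))
print(np.allclose(t_est, t_true, atol=1e-8))
```

In a real cross-sensor pipeline the correspondences are unknown, which is exactly why feature descriptors such as FPFH are computed first; the closed-form step above is what the global registration ultimately optimizes toward.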

Authors

