Article

Automatic robot Manoeuvres detection using computer vision and deep learning techniques: a perspective of internet of robotics things (IoRT)

Journal

MULTIMEDIA TOOLS AND APPLICATIONS
Volume 82, Issue 15, Pages 23251-23276

Publisher

SPRINGER
DOI: 10.1007/s11042-022-14253-5

Keywords

Geographical features; Visual features; Hybrid features; LSTM; Object segmentation; Visual sensor data; Video sequence

This study demonstrates the construction and deployment of a revolutionary framework using computer vision and deep learning to minimize obstacles in real-time Internet of Things (IoT)-enabled robotics applications. By focusing on sensor-captured streams/images and geographical information, the framework enables the Internet of Robotic Things (IoRT) to evolve. It combines efficient computer vision techniques with a deep learning classifier to anticipate and regulate robot motions, providing higher accuracy and shorter prediction time. The proposed model exhibits improved efficiency and robustness compared to state-of-the-art approaches, with approximately 5% higher overall accuracy and approximately 84% lower computational complexity.
To minimize impediments in real-time Internet of Things (IoT)-enabled robotics applications, this study demonstrates how to build and deploy a revolutionary framework using computer vision and deep learning. In contrast to robotic path-planning algorithms based on geolocation, we focus on sensor-captured streams/images and geographical information to enable the Internet of Robotic Things (IoRT) to evolve. The application collects real-time data from moving robots in various situations and at various intervals and uses it for research purposes. The data, collected as videos/images, are delivered to the robotics application through visual sensor nodes. Anticipating a moving robot's manoeuvres automatically and early can aid in issuing commands to monitor and regulate the robot's future activities before they occur. To do so, we propose a framework that combines efficient computer vision techniques with a deep learning classifier. The computer vision methods are designed for frame quality improvement, object segmentation, and feature estimation. The Long Short-Term Memory (LSTM) classifier detects robot motions automatically from the initial sequential features. The proposed model uses an LSTM classifier mainly to perform early prediction from the initial sequential features of partial video frames and to overcome the problems of exploding and vanishing gradients. The LSTM reduces prediction time while maintaining higher accuracy. It also enables the central system of a robotic application to prevent collisions caused by impediments in indoor or outdoor environments. Simulation results on publicly available research datasets demonstrate the proposed model's efficiency and robustness compared to state-of-the-art approaches: overall accuracy improved by approximately 5%, and computational complexity was reduced by approximately 84%.
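The abstract describes a two-stage pipeline: per-frame computer vision processing (quality improvement, object segmentation, feature estimation) followed by an LSTM that classifies a manoeuvre from only the initial frames of a sequence. The paper's own code is not reproduced here; the sketch below is a minimal Python/PyTorch illustration of that idea, assuming a toy grid-based frame descriptor in place of the paper's segmentation and feature-estimation stage. The class names, dimensions, and number of manoeuvre classes are illustrative assumptions, not the authors' implementation.

import numpy as np
import torch
import torch.nn as nn

def frame_features(frame: np.ndarray, grid: int = 8) -> np.ndarray:
    """Toy per-frame descriptor: mean intensity over a grid x grid tiling.
    Stands in (as an assumption) for the paper's segmentation/feature-estimation stage."""
    h, w = frame.shape[:2]
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame
    tiles = [
        gray[i * h // grid:(i + 1) * h // grid, j * w // grid:(j + 1) * w // grid].mean()
        for i in range(grid) for j in range(grid)
    ]
    return np.asarray(tiles, dtype=np.float32)

class ManoeuvreLSTM(nn.Module):
    """LSTM classifier over sequential frame features; predicts the manoeuvre
    class from a partial (early) prefix of a video sequence."""
    def __init__(self, feature_dim: int = 64, hidden_dim: int = 128, num_classes: int = 6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=feature_dim, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(seq)          # out: (batch, time, hidden_dim)
        return self.head(out[:, -1, :])  # classify from the last observed time step

# Illustrative usage: early prediction from the first few frames only.
frames = [np.random.rand(120, 160, 3).astype(np.float32) for _ in range(10)]
prefix = torch.from_numpy(np.stack([frame_features(f) for f in frames[:4]])).unsqueeze(0)
model = ManoeuvreLSTM()
logits = model(prefix)                   # shape: (1, num_classes)
print(logits.argmax(dim=1))

Feeding only a short prefix of the sequence to the classifier is what would let a central controller issue monitoring or collision-avoidance commands before the manoeuvre completes, which is the early-prediction behaviour the abstract attributes to the LSTM stage.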
