Article

Fused features mining for depth-based hand gesture recognition to classify blind human communication

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 28, Issue 11, Pages 3285-3294

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-016-2244-5

Keywords

Hand gesture recognition; Depth data; DCT; Moment invariant; Fused features mining


Gesture recognition and hand pose tracking are widely applicable techniques in human-computer interaction. Depth data obtained by depth cameras provide an informative description of the body, and in particular of the hand pose, that can be used to build more accurate gesture recognition systems. Hand detection and feature extraction are challenging tasks in RGB images, but they can be resolved in simple ways with depth data. Depth data can also be combined with color information for more reliable recognition. A typical hand gesture recognition system identifies the hand and its position or direction, extracts useful features, and applies a suitable machine-learning method to detect the performed gesture. This paper presents a novel fusion of enhanced features for the classification of static signs of sign language. It begins by explaining how the hand can be separated from the scene using depth data. Then, a combined feature extraction method is introduced to extract appropriate features from the images. Finally, an artificial neural network classifier is trained on these fused features and used to critically analyze the performance of various descriptors.
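The pipeline the abstract outlines (depth-based hand segmentation, fused DCT and moment-invariant features, then a neural-network classifier) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the depth thresholds, the 8×8 low-frequency DCT block, and the use of only the first four Hu moment invariants are all assumptions made for the sketch.

```python
import numpy as np

def segment_hand(depth, near=200.0, far=600.0):
    """Binary mask of pixels inside an assumed hand depth range (mm)."""
    return ((depth > near) & (depth < far)).astype(float)

def dct2(img):
    """Orthonormal 2-D DCT-II of a square image, via a 1-D DCT matrix."""
    n = img.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C @ img @ C.T

def hu_invariants(mask):
    """First four Hu moment invariants of a binary mask
    (translation/scale-invariant shape descriptors)."""
    h, w = mask.shape
    y, x = np.mgrid[:h, :w].astype(float)
    m00 = mask.sum()
    xc, yc = (x * mask).sum() / m00, (y * mask).sum() / m00

    def eta(p, q):  # normalized central moment
        mu = ((x - xc) ** p * (y - yc) ** q * mask).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
    ])

def fused_features(depth, dct_block=8):
    """Fuse low-frequency DCT coefficients with Hu invariants
    into one feature vector for a classifier."""
    mask = segment_hand(depth)
    coeffs = dct2(mask)[:dct_block, :dct_block].ravel()
    return np.concatenate([coeffs, hu_invariants(mask)])

if __name__ == "__main__":
    # Synthetic 64x64 depth frame: circular "hand" at 400 mm
    # against a 1000 mm background.
    yy, xx = np.mgrid[:64, :64]
    depth = np.full((64, 64), 1000.0)
    depth[(xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2] = 400.0
    feats = fused_features(depth)
    print(feats.shape)  # 64 DCT coefficients + 4 Hu invariants
```

In a full system, one such vector per training image would then be fed to an artificial neural network classifier (e.g. a small multilayer perceptron); the network architecture is not specified in the abstract, so none is shown here.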

