4.1 Article

Vision-Based Continuous Sign Language Spotting Using Gaussian Hidden Markov Model

Journal

IEEE Sensors Letters
Volume 6, Issue 7

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/LSENS.2022.3185181

Keywords

Hidden Markov models; Assistive technologies; Feature extraction; Videos; Gesture recognition; Face recognition; Shape; Sensor applications; continuous sign language; hidden Markov model (HMM); movement epenthesis; sign spotting


A vision-based continuous SL spotting system is proposed in this study to separate meaningful signs from sign sequences using a Gaussian HMM and the Viterbi algorithm, achieving a spotting rate of about 83%.
The presence of movement epenthesis (me) between two consecutive signs in continuous sign language makes sign language recognition (SLR) a challenging task. In this letter, we propose a vision-based continuous SL spotting system, which separates the meaningful signs from the sign sequences by removing the me components from H.264/AVC compressed videos. The work is based on a two-state hidden Markov model (HMM) with Gaussian emission probability. The HMM is trained on a feature set extracted from the entire sign-sequence video, and the hidden state sequence is then decoded using the Viterbi algorithm. Sign spotting is performed on the decoded state sequence. The feature set comprises features extracted from both compressed-domain and uncompressed-domain analysis of the sign video. The video database is composed of American SL videos collected from the Boston University database. Experimental results show that the proposed system can spot the sign and me frames with a spotting rate of about 83%.
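
The abstract's pipeline (train a two-state Gaussian-emission HMM on per-frame features, Viterbi-decode the state sequence, then group frames into sign vs. me segments) can be illustrated with a minimal sketch. This is not the authors' code: it uses the hmmlearn library as a stand-in for their HMM, assumes the compressed- and uncompressed-domain feature extraction has already produced one feature vector per frame, and all names and data below are illustrative.

```python
# Minimal sketch of the spotting idea: a two-state Gaussian HMM whose decoded
# state sequence separates "sign" frames from "movement epenthesis" (me) frames.
# hmmlearn is used as a stand-in; per-frame feature extraction from the
# H.264/AVC compressed and uncompressed domains is assumed to be done elsewhere.
import numpy as np
from hmmlearn import hmm

def spot_signs(frame_features, lengths):
    """frame_features: (n_frames_total, n_features) array stacked over videos.
    lengths: number of frames in each training video."""
    # Two hidden states (sign vs. me), diagonal-covariance Gaussian emissions.
    model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(frame_features, lengths)      # Baum-Welch training
    states = model.predict(frame_features)  # Viterbi-decoded state sequence
    return model, states

def segment_runs(states):
    """Group per-frame state labels into contiguous (start, end, state) runs
    so that spotted sign segments can be separated from me segments."""
    segments, start = [], 0
    for i in range(1, len(states) + 1):
        if i == len(states) or states[i] != states[start]:
            segments.append((start, i - 1, int(states[start])))
            start = i
    return segments

if __name__ == "__main__":
    # Synthetic stand-in for the per-frame features of one sign-sequence video.
    rng = np.random.default_rng(0)
    feats = np.vstack([rng.normal(0.0, 1.0, (40, 8)),   # me-like frames
                       rng.normal(3.0, 1.0, (60, 8)),   # sign-like frames
                       rng.normal(0.0, 1.0, (30, 8))])
    model, states = spot_signs(feats, [len(feats)])
    print(segment_runs(states))
```

Which of the two decoded states corresponds to "sign" and which to "me" is not fixed by the model itself; in practice it would be assigned after training, for example from labeled frames or from the state occupancy statistics.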

