Integration of Gesture and Posture Modalities for Interpreting Meaningful Expressions
Sign language recognition from hand motion or hand posture is an active area of gesture recognition research for Human Computer Interaction (HCI). Hand gesture recognition has many applications, such as sign language recognition, communication in video conferences, using a finger as a pointer for selecting menu options, and giving children an easy way to interact with a computer. Over the last few years, many methods for hand gesture recognition have been proposed. These methods differ from one another in their underlying models: Neural Networks, Syntactical Analysis, and Hidden Markov Models (HMMs). Since HMMs are widely used in handwriting and speech recognition, we develop a method that recognizes alphabet letters from a single hand motion using Hidden Markov Models.

The gesture recognition for alphabet letters is based on three main stages: preprocessing, feature extraction, and classification. In the preprocessing stage, colour and depth information are used, in combination with morphological operations, to detect both hands and the face. After the hand has been detected, it is tracked in a further step to determine its motion trajectory, the so-called gesture path.
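As an illustration of this stage, the following sketch combines a skin-colour mask in YCrCb space with a depth threshold and morphological opening/closing to segment the hand, then takes the centroid of the largest blob as one point of the gesture path. The colour bounds, depth limit, and OpenCV-based implementation are assumptions for illustration, not the exact operators of our system, and face handling is omitted.

import cv2
import numpy as np

# Illustrative thresholds; real values must be tuned to the camera and scene.
SKIN_LOWER = np.array([0, 133, 77], dtype=np.uint8)     # YCrCb lower bound
SKIN_UPPER = np.array([255, 173, 127], dtype=np.uint8)  # YCrCb upper bound
MAX_DEPTH_MM = 900  # assumes the gesturing hand is closer than 0.9 m

def hand_centroid(bgr_frame, depth_mm):
    """Return the (x, y) centre of gravity of the detected hand, or None."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, SKIN_LOWER, SKIN_UPPER)
    near = np.where(depth_mm < MAX_DEPTH_MM, 255, 0).astype(np.uint8)
    mask = cv2.bitwise_and(skin, near)
    # Morphological opening removes speckle noise; closing fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)  # largest blob taken as the hand
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

Collecting the centroid frame by frame yields the gesture path that is passed on to feature extraction.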
The second stage, feature extraction, smooths the gesture path, which gives us a pure path, and determines the orientation between the centre of gravity and each point of that pure path. The orientations are then quantized to give a discrete vector that is used as input to the HMM.
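The sketch below illustrates this step under two assumptions not fixed by the text above: a simple normalized moving average as the smoothing operator, and 18 quantization levels (20° bins) as the codebook size.

import numpy as np

def orientation_codewords(path, n_bins=18, win=3):
    """Map a gesture path (list of (x, y) points) to discrete HMM symbols.
    win and n_bins are illustrative placeholders."""
    pts = np.asarray(path, dtype=float)                  # shape (T, 2)
    # Normalized moving average yields the "pure" path without edge bias.
    kernel = np.ones(win)
    norm = np.convolve(np.ones(len(pts)), kernel, mode="same")
    pure = np.column_stack(
        [np.convolve(pts[:, i], kernel, mode="same") / norm for i in range(2)])
    cx, cy = pure.mean(axis=0)                           # centre of gravity
    # Orientation of each point relative to the centroid, mapped to [0, 2*pi).
    angles = np.mod(np.arctan2(pure[:, 1] - cy, pure[:, 0] - cx), 2 * np.pi)
    # Quantize into n_bins discrete codewords, 0 .. n_bins-1.
    return np.floor(angles / (2 * np.pi / n_bins)).astype(int) % n_bins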
In the final stage, the alphabet gestures are recognized using a Left-Right Banded model (LRB) in conjunction with the Baum-Welch algorithm (BW) for training the HMM parameters. The best state path is then obtained with the Viterbi algorithm over the gesture database.
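A compact log-space Viterbi scorer is sketched below, assuming one trained LRB HMM per letter (parameters obtained beforehand with Baum-Welch) and discrete orientation codewords as observations; all names are illustrative.

import numpy as np

def viterbi_score(obs, log_pi, log_A, log_B):
    """Log-probability of the best state path of a discrete observation
    sequence under one HMM (pi: initial, A: transitions, B: emissions).
    In an LRB topology log_A is banded: entries off the band are -inf."""
    delta = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[:, o]
    return delta.max()

def classify(obs, models):
    """Return the letter whose HMM scores the observation sequence best.
    `models` maps each letter 'A'..'Z' to its (log_pi, log_A, log_B)."""
    return max(models, key=lambda letter: viterbi_score(obs, *models[letter]))

The banded transition structure restricts each state to itself and its immediate successors, which suits the strictly ordered progression of a drawing motion.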
In our experiment, 520 gestures are used for training and 260 gestures for testing. Our method recognizes the letters A to Z and achieves an average recognition rate of 92.3%.
(Contact: Prof. Al-Hamadi)