Abstract:
Deaf and mute individuals typically use gestures to convey their ideas to others.
However, it is often difficult for a hearing person to understand this
gesture language. The basic objective of this project is to develop a computer-based
system that recognizes the 26 gestures of the American Sign Language
(ASL) alphabet using MATLAB, enabling deaf and mute individuals
to communicate with other people using their natural hand gestures. The
proposed system in this thesis is composed of four modules: preprocessing
and hand segmentation, feature extraction, sign recognition, and
sign-to-text and text-to-voice conversion. Segmentation is performed by converting
the image to the Hue-Saturation-Value (HSV) color space and applying the Color
Thresholder app. Blob features are extracted with a Bag-of-Features approach
based on the Speeded-Up Robust Features (SURF) algorithm. The K-Nearest Neighbor
(KNN) and Support Vector Machine (SVM) algorithms are then used for gesture
recognition. Finally, each recognized gesture is converted into voice.
To make the system more user-friendly, a Graphical User Interface (GUI) was
designed. To train and test the proposed model, a dataset of ASL hand gestures
was collected with an ordinary phone camera from both male and female volunteers
of different ages and skin colors, in varied backgrounds and postures.
The implemented model achieves an average accuracy
of 89% on the evaluation set and 84.6% on the test set.