Abstract:
Sign language is a primary means of communication among deaf people, but most hearing people do not understand it, which creates a large communication gap between the two groups. Various studies on sign language recognition (SLR) have addressed this problem; most of them rely on accessories such as colored gloves and accelerometers for data acquisition, or on systems that require a complex environmental setup to operate. The proposed work recognizes Arabic sign language through an integrated system that translates it into Arabic text using the Kinect sensor. In this thesis, the Kinect sensor acquires gestures from the signer as real-time video. Kinect provides 3D depth information that is used to overlay points and sticks on the skeleton corresponding to the signer's main joints, using the Randomized Decision Forest and Mean Shift algorithms. The movement of the skeleton joints is then tracked, and features are extracted from each skeleton frame using geometric calculations and the Scale-Invariant Feature Transform (SIFT) algorithm. These features are matched against a predefined set of gestures. If the current skeleton frame matches a predefined gesture pattern, the word corresponding to that gesture is displayed as text; this classification stage is performed by the Dynamic Time Warping (DTW) algorithm.
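The DTW-based matching described above can be illustrated with a minimal sketch. This is a generic textbook DTW over 1-D sequences, not the thesis's actual implementation: the real system would compare multi-dimensional skeleton-joint feature vectors per frame, and the `classify` helper and its template dictionary are hypothetical names introduced here for illustration.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping cost between two 1-D sequences.

    cost[i][j] holds the minimal cumulative cost of aligning the
    first i elements of seq_a with the first j elements of seq_b.
    """
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])  # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame in seq_a
                                 cost[i][j - 1],      # skip a frame in seq_b
                                 cost[i - 1][j - 1])  # align both frames
    return cost[n][m]

def classify(query, templates):
    """Return the label of the template gesture closest to the query under DTW.

    `templates` maps gesture labels to reference feature sequences
    (a hypothetical stand-in for the predefined gesture set).
    """
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Because DTW warps the time axis, the same gesture signed faster or slower still aligns with its template, which is why it suits frame-by-frame gesture classification.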
The system is able to recognize the 6 Arabic words that make up the dataset, and to compose sentences from those words, with acceptable accuracy and precision.