SUST Repository

Arabic Sign Language Recognition Using Kinect Camera

Show simple item record

dc.contributor.author Osman, Safaa Osman Abuzaid
dc.contributor.author Hamad, Fatima Muzamil Mustafa
dc.contributor.author Elamein, Mona Omer Elkhider
dc.contributor.author Supervisor - Mohammed Yagoub
dc.date.accessioned 2017-12-21T07:27:51Z
dc.date.available 2017-12-21T07:27:51Z
dc.date.issued 2017-10-01
dc.identifier.citation Osman, Safaa Osman Abuzaid. Arabic Sign Language Recognition Using Kinect Camera / Safaa Osman Abuzaid Osman, Fatima Muzamil Mustafa Hamad, Mona Omer Elkhider Elamein; Mohammed Yagoub. - Khartoum: Sudan University of Science and Technology, College of Engineering, 2017. - 95p.: ill; 28cm. - Bachelor's research en_US
dc.identifier.uri http://repository.sustech.edu/handle/123456789/19501
dc.description Bachelor's research en_US
dc.description.abstract Sign language is a primary means of communication among deaf people, yet most hearing people do not understand it, which creates a large communication gap between the two groups. Various studies on sign language recognition (SLR) have addressed this problem, but most rely on accessories such as colored gloves and accelerometers for data acquisition, or on systems that require a complex environmental setup. The proposed work designs an integrated system that translates Arabic sign language into Arabic text using the Kinect sensor. In this thesis, the Kinect sensor acquires gestures from the signer as real-time video. The Kinect provides 3D depth information that is used to locate the signer's main skeleton joints, rendered as points and sticks, by means of the Randomized Decision Forest and Mean Shift algorithms. The movement of the skeleton joints is then tracked, and features are extracted from each skeleton frame using geometric calculations and the Scale-Invariant Feature Transform (SIFT) algorithm. These features are matched against a set of predefined gestures. If the current skeleton sequence matches a predefined gesture pattern, the word corresponding to that gesture is displayed as text; this classification stage is performed by the Dynamic Time Warping (DTW) algorithm. The system recognizes the 6 Arabic words that make up the dataset, and composes sentences from those words, with acceptable accuracy and precision. en_US
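The classification stage described in the abstract matches a tracked joint sequence against predefined gesture templates by Dynamic Time Warping. A minimal sketch of that idea, assuming 1-D feature sequences and a hypothetical template dictionary (the thesis's actual feature vectors and gesture set are not reproduced here):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW cost between two 1-D sequences: fill a cumulative-cost
    matrix where each cell extends the cheapest of the three predecessor
    alignments (insertion, deletion, match)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """Return the label of the gesture template closest to the query
    under DTW (templates: hypothetical dict label -> sequence)."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Because DTW warps the time axis, a gesture signed slightly faster or slower than its stored template can still align with low cost, which is why it suits sequence matching of tracked joints.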
dc.description.sponsorship Sudan University of Science and Technology en_US
dc.language.iso en en_US
dc.publisher Sudan University of Science and Technology en_US
dc.subject Kinect Camera en_US
dc.subject Arabic Sign Language Recognition en_US
dc.subject Sign Language en_US
dc.title Arabic Sign Language Recognition Using Kinect Camera en_US
dc.type Thesis en_US

