Sign Language

Sign language is a method of communication for deaf-dumb people. This paper presents a Sign Language Recognition system, implemented in MATLAB, capable of recognizing 26 gestures of the Indian Sign Language. The proposed system has four modules: pre-processing and hand segmentation, feature extraction, sign recognition, and sign-to-text and -voice conversion. Segmentation is done using image processing. Features such as eigenvalues and eigenvectors are extracted and used in recognition. The Principal Component Analysis (PCA) algorithm is used for gesture recognition, and the recognized gesture is converted into text and voice format. The proposed system helps to minimize the communication barrier between deaf-dumb people and normal people. Sign language is an important method of communication for deaf-dumb persons. Since sign language is a well-structured code of gestures, each gesture has a meaning assigned to it. In the last several years there has been increasing interest among researchers in the field of sign language recognition, with the aim of moving interaction from human-human to human-computer. Deaf and dumb people rely on sign language interpreters for communication. However, finding experienced and qualified interpreters for their day-to-day affairs throughout life is very difficult, and also unaffordable [1].

The proposed system is able to accurately recognize single-handed gestures of bare human hands using a webcam interfaced with MATLAB. The aim of this project is to recognize the gestures with the highest accuracy and in the least possible time, and to translate the alphabets of the Indian Sign Language into corresponding text and voice in a vision-based setup.
In the vision-based approach, the architecture of the system is usually divided into two main parts. The first part is feature extraction: it extracts important features from the image using computer vision or image processing methods such as background subtraction, hand tracking, and hierarchical feature characterization (shape, orientation and location). The second part is the recognizer: from the features already extracted and characterized, the recognizer should learn the pattern from training data and recognize testing data correctly [2].
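To illustrate the first part, background subtraction against a stored background frame can be sketched in a few lines of NumPy. This is an illustrative Python sketch rather than the paper's MATLAB code; the function name and the threshold value are our own choices:

```python
import numpy as np

def subtract_background(frame, background, threshold=25):
    """Return a binary foreground mask: pixels that differ from the
    stored background by more than `threshold` grey levels."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: a 4x4 grey background with a bright 2x2 "hand" entering
# one corner of the frame.
background = np.full((4, 4), 50, dtype=np.uint8)
frame = background.copy()
frame[:2, :2] = 200                 # object appears in the scene
mask = subtract_background(frame, background)
```

The cast to a signed type before subtracting avoids the wrap-around that unsigned 8-bit subtraction would produce.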
Al-Ahdal and Tahir [1] presented a novel method for designing an SLR system based on EMG sensors with a data glove. The method uses electromyography signals recorded from hand muscles to allocate word boundaries in streams of words for continuous SLR.
Iwan Njoto Sandjaja and Nelson Marcos [2] proposed a color-glove approach which extracts important features from the video using a multi-color tracking algorithm.
Ibraheem and Khan [4] have reviewed various techniques for gesture recognition and recent gesture recognition approaches.
Ghotkar et al. [5] used the CAMShift method and the Hue, Saturation, Value (HSV) color model for hand tracking and segmentation; a Genetic Algorithm is used for gesture recognition.
Paulraj M P et al. [7] developed a simple sign language recognition system using skin color segmentation and an Artificial Neural Network.
The block diagram of the proposed system is shown in Fig. 1. The system takes input hand gestures through a web camera against a uniform white background. In the proposed method, 26 combinations of Indian signs, made with the right hand, are stored in the training database. Pre-processing is done on these captured input gestures. Then segmentation of the hand is carried out to separate the object from the background. The segmented hand image is represented using certain features. These features are used for gesture recognition with the PCA algorithm, which gives optimized results. The final result is converted into corresponding text and voice form.
The sign recognition procedure includes five major steps: a) data acquisition, b) pre-processing and segmentation, c) feature extraction, d) sign recognition, and e) sign-to-text and voice conversion.

Data Acquisition:

To achieve high accuracy for sign recognition, 260 images are used: 10 for each of the 26 signs. These 260 images form the training and testing databases. The images are captured at a resolution of 380 x 420 pixels. The runtime images for the test phase are captured using a web camera. The images are captured against a white background so as to avoid illumination effects, at a distance of typically 1.5 to 2 ft between camera and signer. The distance is adjusted by the signer to get the required image clarity.

Image Preprocessing and Segmentation:

Preprocessing consists of image acquisition, segmentation, and morphological filtering. Segmentation of the hand is carried out to separate the object from the background; the Otsu algorithm is used for this purpose. The segmented hand image is represented by certain features, which are further used for gesture recognition. Morphological filtering techniques are used to remove noise from the images so that a smooth contour is obtained. The preprocessing operation is performed on the stored database.
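Otsu's algorithm is simple enough to sketch directly. The following snippet is an illustrative Python/NumPy version, not the paper's MATLAB code: it exhaustively searches for the grey level that maximises the between-class variance and applies it to a synthetic bimodal image standing in for a hand photograph.

```python
import numpy as np

def otsu_threshold(image):
    """Exhaustively search the grey-level threshold that maximises
    the between-class variance (Otsu's criterion)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = hist[:t].sum()          # weight of the "background" class
        w1 = total - w0              # weight of the "object" class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark background with a bright 4x4 "hand".
img = np.full((10, 10), 30, dtype=np.uint8)
img[3:7, 3:7] = 220
t = otsu_threshold(img)
binary = (img >= t).astype(np.uint8)   # segmented hand mask
```

In practice MATLAB's `graythresh` performs this search; morphological opening/closing would then be applied to `binary` to smooth the contour.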

Feature Extraction:
Feature extraction is a method of reducing data dimensionality by encoding related information in a compressed representation and removing less discriminative data. Feature extraction is vital to gesture recognition performance. Therefore, the selection of which features to use and of the extraction method are probably the most significant design decisions in hand motion and gesture recognition development. Here we use principal components as the main features.

Sign Recognition

Sign recognition using PCA is based on dimensionality reduction: the desired number of principal components is extracted from the multi-dimensional data. Gesture recognition using the PCA algorithm involves two phases:
- Training Phase
- Recognition Phase
During the training phase, each gesture is represented as a column vector. These gesture vectors are then normalized with respect to the average gesture. Next, the algorithm finds the eigenvectors of the covariance matrix of the normalized gestures, using a speed-up technique that reduces the number of multiplications to be performed. Lastly, the transpose of this eigenvector matrix is multiplied by each of the gesture vectors to obtain their corresponding gesture-space projections.
In the recognition phase, a subject gesture is normalized with respect to the average gesture and then projected onto the gesture space using the eigenvector matrix. The Euclidean distance is then computed between this projection and all of the projections known from the training phase [7][9], and the training gesture with the minimum distance is selected as the recognized sign. Finally, the recognized sign is converted into the corresponding text and voice, which are displayed on the GUI.


Training Phase
Each gesture in the database is represented as a column in a matrix A. The values in each of these columns are the pixels of the gesture image and range from 0 to 255 for an 8-bit grayscale gesture image [9]:

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}$$

where m = number of pixels per image and n = number of gesture images.

The average of matrix A is calculated in order to normalize A. It is a column vector in which each element is the average of the corresponding pixel over all gesture images:

$$Avg = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_m)^T, \qquad \bar{x}_i = \frac{1}{n}\sum_{k=1}^{n} a_{ik}, \quad i = 1, 2, \ldots, m$$

Next, the matrix is normalized by subtracting the vector Avg from each column of matrix A [9]:

$$\Phi = \begin{pmatrix} a_{11}-\bar{x}_1 & \cdots & a_{1n}-\bar{x}_1 \\ \vdots & \ddots & \vdots \\ a_{m1}-\bar{x}_m & \cdots & a_{mn}-\bar{x}_m \end{pmatrix}$$

Then the covariance matrix of the normalized matrix $\Phi$ is computed. Working with $\Phi^T \Phi$ instead of $\Phi \Phi^T$ reduces the size of the covariance matrix, so it is calculated as:

$$L = \Phi^T \Phi$$

The next step is to obtain the eigenvectors of the original covariance matrix. To do this we first calculate the eigenvectors of the reduced matrix $L$; call them $V$, where $V$ has the same size as $L$.

After calculating $V$, the eigenvectors of the original covariance matrix are obtained as:

$$U = \Phi V$$
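The training-phase computations above can be sketched in NumPy (an illustrative Python sketch, not the paper's MATLAB implementation; random data stands in for the 26 flattened gesture images):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 26            # m pixels per image, n training gestures
A = rng.random((m, n))    # each column is one flattened gesture image

# Normalise: subtract the average gesture from every column.
avg = A.mean(axis=1, keepdims=True)
Phi = A - avg

# Speed-up: eigendecompose the small n x n matrix L = Phi^T Phi
# instead of the large m x m covariance matrix Phi Phi^T.
L = Phi.T @ Phi
eigvals, V = np.linalg.eigh(L)

# Map back: if L v = lambda v, then (Phi Phi^T)(Phi v) = lambda (Phi v),
# so the columns of U are eigenvectors of the original covariance matrix.
U = Phi @ V

# Gesture-space projections of the training gestures, one column each.
Omega = U.T @ Phi
```

The identity in the comment is the reason the speed-up works: an n x n eigendecomposition replaces an m x m one, with m typically far larger than n.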

Recognition Phase

We represent the test gesture as a column vector:

$$r = \begin{pmatrix} r_1 \\ \vdots \\ r_m \end{pmatrix}$$

The test gesture is then normalized with respect to the average gesture:

$$\Phi_r = \begin{pmatrix} r_1 - \bar{x}_1 \\ \vdots \\ r_m - \bar{x}_m \end{pmatrix}$$

Next, the test gesture is projected onto the gesture space by the equation given below [9]:

$$\omega = U^T \Phi_r$$
We then find the Euclidean distance between the test projection and each of the stored projections $\Omega_i$ in the database [9]:

$$ED_i = \sqrt{\sum_{j=1}^{n} (\omega_j - \Omega_{ij})^2}, \qquad i = 1, 2, \ldots, n$$

where $\Omega_i$ is the gesture-space projection of the i-th training gesture and n is the number of gestures in the database.

Next, we decide which gesture is recognized by selecting the minimum Euclidean distance in the distance vector ED.
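The recognition phase can be sketched as follows. This is again an illustrative NumPy sketch with random stand-in data rather than the paper's MATLAB code, and `recognize` is our own name for the matching step:

```python
import numpy as np

def recognize(r, U, avg, Omega):
    """Project the test gesture onto gesture space and return the index
    of the training gesture with the smallest Euclidean distance."""
    phi_r = r.reshape(-1, 1) - avg                  # normalise w.r.t. average gesture
    omega = U.T @ phi_r                             # projection onto gesture space
    dists = np.linalg.norm(Omega - omega, axis=0)   # ED to each stored projection
    return int(np.argmin(dists))

# Toy database of 5 "gestures" with 50 pixels each, trained as before.
rng = np.random.default_rng(1)
A = rng.random((50, 5))
avg = A.mean(axis=1, keepdims=True)
Phi = A - avg
_, V = np.linalg.eigh(Phi.T @ Phi)
U = Phi @ V
Omega = U.T @ Phi

# A test image identical to training gesture 3 projects onto exactly
# the stored projection Omega[:, 3], so its distance there is zero.
idx = recognize(A[:, 3], U, avg, Omega)
```

In the real system the recognized index would then be mapped to its alphabet letter for the text and voice output.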

Finally, the recognized sign is converted into the corresponding text and voice alphabet.


The proposed procedure was implemented and tested with a set of images. Fig. 4 shows the set of 26 images of a single person used for the training database, captured by web cam against a white background. The preprocessing results for the same images are shown in Fig. 5.
These preprocessed gestures are taken as input for feature extraction. The minimum Euclidean distance between the test and training images is calculated and the gesture is recognized. The recognized gesture is converted into text and voice format, and the respective features are displayed on the GUI screen. Fig. 6 shows a snapshot of the application working and detecting two different hand gestures, for sign A and sign C.

A MATLAB-based application performing hand gesture recognition for human-computer interaction using the PCA technique was successfully implemented, with accuracy comparable to those of recent contributions. The proposed method gives output in voice and text form, which helps to eliminate the communication barrier between deaf-dumb and normal people. In future this work will be extended to all the phonemes of Marathi signs.

Source: Essay UK
