Vision Based Sign Language Identification System Using Facet Analysis

Bachelor Thesis, 2013

68 Pages, Grade: A+








1.1. Area Preface
1.2. Problem Statement
1.3. Project Objectives
1.4. Scope
1.5. Significance of the Study
1.6. Limitations


3.1. Data Acquisition
3.2. Input Video
3.3. Hand Detection
3.3.1. Skin Detection
3.3.2. Video Processing
3.3.3. Contour Extraction
3.4. Gesture Recognition Technique
3.4.1. Feature Collection
3.4.2. Shape Matching
3.4.3. Hu Invariant Moments Comparison
3.4.4. Recognition Results

4.1. Proposed System Modeling Language
4.1.1. Use Case Diagram
4.1.2. Flow Charts
4.1.3. Sequence Diagram

5.1. System Requirements
5.1.1. Software Requirements
5.1.2. Hardware Requirements
5.2. System Description
5.2.1. Load Video
5.2.2. Hand Detection: Skin Detection Steps and Contour Processing Steps
5.2.3. Gesture Recognition

6.1. Testing
6.1.1. Test Case 1
6.1.2. Test Case 2
6.1.3. Test Case 3
6.1.4. Test Case 4
6.1.5. Test Case 5
6.2. Results




Figure 1: Block Diagram

Figure 2: 7 Hu Invariant Moments

Figure 3: Use Case Diagram of Complete System

Figure 4: SLI System Flow Chart

Figure 5: Flow Chart for Hand Detection

Figure 6: Flow Chart for Skin Detection

Figure 7: Flow Chart for Pre-Processing of Video Frames

Figure 8: Flow Chart for Finding Contours

Figure 9: Flow Chart for Shape Matching

Figure 10: Flow Chart for Hu Comparison

Figure 11: Flow Chart for Recognition

Figure 12: Sequence Diagram for the System

Figure 13: System's Interface

Figure 14: Load Video

Figure 15: Video Loaded

Figure 16: Binarized Frame

Figure 17: Contours with Bounding Rectangle

Figure 18: Detected Hand

Figure 19: Identified Sign

Figure 20: 1st Video

Figure 21: 2nd Video

Figure 22: Detected hand of 1st signer

Figure 23: Detected hand of 2nd signer

Figure 24: Sign Identified "A"

Figure 25: Template for "A"

Figure 26: Sign Identified "E"

Figure 27: Template for "E"

Figure 28: Sign Identified "D" (1)

Figure 29: Sign Identified "D" (2)

Figure 30: Sign Identified "D" (3)

Figure 31: Sign Identified "D" (4)

Figure 32: Sign Identified "D" (5)

Figure 33: Sign Recognition Accuracy Chart


Table 1: Techniques used in different Sign Language recognition systems

Table 2: Use Case 1

Table 3: Use Case 2

Table 4: Use Case 3

Table 5: Use Case 4

Table 6: Use Case 5

Table 7: Use Case 6

Table 8: Use Case 7

Table 9: Test Case 1

Table 10: Test Case 2

Table 11: Test Case 3

Table 12: Test Case 4

Table 13: Test Case 5

Table 14: Hand Detection Accuracy

Table 15: Summary of identified signs

Table 16: Sign Recognition Accuracy


My sincere gratitude goes to my supervisor Ms. Sana Yousuf for her timely follow-ups, her exhaustive support, and for helping me through the days when this project seemed an impossibility, right up to the end of this work. Thanks also to Ms. Sobia Khalid for her assistance as co-supervisor. This thesis could not have been completed without the efforts of Ms. Sana Yousuf and Ms. Sobia Khalid.

Thank you to all of my friends and family for their continuous support, understanding, and valuable, unconditional suggestions. Without them I doubt this work would have been completed.

Many other people deserve thanks for the contributions they have made to this research. The list is very comprehensive, and I greatly appreciate the efforts of everyone who has helped me.

Faryal Amber


This work is dedicated to my parents in appreciation of the love and support they have given to me. In particular, I wish to thank them for encouraging me to consider this discipline of engineering when the opportunity presented itself. They have always provided me with their utmost support in every field of my life. Their encouragement has been a great source of determination for me in achieving my goals in life.

Faryal Amber


The communication gap between the deaf and hearing populations is clearly noticeable. To enable communication between the Deaf and the hearing population, and to bridge the gap in access to next-generation Human Computer Interfaces, automated sign language analysis is highly crucial. An effective solution is to develop a conversion system that translates sign language gestures into text or speech. This thesis explores and experiments with an efficient methodology based on facet feature analysis, for a recognition system that can recognize gestures from video and serve as a translator. The proposed methodology extracts candidate hand gestures from a sequence of video frames and collects hand features. The system has three separate parts: Hand Detection, Shape Matching and Hu Moments Comparison. The Hand Detection part detects the hand through skin detection and by finding contours; it also includes processing of the video frames. Shape matching is achieved by comparing histograms. The Hu moment values of the candidate hand region are computed using contour region analysis and compared against reference templates to identify the particular sign language alphabet. Experimental analysis supports the efficiency of the proposed methodology on benchmark data.

Keywords: Contours, Skin Detection, Shape Matching, Gesture Recognition, Hu Moments Comparison, Sign Language Identification System


1.1. Area Preface

Most people can convey their views and thoughts to others through speech. For the hearing-impaired community, sign language is the only means of communication. At present, communication between hearing-impaired and hearing persons is practically nonexistent. In order to facilitate communication between the Deaf and hearing populations, and to bridge the gap in access to next-generation Human Computer Interfaces, automated sign language interpretation is highly essential. Hand gestures are beneficial in noisy settings when compared to speech instructions, in conditions where speech commands would be disruptive, and for conveying spatial relationships and quantitative information. Such systems, installed at public places like airports, banks and hospitals, can ease communication between the Deaf community and the hearing community. They may cover finger spelling, isolated signs, continuous sign language and so on.

A sign is a type of non-verbal statement made with a part of the body and used as a substitute for spoken statements or in combination with them. Most people use body language and signs when they talk, in addition to words. A sign language uses gestures instead of sound to express meaning, combining hand shapes, orientation, direction and movement of the hands, arms or body, facial expressions and lip patterns. Contrary to popular belief, sign language is not an international language. As with vocal languages, sign languages differ from region to region, and they are not entirely based on the spoken language of their country of origin.

Sign language consists of 3 major components [11]:

- Finger-spelling: words are spelled out letter by letter
- Word-level sign vocabulary: used for the greater part of communication
- Non-manual features: mouth, tongue and body position and facial expressions

For many deaf people, sign language is the standard way of communicating. One problem is that hardly any people who are not themselves deaf ever learn to sign. This increases the segregation of deaf people, who may be limited in many of their interactions to communicating only with other deaf people. Technology might have a part to play here: if computers could be programmed to identify sign language, they could convert it into another form such as synthesized speech or written text. The technology to attempt this translation is now available, but substantial problems must be solved before it becomes a reality.

Research on gesture recognition systems can be classified into two types. The first uses electromechanical devices; such systems interfere with the signer's natural signing ability. The second is vision based, and is itself divided into approaches that use colored gloves and approaches that use no devices at all, leaving the signer's natural signing ability unaffected. Sign language recognition is an interesting problem because it is not yet solved, it is used for communication by many people, and it has a huge lexicon, which makes the problem nontrivial.

1.2. Problem Statement

To enable communication among the visually impaired, the Deaf and the hearing population, and to bridge the gap in access to next-generation Human Computer Interfaces, automated sign language analysis is highly crucial.

1.3. Project Objectives

The main focus of this study is to develop a recognition system for sign language interpretation using image processing and computer vision techniques.

1.4. Scope

Given the time constraints, as well as the complexity of sign language recognition for the hearing-impaired community, the extent of this study has been narrowed down to a few precise points. The proposed sign language identification system recognizes signed alphabets using feature set analysis; recognition of an appropriate and distinct vocabulary is carried out by comparing and matching features.

1.5. Significance of the Study

The importance of this project is to help hearing-impaired persons communicate with others, and to help hearing people understand hearing-impaired people as well.

1.6. Limitations

- The system is not a real-time web camera based system.
- The video database consists of videos that contain only the hand.
- The skin color tone ranges between (0, 300, 200) and (255, 185, 135).
- The proposed system covers only English alphabets.


In the literature, efforts to identify sign language automatically began to emerge in the 1990s. Many investigators are trying to build automatic sign language recognition systems for various sign languages, and a variety of work has been carried out on sign language recognition methods.

The review of literature was done to become familiar with and identify the techniques used in sign language recognition systems, how they are applied and at what stages. The literature review covers classification, feature extraction and image or video preprocessing techniques. A table summarizing the methods used in the surveyed papers is also provided. Ten research papers from different journals and conferences have been included. The following section gives a brief description and comparison of the techniques that helped in carrying out the thesis.

P. V. V. Kishore and P. R. Kumar developed a system intended for recognizing a subset of the Indian sign language. The work was implemented by training a Takagi-Sugeno-Kang (Sugeno) Fuzzy Inference System, chosen because its output membership functions are constant or linear. The Sugeno fuzzy inference system consists of five important steps: fuzzification of the input variables, applying the fuzzy operator, calculating the rule weights, calculating the output level and, lastly, defuzzification [1]. The fuzzy rule base is produced by a subtractive clustering technique in the Sugeno fuzzy procedure for classification of a video. In another study by P. V. V. Kishore and P. R. Kumar, gesture recognition of Indian Sign Language was done under real-time conditions such as diverse lighting and different backgrounds, with the aim of making the system signer independent. The proposed system was trained with an artificial neural network, and the network outputs the voice command appropriate to the trained sign [2]. The research of S. Yacoob et al. was intended to identify the gestures of English phonemes; a simple neural network model using error back propagation was implemented to categorize the different gestures [3]. A. A. A. Youssif et al. proposed an Arabic Sign Language Recognition System which recognizes 20 isolated words from the Standard Arabic Sign Language. The proposed system is signer independent. An HMM, a probabilistic model representing a given process with a set of states and transition probabilities between the states, is used for the training and recognition phases [4]. The implementation of the HMM for the ArSL system was carried out using the HTK toolkit, a handy toolkit for building and working with Hidden Markov Models [4]. The work of L. P. Vargas et al. presented an image pattern identification system using a neural network for the classification of sign language for the hearing-impaired community.
A multilayer neural network trained with a back propagation technique was used in the design. The network is formed from an input layer, a hidden layer and an output layer; the weights for the NN are loaded beforehand and stored in RAM memories. A hyperbolic tangent activation function was employed for the input and hidden layer neurons (5 neurons each) and a linear activation function for the single neuron at the output layer [5]. The system described by V. S. Kulkarni and Dr. S. D. Lokhande performs visual recognition of all static gestures of the American Sign Language alphabet using bare hands, and is made background independent; the work recognizes ASL static signs only. The important step is the classification stage, where a 3-layer feed-forward back propagation neural network with supervised learning is implemented [6]. O. M. Fang et al. proposed a hand gesture detection and recognition method intended for international sign language. A neural network was implemented as the knowledge base for the sign language, and a Sign to Voice system prototype was developed with a feed-forward neural network for two-sequence sign detection [7]. The technique proposed by Q. Chen et al. [8] is a two-level approach to identifying hand signs in a dynamic environment. The lower level performs posture recognition with Haar-like features and the AdaBoost learning algorithm; AdaBoost greatly increases performance by building a strong classifier from a sequence of weak classifiers. Different hand postures are then classified using a parallel cascade structure based on the trained cascade classifiers. A stochastic context-free grammar (SCFG) is applied to analyze the syntactic hand structure based on the identified postures for higher-level hand gesture recognition; the postures identified at the lower level are converted into a sequence of terminal strings according to the grammar rules.
Given an input string, and based on the probability associated with each production rule, the corresponding gesture can be identified by searching for the production rule with the highest probability of generating it [8]. Principal Component Analysis (PCA) was used to reduce the dimension of the input vectors which, for a neural network, are highly correlated. G. Caridakis et al. present an automated sign language recognition system focused on two research problems: automation of sign language recognition, and a novel classification scheme integrating Markov chains, Self-Organizing Maps and Hidden Markov Models [9]. J. N. Sawaya et al. presented a real-time isolated-sign and sentence American Sign Language (ASL) gesture recognition system. The main focus was on the preprocessing of images to separate hand movements and poses so as to permit rapid and correct detection. J. N. Sawaya et al. implement different techniques in the gesture recognition module. The Continuously Adaptive Mean Shift algorithm is applied to track the white pixels for a more exact calculation of the extent of the window. The image is compared against a set of previously stored patterns and a resemblance metric between the template and the image is computed; template matching handles image rotation as well as image translation. The gesture is recognized by the template with the highest matching metric. This is static gesture recognition; for dynamic gestures, window tracking is combined with the previously detected gesture. The identified gesture is also used to improve the stored templates by considering both the stored patterns and the identified gesture [10].

The Discrete Wavelet Transform, the Canny edge detector and elliptical Fourier descriptors were used for feature extraction by P. V. V. Kishore and P. R. Kumar. The Fourier descriptors allow a small set of numbers that describe a shape to be selected for each image frame. In another study, P. V. V. Kishore and P. R. Kumar also extracted features from segments of hand and head shapes, including tracking information in the form of hand locations from each video frame, using a gray co-occurrence matrix. S. Yacoob et al. extracted descriptive features using the moment invariant algorithm; the moment invariant properties are invariant to rotation, scale and translation [3]. For hand features, a model of the hand consisting of the palm, the five fingers and the finger tips has been presented as a coarse-scale blob, edges at finer scales and even finer-scale blobs [4]. When aligning models to images, the features are rotated, translated and scaled according to the feature matrix. For "Appearance Based Recognition of ASL Using Gesture Segmentation", during the feature extraction stage every RGB image is resized and converted to a gray-scale image. An edge detection technique is then applied to mark the positions at which the intensity changes sharply; sharp changes in image properties frequently reveal significant events and changes in world properties. To identify where features are in the images, the Canny edge detector is used, which gives better edge detection than the Sobel edge detector [6]. Q. Chen et al. describe that hand posture patterns can be predicted efficiently using Haar-like features with the addition of the "integral image" [8]. Background subtraction produces noise, which was reduced using a Gaussian filter and image dilation/erosion.
The authors use a range of descriptors as shape features, both boundary-based (curvature features, Fourier descriptors) and region-based (moments, moment-based features) [9]. Various techniques have been applied for gesture feature extraction by J. N. Sawaya et al. Histogram equalization is applied to the grabbed frames in order to emphasize the gesture shape, and background pixels are filtered from the grabbed frames in order to extract an optimal skin color range [10]. Erosion and dilation, two morphological filters, were used to improve the image acquired after thresholding. To find contours of connected threshold pixels, a flood-filling method is used and the area enclosed by the contours is then filled. For better contour shaping, the image is then passed through a smoothing filter, a 2D Gaussian filter.

The proposed system of P. V. V. Kishore and P. R. Kumar performs video preprocessing consisting of resizing and filtration using a Gaussian low-pass filter. The Fourier coefficients carry shape information that is not sensitive to translation, rotation and scale changes [1]. Active contour models, or snakes, which are capable of segmenting and tracking non-rigid hand shapes and head motions, were employed by P. V. V. Kishore and P. R. Kumar for segmentation and tracking [2]. For the preprocessing stage, S. Yacoob et al. applied skin color detection algorithms to each image frame, and the right and left hands were segmented; the vertical maximum interleaving method is applied by comparing pixel values column by column. The hand tracking and recognition stages of the Arabic sign language system consist of three phases. Skin detection involves converting the captured RGB frames into the HSV color space, and a smoothing median filter is used to remove noise and shadow. Canny edge detection is applied to mark boundaries as close as possible to the actual boundaries, maximizing localization and marking each edge only once where a single edge exists, for minimal response. Hand tracking is done to locate a moving hand; for each extracted frame, the contours of all detected skin areas in the binary image are found using connected component analysis [4]. The system of L. P. Vargas et al. uses static gray-scale images with every pixel value between 0 and 255 [5]. On this binarization, edge detection is applied; for edge detection through the Laplacian operator, a second-derivative algorithm was used. After this, the input image is stored in memory. The proposed system of O. M. Fang et al. captures the hand using a camera and converts the image to gray scale, either white or black. The boundary of the object, which contrasts with the background images, is determined and then segmented.
The pre-processing phase of the system involves hand detection using the Sobel operator, which calculates the gradient of the image. Linear structuring elements are used to dilate the image, and a bitwise XOR operation then enhances the desired hand region. A threshold is set in order to select which image will be captured inside the bounding box [7]. The Canny edge detector results in better edge detection compared with the Sobel edge detector, as anticipated by V. S. Kulkarni and Dr. S. D. Lokhande. Another technique, applied by G. Caridakis et al. for the video processing stage, is segmentation of the video frames using the Geodesic Active Region model. Geodesic active contour models are deformable two-dimensional contours specifically designed for segmentation, and they result in strong and consistent hand detection and tracking. The segmentation process is also combined with skin tone and movement information to detect and extract hand silhouettes, while the extracted features describe the hand trajectory, area and shape [9].

Table 1 shows the techniques used by different researchers in different sign language recognition systems. Among the different research approaches, almost every author has implemented his or her work using various versions of the MATLAB software; no work has been done in the Visual Studio C# environment, which is a notable gap. Different feature extraction techniques have been used, which shows that computer vision and digital image processing are very vast fields, and any new technique applied in future systems could succeed. Almost every researcher has calculated the recognition rate of training and testing data for the proposed system, except G. Caridakis et al. The minimum recognition rate observed was 78% and the maximum was 96%.

Table 1: Techniques used in different Sign Language recognition systems

illustration not visible in this excerpt


This chapter presents the implementation techniques used to build the Sign Language Identification System. It comprises a thorough explanation of the system and gives a general idea of the image/video handling algorithms and functions needed to run the application.

The C#.NET Framework is used because it provides a convenient environment and a comprehensible interface for building the system. The OpenCV and EmguCV libraries, developed for image processing, are used to write the required functions. These libraries are fundamentally designed for real-time computer vision and for handling diverse image/video processes such as gesture recognition, motion tracking and object identification.

A divide-and-conquer approach has been pursued to build the system: the complete system is partitioned into several components, which makes it possible to focus on small tasks and complete them in parallel. The subsequent step is to assemble the components and examine their functionality as a whole. The following block diagram shows an abstract view of the processes involved in the sign language recognition system.

illustration not visible in this excerpt

Figure 1: Block Diagram

The user loads the desired video and the system starts capturing frames. Preprocessing of the video frames is then done, and the user's hand is detected using skin detection and contour points. The images/video streams are next sent to the compare functions, where they are matched against stored image templates. After a sign is recognized, an image containing the identified alphabet appears on the screen.
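The stages above can be sketched as a simple chain. This is a hypothetical Python sketch of the data flow only; the thesis implements the real stages in C# with EmguCV, and every function and value here is a placeholder:

```python
# Hypothetical sketch of the processing pipeline: load -> preprocess ->
# detect hand -> compare with templates -> report the identified sign.
# All stages are toy stand-ins for the EmguCV-based C# implementation.

def preprocess(frame):
    # Stand-in for histogram equalization, smoothing and thresholding.
    return frame

def detect_hand(frame):
    # Stand-in for skin detection plus contour extraction.
    return frame  # the "hand region"

def compare_with_templates(hand, templates):
    # Stand-in for histogram and Hu-moment comparison: pick the template
    # whose (toy) distance to the hand region is smallest.
    return min(templates, key=lambda t: abs(t[1] - hand))

def recognize(frames, templates):
    results = []
    for frame in frames:
        hand = detect_hand(preprocess(frame))
        label, _ = compare_with_templates(hand, templates)
        results.append(label)
    return results

# Toy run: frames are plain numbers, templates pair labels with numbers.
templates = [("A", 1.0), ("D", 4.0), ("E", 5.0)]
print(recognize([0.9, 4.2], templates))  # ['A', 'D']
```

The real system replaces each toy distance with the histogram and Hu-moment scores described in Sections 3.4.2 and 3.4.3.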

3.1. Data Acquisition

The benchmark video database is acquired from the RWTH website. The RWTH gesture database holds 35 gestures, with video sequences for the numbers 1 to 5 and for the signs SCH, A to Z and the German umlauts Ä, Ö, Ü [11]. Of these video sequences, only the videos for the signs A to Z are used in the proposed Sign Language Identification System.

3.2. Input Video

These videos are loaded into the system using the EmguCV library. The Capture() function is used to play the videos, and a Timer() slows playback down, because the EmguCV library otherwise plays the videos faster than their native frame rate. The total number of frames and the frames per second of each video are also calculated. After this, each frame of the video stream is passed to the preprocessing class for further processing.
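The playback bookkeeping above amounts to a little arithmetic. A sketch with assumed values (the thesis reads the real frame count and FPS from the video via EmguCV):

```python
# Assumed example values; in the thesis these are read from the video
# stream (e.g. frame count and FPS properties of the capture object).
total_frames = 150
fps = 25.0

# Timer interval that makes playback run at the video's native speed,
# and the resulting clip duration.
delay_ms = 1000.0 / fps
duration_s = total_frames / fps

print(delay_ms, duration_s)  # 40.0 6.0
```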

3.3. Hand Detection

For detecting hand in the video, the following three steps have been pursued:

- Skin Detection
- Video Processing
- Contour Extraction

3.3.1. Skin Detection

Hand detection is done using a skin detection algorithm. Skin color segmentation is the main step, and a robust and correct skin color detection algorithm is essential, because the succeeding steps depend heavily on the quality of the segmented image. It is vital to select a color space suitable for the application at hand. The YCbCr color space (Y is luminance; Cb and Cr are the blue-difference and red-difference chrominance components), an encoded nonlinear RGB signal, is used for skin modeling; it was chosen for skin segmentation because of its computational advantages. The procedure of skin detection is as follows:

The acquired video frames are converted to the YCbCr color space with minimum and maximum values. The data values of each frame are saved in a matrix. A loop over the columns and rows of the matrix applies a conditional statement to compute values, which are assigned to a new frame. Dilation and erosion are then applied to this frame using a specified structuring element. The resultant frame is returned for further processing and removal of unwanted and noisy areas.
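The per-pixel test above can be sketched in a few lines. This is a minimal Python/numpy stand-in for the EmguCV code, using the standard RGB-to-YCbCr conversion and a commonly cited skin range (Cb 77-127, Cr 133-173); the thesis's exact thresholds differ, but the mechanism is the same:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 float RGB array (0-255 range) to YCbCr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask that is True where a pixel falls in the skin range."""
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# A skin-like pixel next to a pure blue pixel:
frame = np.array([[[200, 140, 110], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(frame)
print(mask)  # [[ True False]]
```

In the thesis the resulting binary mask is then cleaned with dilation and erosion before further processing.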

3.3.2. Video Processing

The captured video frames are converted from BGR to gray frames for processing. Initially, the intensity and contrast of each video frame are adjusted with the EqualizeHist() function, which stabilizes brightness and boosts contrast. The frames are then smoothed using PyrUp() and PyrDown(). PyrDown() performs the down-sampling step of Gaussian Pyramid Decomposition (GPD): it convolves the frame with the specified filter and then down-samples it by rejecting even columns and rows. PyrUp() carries out the up-sampling step of GPD: it up-samples the frame by injecting even zero columns and rows and then convolves the result with the specified filter multiplied by 4 for interpolation, so the resulting frame is 4 times larger than the source frame. For further noise filtering, a Canny threshold is applied; Canny() locates the edges in the frame and marks them in the returned frame. The frames are then passed through a threshold filter which removes pixels whose brightness is below a certain threshold, using the built-in cvAdaptiveThreshold() function. Each frame is converted into a binary image by this function, so the frames contain only bright foreground pixels. By applying _Not(), the complement of each frame is computed.
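The adaptive threshold step can be illustrated with a small Python/numpy sketch: each pixel is compared against the mean of its local block minus a constant C, which is how the mean-based mode of cvAdaptiveThreshold works (border handling is simplified here relative to the real EmguCV implementation):

```python
import numpy as np

def adaptive_threshold(gray, block=3, C=2.0):
    """Toy mean-based adaptive threshold: pixel -> 255 if it exceeds
    the mean of its block-sized neighborhood minus C, else 0."""
    h, w = gray.shape
    pad = block // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = 255 if gray[i, j] > local_mean - C else 0
    return out

# A bright column next to dark pixels: the dark pixels adjacent to the
# bright edge fall below their local mean and are zeroed out.
gray = np.array([[10, 10, 200],
                 [10, 10, 200],
                 [10, 10, 200]], dtype=np.uint8)
binary = adaptive_threshold(gray)
```

In the thesis the binarized frame is then inverted with _Not() before contour extraction.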

3.3.3. Contour Extraction

A contour is a sequence of points that represents, in one manner or another, a curve in an image [13]; this representation can differ depending on the situation. The function FindContours() computes contours of the skin-detected region from the binary frames of the video. It takes frames created by Canny(), which contain boundary pixels, and frames created by AdaptiveThreshold(), in which edges are implied as boundaries between positive and negative regions. After this, the biggest contours are extracted and filtered from the found contours to get the boundary of the bright pixels. The contour points can be displayed using DrawContours().
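The "keep the biggest contour" filtering can be sketched as follows. This Python stand-in keeps the largest connected foreground region of a binary mask, a simplified substitute for FindContours() followed by filtering by contour area:

```python
import numpy as np
from collections import deque

def largest_region(mask):
    """Return a mask containing only the largest 4-connected
    foreground region (a toy stand-in for largest-contour filtering)."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Breadth-first search over the connected region.
                region, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(region) > len(best):
                    best = region
    out = np.zeros_like(mask)
    for y, x in best:
        out[y, x] = True
    return out

# Two blobs: a 2-pixel blob on the left, a 3-pixel blob on the right.
mask = np.array([[1, 0, 0, 1],
                 [1, 0, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
print(largest_region(mask).sum())  # 3
```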

3.4. Gesture Recognition Technique

For the purpose of sign language identification, the following steps have been implemented:

- Feature Collection
- Shape Matching
- Hu Moments Comparison
- Recognition Results

3.4.1. Feature Collection

The x-y coordinates of each contour point in every frame of the video are saved in an array. The convex hull of the hand contour is computed to characterize the shape of the hand, and then the convexity defects are computed; the shapes of many complex objects are well characterized by such defects [13]. ConvexityDefects() calculates the defects and returns them as a sequence. For evaluating two contours of different images, the contour moments of the hand area are computed. A moment is a gross characteristic of the shape, computed by summing or integrating over all of the pixels of the hand form. After this, the Hu moments are calculated from the contour moments. The Hu invariant moments are combinations of the central moments; by combining the various hand contour moments, it is possible to generate invariant functions describing distinct aspects of the frame in a way that is invariant to rotation, scale and (for all but the one called h1) reflection [13]. The cvGetHuMoments() function computes the Hu moments from the central moments. The following is the actual definition of the 7 Hu invariant moments [13]:

illustration not visible in this excerpt

Figure 2: 7 Hu Invariant Moments
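Since the figure itself is not reproduced in this excerpt, the seven Hu invariants can be sketched directly from the normalized central moments. This Python/numpy version mirrors what cvGetHuMoments() computes, here applied to a binary image, and checks the translation invariance the text describes:

```python
import numpy as np

def hu_moments(img):
    """Compute the 7 Hu invariant moments of a binary image from its
    normalized central moments eta_pq."""
    ys, xs = np.nonzero(img)
    m00 = len(xs)
    x, y = xs - xs.mean(), ys - ys.mean()

    def eta(p, q):  # normalized central moment
        return (x**p * y**q).sum() / m00**((p + q) / 2 + 1)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = e20 + e02
    h2 = (e20 - e02)**2 + 4 * e11**2
    h3 = (e30 - 3*e12)**2 + (3*e21 - e03)**2
    h4 = (e30 + e12)**2 + (e21 + e03)**2
    h5 = ((e30 - 3*e12)*(e30 + e12)*((e30 + e12)**2 - 3*(e21 + e03)**2)
          + (3*e21 - e03)*(e21 + e03)*(3*(e30 + e12)**2 - (e21 + e03)**2))
    h6 = ((e20 - e02)*((e30 + e12)**2 - (e21 + e03)**2)
          + 4*e11*(e30 + e12)*(e21 + e03))
    h7 = ((3*e21 - e03)*(e30 + e12)*((e30 + e12)**2 - 3*(e21 + e03)**2)
          - (e30 - 3*e12)*(e21 + e03)*(3*(e30 + e12)**2 - (e21 + e03)**2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

# Translation invariance: an L-shape and a shifted copy give the same values.
img = np.zeros((20, 20)); img[2:8, 2:5] = 1; img[6:8, 2:10] = 1
shifted = np.roll(np.roll(img, 5, axis=0), 7, axis=1)
assert np.allclose(hu_moments(img), hu_moments(shifted))
```

Because the invariants are built from central moments normalized by m00, the same check passes for scaled and rotated copies of the shape up to discretization error.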

3.4.2. Shape Matching

The shape matching algorithm is based on the 2D pairwise geometrical histogram of the hand contours. The function cvCalcPGH() computes, for pairs of contour edges, the minimum/maximum distances and the angle between them. The 2D pairwise geometrical histograms of both the template and the target video frame are calculated. Both histograms are then normalized, and cvCompareHist() is used to compare the two dense histograms. The resultant value is stored in a variable; the lower the comparison value, the better the match.
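The comparison step can be sketched in Python/numpy. Both histograms are normalized and scored with the chi-square distance, one of the criteria cvCompareHist() offers, where lower means a better match as in the text (the histograms here are illustrative, not real pairwise geometrical histograms):

```python
import numpy as np

def chi_square(h1, h2):
    """Chi-square distance between two histograms, after normalizing
    each to sum to 1. Lower values mean a better match."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    denom = h1 + h2
    nz = denom > 0
    return ((h1[nz] - h2[nz])**2 / denom[nz]).sum()

template = np.array([4.0, 2.0, 1.0, 1.0])
close    = np.array([8.0, 4.0, 2.0, 2.0])   # same shape, different scale
far      = np.array([1.0, 1.0, 2.0, 4.0])   # different shape

assert chi_square(template, close) < chi_square(template, far)
print(chi_square(template, close))  # 0.0 (identical after normalization)
```

Normalization is what makes the score insensitive to the absolute bin counts, so a scaled copy of the same distribution matches perfectly.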

3.4.3. Hu Invariant Moments Comparison

The hand contour areas of both the template and the target video frame are compared. Hu moments are used to compare the two objects and determine whether they are similar; the similarity depends on the criterion that is provided. The chosen CONTOURS_MATCH type returns a value: the lower the value, the higher the probability of a correct match.
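One such criterion, OpenCV's CONTOURS_MATCH_I1, compares log-scaled Hu moments; a Python sketch with illustrative Hu vectors (the real vectors come from the contour moments of the detected hand and the template):

```python
import numpy as np

def match_i1(hu_a, hu_b, eps=1e-30):
    """CONTOURS_MATCH_I1-style score: sum of |1/m_i^A - 1/m_i^B| where
    m_i = sign(h_i) * log10(|h_i|). Lower means more similar."""
    ma = np.sign(hu_a) * np.log10(np.abs(hu_a) + eps)
    mb = np.sign(hu_b) * np.log10(np.abs(hu_b) + eps)
    return np.abs(1.0 / ma - 1.0 / mb).sum()

# Illustrative Hu vectors: "same" is a slight perturbation of "hand",
# "different" differs by orders of magnitude in several components.
hand      = np.array([2.1e-1, 3.0e-3, 5.0e-5, 1.0e-6, 1.0e-12, 5.0e-8, -2.0e-12])
same      = np.array([2.0e-1, 3.1e-3, 5.2e-5, 1.1e-6, 1.1e-12, 5.1e-8, -2.1e-12])
different = np.array([6.0e-1, 9.0e-2, 7.0e-3, 2.0e-4, 3.0e-8, 6.0e-5, -8.0e-9])

assert match_i1(hand, same) < match_i1(hand, different)
```

The log scaling matters because the seven Hu moments span many orders of magnitude; comparing them directly would let h1 dominate the score.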

3.4.4. Recognition Results

If the values of both of the above variables are less than the provided thresholds while matching the target video frames against the reference templates, the corresponding sign is displayed on the screen.
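The decision rule is a simple conjunction of the two scores. A sketch with illustrative thresholds (the thesis's actual threshold values are not given in this excerpt):

```python
# Illustrative thresholds; the thesis's values are tuned empirically.
HIST_THRESHOLD = 0.2   # histogram-comparison score limit
HU_THRESHOLD = 0.05    # Hu-moment-comparison score limit

def decide(hist_score, hu_score, label):
    """Accept the candidate sign only when BOTH scores are below
    their thresholds; otherwise report no match."""
    if hist_score < HIST_THRESHOLD and hu_score < HU_THRESHOLD:
        return label
    return None

print(decide(0.10, 0.01, "A"))  # A
print(decide(0.10, 0.30, "A"))  # None
```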


This section presents the design of the Sign Language Identification System. The design of the system is concerned with how the system's functionality is to be provided by the modules of the software system; part of the design process entails choosing which system capabilities are to be employed in the software system [12]. The design defines the organization of the software, which comprises the software components or modules, the externally visible properties of those components and the data structures. The coupling between software units, in terms of data flow and control flow, must be clear.

Design should demonstrate following properties:

- It should avoid complexity and be easily understandable.
- The design should be such that its pieces can be reused in other systems.
- It should be adaptable to any required form.
- The design should explain the functioning of the system entirely and clearly.
- It should be easily open to change.
- It should cover the user requirements.

4.1. Proposed System Modeling Language

The design of the proposed system comprises a use case diagram, a sequence diagram and flow charts.

4.1.1. Use Case Diagram

UML describes software both behaviorally (the dynamic view) and structurally (the static view). It has a graphical notation for creating visual models of systems and permits extension with a profile. UML includes elements such as actors, activities, use cases and components. The use case diagram is a behavioral UML diagram: it expresses the functionality offered by a system in terms of actors, their goals represented as use cases, and any dependencies among these use cases.

Use Case Diagram for Sign Language Identification System

illustration not visible in this excerpt

Figure 3: Use Case Diagram of Complete System

Use Case 1

illustration not visible in this excerpt

Table 2: Use Case 1

Use Case 2

illustration not visible in this excerpt

Table 3: Use Case 2

Use Case 3

illustration not visible in this excerpt

Table 4: Use Case 3

Use Case 4

illustration not visible in this excerpt

Table 5: Use Case 4

Use Case 5

illustration not visible in this excerpt

Table 6: Use Case 5

Use Case 6

illustration not visible in this excerpt

Table 7: Use Case 6

Use Case 7

illustration not visible in this excerpt

Table 8: Use Case 7


Quote paper
Faryal Amber (Author), 2013, Vision Based Sign Language Identification System Using Facet Analysis, Munich, GRIN Verlag.

