The communication gap between the deaf and hearing populations is readily apparent. Automated sign language analysis is crucial both for enabling communication between the Deaf community and the hearing population and for bridging the gap in access to next-generation human-computer interfaces. A practical solution is to build a conversion system that translates sign language gestures into text or speech. This work explores and evaluates an efficient methodology based on facet feature analysis. To build a recognition system that recognizes gestures from video for use in translation, a methodology is proposed that extracts candidate hand gestures from a sequence of video frames and collects hand features. The system has three parts: hand detection, shape matching, and Hu moment comparison. The hand detection stage, which includes preprocessing of the video frames, locates the hand through skin detection and contour finding. Shape matching is performed by comparing histograms. The Hu moments of the candidate hand region are computed through contour analysis and compared against stored values to identify the corresponding sign language alphabet letter. Experimental analysis on benchmark data supports the efficiency of the proposed methodology.
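The hand detection stage described above relies on per-pixel skin detection. As a minimal sketch, the rule below uses a common RGB skin-color heuristic; the specific thresholds are an assumption for illustration, not necessarily those used in the thesis:

```python
def skin_mask(pixels):
    """Classify (r, g, b) pixels with a simple RGB skin rule.

    Returns a binary mask: 1 = candidate skin pixel, 0 = background.
    The thresholds are a widely used heuristic, assumed here.
    """
    mask = []
    for (r, g, b) in pixels:
        is_skin = (r > 95 and g > 40 and b > 20 and
                   max(r, g, b) - min(r, g, b) > 15 and
                   abs(r - g) > 15 and r > g and r > b)
        mask.append(1 if is_skin else 0)
    return mask

# A skin-toned pixel and a blue background pixel:
print(skin_mask([(220, 170, 140), (30, 60, 200)]))  # [1, 0]
```

In a full pipeline, the resulting binary mask would then be passed to contour extraction to isolate the candidate hand region.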
Table of Contents
- Chapter 1: Introduction
- 1.1 Introduction
- 1.2 Problem Statement
- 1.3 Objectives
- 1.4 Scope of Work
- 1.5 Significance
- 1.6 Thesis Organization
- Chapter 2: Literature Review
- 2.1 Introduction
- 2.2 Overview of Sign Languages
- 2.3 Sign Language Recognition Techniques
- 2.3.1 Glove-Based Sign Language Recognition
- 2.3.2 Vision-Based Sign Language Recognition
- 2.4 Techniques Used for Feature Extraction
- 2.5 Comparative Studies
- 2.6 Conclusion
- Chapter 3: Proposed System
- 3.1 Introduction
- 3.2 System Design
- 3.3 System Flow
- 3.4 Hand Detection
- 3.5 Hand Shape Matching
- 3.6 Hu Moments
- 3.7 System Implementation
- 3.8 Evaluation
- 3.9 Conclusion
- Chapter 4: System Implementation
- 4.1 Introduction
- 4.2 Implementation of the Proposed System
- 4.3 Hand Detection and Segmentation
- 4.4 Hand Shape Matching
- 4.5 Hu Moment Calculation
- 4.6 Database of Sign Language Alphabet
- 4.7 Recognition System
- 4.8 Experimental Setup and Results
- 4.9 Conclusion
- Chapter 5: Conclusion and Future Work
- 5.1 Conclusion
- 5.2 Future Work
Objectives and Key Themes
This thesis explores the development of a vision-based system for identifying sign language gestures. The main objective is to bridge the communication gap between deaf and hearing individuals by translating sign language gestures into text or speech.
- Automated sign language analysis for improved human-computer interaction.
- Development of a sign language recognition system using facet analysis for accurate gesture identification.
- Exploration of efficient methodologies for extracting and analyzing hand features from video sequences.
- Integration of techniques like skin detection, shape matching, and Hu moment analysis to enhance system performance.
- Evaluation of the proposed system's effectiveness using benchmark data.
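Among the techniques listed above, Hu moments provide shape descriptors that are invariant to translation and scale. As an illustrative sketch (an assumption-level example, not the thesis's exact implementation), the code below computes the first Hu moment, H1 = η20 + η02, from the central moments of a binary silhouette and shows its translation invariance:

```python
def central_moment(img, p, q):
    """Central moment mu_pq of a binary image given as rows of 0/1."""
    m00 = sum(sum(row) for row in img)
    xbar = sum(x * v for row in img for x, v in enumerate(row)) / m00
    ybar = sum(y * v for y, row in enumerate(img) for v in row) / m00
    return sum((x - xbar) ** p * (y - ybar) ** q * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_first(img):
    """First Hu moment H1 = eta20 + eta02 (translation/scale invariant)."""
    mu00 = central_moment(img, 0, 0)
    eta = lambda p, q: central_moment(img, p, q) / mu00 ** ((p + q) / 2 + 1)
    return eta(2, 0) + eta(0, 2)

# An L-shaped blob and the same blob translated by one pixel:
shape = [[1, 0, 0],
         [1, 0, 0],
         [1, 1, 0]]
shifted = [[0, 0, 0],
           [0, 1, 0],
           [0, 1, 0],
           [0, 1, 1]]
print(abs(hu_first(shape) - hu_first(shifted)) < 1e-9)  # True
```

Because central moments are computed relative to the shape's centroid, translating the hand within the frame leaves the Hu moments unchanged, which is what makes them suitable for matching candidate hand regions against stored alphabet signs.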
Chapter Summaries
- Chapter 1: Introduction: This chapter introduces the problem of communication barriers faced by the deaf community and highlights the significance of automated sign language recognition systems. It outlines the research objectives, scope of work, and the organization of the thesis.
- Chapter 2: Literature Review: This chapter provides a comprehensive overview of existing sign language recognition techniques, focusing on glove-based and vision-based approaches. It discusses various feature extraction methods and presents a comparative analysis of existing systems.
- Chapter 3: Proposed System: This chapter presents the detailed design of the proposed vision-based sign language recognition system, including its architecture, system flow, and key components like hand detection, shape matching, and Hu moment analysis. It also discusses the system implementation strategy and evaluation methodology.
- Chapter 4: System Implementation: This chapter describes the practical implementation of the proposed system, covering aspects like hand detection and segmentation, shape matching algorithms, Hu moment calculation, and the creation of a sign language alphabet database. It also presents the experimental setup and analysis of the system's performance.
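The shape matching described in Chapters 3 and 4 is attained by comparing histograms. The particular distance measure is not specified here, so as a hedged sketch the function below uses a plain correlation metric (the same idea as OpenCV's correlation comparison method, assumed for illustration):

```python
def hist_correlation(h1, h2):
    """Correlation between two equal-length histograms.

    Returns 1.0 for identical histograms; values near 0 indicate
    unrelated distributions. Metric choice is an assumption here.
    """
    n = len(h1)
    mean1, mean2 = sum(h1) / n, sum(h2) / n
    num = sum((a - mean1) * (b - mean2) for a, b in zip(h1, h2))
    den = (sum((a - mean1) ** 2 for a in h1) *
           sum((b - mean2) ** 2 for b in h2)) ** 0.5
    return num / den if den else 0.0

# A candidate hand histogram compared against itself and a reshuffled one:
hand = [4, 9, 7, 2, 1]
print(round(hist_correlation(hand, hand), 6))  # 1.0
print(hist_correlation(hand, [1, 2, 7, 9, 4]))
```

A recognition loop would compute this score between the candidate hand's histogram and each stored alphabet template, keeping the best-scoring match before the Hu moment comparison confirms the sign.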
Keywords
This work focuses on developing a vision-based sign language identification system using facet analysis. The primary keywords are: contours, skin detection, shape matching, gesture recognition, Hu moments comparison, and sign language identification system. The system aims to enhance communication accessibility by translating sign language gestures into understandable formats such as text or speech.
Quote paper
- Faryal Amber (Author), 2013, Vision Based Sign Language Identification System Using Facet Analysis, Munich, GRIN Verlag, https://www.grin.com/document/276571