Sign-Language-Translator

A system that uses Convolutional Neural Networks (CNNs) to detect sign language in images and convert it into audio, and vice versa, enabling communication between deaf or hard-of-hearing individuals and people unfamiliar with sign language.

Project Overview

This project aims to bridge the communication gap between deaf or hard-of-hearing individuals and those unfamiliar with sign language. By leveraging the power of Convolutional Neural Networks (CNN), our system detects sign language from images and converts it into spoken audio. Conversely, it can also translate spoken language into sign language images or animations, making interactions more accessible and inclusive.

This system uses the WLASL (Word-Level American Sign Language) dataset as its foundation for training and validation, ensuring a broad and diverse range of signs can be accurately recognized and translated.

Features

  • Sign Language Detection: Utilizes advanced CNN models to detect and interpret sign language from images or live video feeds.
  • Audio Conversion: Converts detected sign language gestures into corresponding spoken language, making it understandable for non-sign language users.
  • Voice to Sign Language: Translates spoken language back into sign language, displayed through images or animations, facilitating two-way communication.
  • Accessibility Focused: Designed with accessibility in mind to support seamless communication for deaf or hard-of-hearing individuals.

Getting Started

Prerequisites

  • Python 3.8 or higher
  • TensorFlow 2.x
  • OpenCV
  • Other dependencies listed in requirements.txt
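A hypothetical `requirements.txt` matching the prerequisites above (the exact pins are an assumption; defer to the versions the repository actually specifies):

```text
tensorflow>=2.0
opencv-python
numpy
```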

Technologies Used

  • Convolutional Neural Networks (CNN): for detecting and interpreting sign language from images.
  • TensorFlow & Keras: for building and training the CNN models.
  • OpenCV: for image and video processing.
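A minimal sketch of the kind of Keras CNN classifier described, assuming fixed-size RGB sign images and integer gloss labels; the repository's actual architecture, input size, and hyperparameters may differ:

```python
import tensorflow as tf

def build_sign_cnn(num_classes, input_shape=(64, 64, 3)):
    """Small CNN sketch that classifies a sign image into one of
    num_classes gloss classes (architecture is illustrative only)."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With OpenCV, frames from an image or video feed would be resized to `input_shape` and normalized before being fed to `model.predict`.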

Contributing

We welcome contributions from the community. If you wish to contribute to the project, please fork the repository and submit a pull request with your proposed changes.

Acknowledgments

  • The WLASL dataset creators, for providing a comprehensive dataset for training and testing.
  • The deaf and hard-of-hearing community, for inspiring this project.
