American Sign Language Detection Model
Major project in final-year B.Tech (IT): live-stream sign language detection using deep learning.
This project aims to build a system that detects sign language from images and converts the recognized signs into audio, and vice versa.
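The sign-to-audio step could look like the following minimal sketch, assuming the recognized sign arrives as a plain-text label and using pyttsx3 for offline text-to-speech (the function name `speak_sign` is illustrative, not from the project):

```python
# Minimal sketch of a sign-to-audio step, assuming the detector has already
# produced a text label for the sign; pyttsx3 provides offline text-to-speech.
import pyttsx3

def speak_sign(label: str) -> None:
    """Speak the recognized sign label aloud."""
    engine = pyttsx3.init()
    engine.say(label)
    engine.runAndWait()

speak_sign("hello")  # e.g. after the detector outputs the class name "hello"
```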
A YOLOv5 model trained from scratch that can speak detected signs aloud for a blind person and generate text from the signs made by a mute person. It is a prototype showcasing the feasibility of building an interpreter for mute and blind people.
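A custom-trained YOLOv5 model of this kind is typically loaded through the `ultralytics/yolov5` torch.hub entry point; the sketch below is one plausible inference step, assuming a hypothetical weights file `best.pt` and input image `sign_frame.jpg`:

```python
# Minimal sketch: load a custom YOLOv5 sign-detection model and run inference.
# "best.pt" and "sign_frame.jpg" are assumed names, not from the project.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

results = model("sign_frame.jpg")  # run detection on one image
results.print()                    # summary of detections
# One row per detection: xmin, ymin, xmax, ymax, confidence, class, name
detections = results.pandas().xyxy[0]
print(detections)
```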
Sign Language Detection: a Python project for detecting American Sign Language gestures using OpenCV and MediaPipe. The model recognizes signs from a live webcam feed.
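The capture-and-landmark loop for such a project might look like this minimal sketch using OpenCV and the MediaPipe Hands solution; the gesture classifier itself is omitted, and the detection threshold is an assumption:

```python
# Minimal sketch: read webcam frames with OpenCV, extract hand landmarks with
# MediaPipe Hands, and draw them. The sign classifier would consume the
# landmarks; it is omitted here.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("ASL detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```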
This project demonstrates hand sign detection using TensorFlow Lite and Flask.
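One plausible shape for a TensorFlow Lite + Flask setup is a small inference endpoint like the sketch below; the model file `model.tflite` and the `/predict` route are illustrative names, not taken from the project:

```python
# Minimal sketch: serve a TFLite hand-sign classifier behind a Flask endpoint.
# "model.tflite" and "/predict" are assumed names.
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON array shaped like the model's input tensor
    x = np.asarray(request.get_json()["input"], dtype=np.float32)
    interpreter.set_tensor(inp["index"], x.reshape(inp["shape"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return jsonify({"class": int(np.argmax(scores)), "scores": scores.tolist()})

if __name__ == "__main__":
    app.run(debug=True)
```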
In this project, we created a model to detect sign language using MediaPipe Holistic keypoints and a stacked-LSTM classifier.
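A minimal sketch of that idea: flatten the MediaPipe Holistic pose and hand landmarks into one feature vector per frame, then classify short frame sequences with stacked LSTM layers. Sequence length, class count, and layer sizes here are assumptions, not the project's actual configuration:

```python
# Minimal sketch: MediaPipe Holistic keypoints -> per-frame feature vector,
# classified over short sequences by stacked LSTM layers (Keras).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def extract_keypoints(results):
    """Flatten Holistic pose and hand landmarks into one vector per frame."""
    pose = np.array([[p.x, p.y, p.z, p.visibility]
                     for p in results.pose_landmarks.landmark]).flatten() \
        if results.pose_landmarks else np.zeros(33 * 4)
    lh = np.array([[p.x, p.y, p.z]
                   for p in results.left_hand_landmarks.landmark]).flatten() \
        if results.left_hand_landmarks else np.zeros(21 * 3)
    rh = np.array([[p.x, p.y, p.z]
                   for p in results.right_hand_landmarks.landmark]).flatten() \
        if results.right_hand_landmarks else np.zeros(21 * 3)
    return np.concatenate([pose, lh, rh])  # 258 features per frame

NUM_CLASSES = 10  # assumed number of signs
SEQ_LEN = 30      # assumed frames per gesture clip

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, 258)),
    LSTM(128),
    Dense(64, activation="relu"),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```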
This project seeks to create a robust platform for real-time translation of sign language gestures into text.