This project seeks to build a platform that interprets sign language gestures into text in real time by leveraging advances in computer vision, machine learning, and natural language processing.
- Run `set_hand_histogram.py`
- Run `create_gesture.py`
- Run `rotate_image.py`
- Run `display_gesture.py`
- Run `load_images.py`
- Run `cnn_train_model.py`
- Run `final.py`
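As a rough illustration of the augmentation step suggested by `rotate_image.py`'s name (the exact transforms it applies are an assumption here), rotating and mirroring captured gesture images is a common way to enlarge a small training set. A minimal, dependency-free sketch of such transforms:

```python
# Hypothetical augmentation helpers; images are represented as
# nested lists of pixel values (rows of columns).

def flip_horizontal(img):
    """Mirror an image left-to-right by reversing each row."""
    return [row[::-1] for row in img]

def rotate_90(img):
    """Rotate an image 90 degrees clockwise: reverse the row
    order, then transpose rows into columns."""
    return [list(col) for col in zip(*img[::-1])]

if __name__ == "__main__":
    img = [[1, 2],
           [3, 4]]
    print(flip_horizontal(img))  # [[2, 1], [4, 3]]
    print(rotate_90(img))        # [[3, 1], [4, 2]]
```

In the actual scripts the same idea would typically be applied with OpenCV or NumPy on real image arrays; this sketch only shows the geometry of the transforms.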
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.