ASL translator using a CNN. Currently it is trained on 10 letters only: ['A', 'C', 'E', 'H', 'I', 'L', 'O', 'U', 'V', 'W'].
It translates American Sign Language from a live webcam to text and then to speech.
- Prajwol Lamichhane
- Pratik Rajbhandari
- Abhay Raut
- Bishal Sarangkoti
(Python 3.7 is not officially supported by TensorFlow.)
For Anaconda users: download the virtual environment file "tensorflow_env.yml" and import it into your Anaconda environment; it installs all the libraries needed for the project.
Other users can install all requirements from the "requirements.txt" file:
pip install -r requirements.txt
Configuring paths to run the translator:
- Download pre-trained model from here
- Modify MODEL_PATH in variables.py
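A minimal sketch of what the MODEL_PATH setting in variables.py might look like; the path shown is a placeholder, so point it at wherever you saved the downloaded .h5 model:

```python
# variables.py (sketch) -- path settings for the translator.
# Replace the placeholder path with the location of the downloaded model.
MODEL_PATH = "models/withbgmodelv1.h5"
```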
Running translator.py
After installing all the requirements in your system environment or virtual environment, download the model, set MODEL_PATH, and run the translator directly.
Usage:
- Translate from webcam
python translator.py
Controls:
- Press n to append the current letter
- Press m for a space
- Press d to delete the last letter from the sentence
- Press s to speak the translated sentence
- Press c to clear the sentence
- Press ESC to exit
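The controls above are single-key commands; a minimal sketch of how such a key could be dispatched (function and variable names here are illustrative, not the project's actual code; in translator.py the key would typically come from OpenCV's cv2.waitKey):

```python
def handle_key(key, sentence, current_letter):
    """Apply one keyboard control; returns (sentence, speak, done)."""
    speak = False   # True when the sentence should be spoken aloud
    done = False    # True when the program should exit
    if key == ord('n'):      # append the currently recognized letter
        sentence += current_letter
    elif key == ord('m'):    # insert a space
        sentence += ' '
    elif key == ord('d'):    # delete the last letter
        sentence = sentence[:-1]
    elif key == ord('s'):    # request text-to-speech
        speak = True
    elif key == ord('c'):    # clear the sentence
        sentence = ''
    elif key == 27:          # ESC exits
        done = True
    return sentence, speak, done
```

For example, `handle_key(ord('n'), 'H', 'I')` returns `('HI', False, False)`.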
Configuring paths to run ASL.ipynb
- Download datasets from here or create your own
- Modify TRAIN_DATA_PATH and TEST_DATA_PATH
- Train the model
- Your model is saved as withbgmodelv1.h5
- Use the model to run translator.py by configuring the path in variables.py
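At prediction time the CNN outputs one probability per trained letter, and the highest-scoring index is mapped back to a letter. A sketch of that mapping, assuming the class order matches the alphabetical letter list above (the function name is illustrative):

```python
# The 10 letters the model is currently trained on, in assumed class order.
LETTERS = ['A', 'C', 'E', 'H', 'I', 'L', 'O', 'U', 'V', 'W']

def index_to_letter(probabilities):
    """Return the letter for the highest-probability class."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return LETTERS[best]
```

For example, a probability vector peaking at index 3 maps to 'H'.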