Project from Applied Deep Learning 2020 - TU Wien
The project consists of a static hand gesture recognizer that maps gestures to user-defined commands.
- Create a virtualenv: `virtualenv venv`. Note: the program was tested with Python 3.7; you can choose an interpreter with, for example, `virtualenv venv -p=/usr/bin/python3.7`
- Activate the virtualenv: `. venv/bin/activate`
- Install the requirements: `pip install -r requirements.txt`
- Download the released model
- Copy the model to `hand_classifier/models/model_final.hdf5`
Or, from the project's root directory, run:

```
mkdir hand_classifier/models && cd hand_classifier/models && wget https://github.com/lucamoroz/StaticGestureRecognition/releases/download/0.9/model_final.hdf5
```
To run on the webcam:

```
python main.py
```
To run the application in debug mode (and see the video stream and prediction confidence), run `python main.py --debug`.

There are multiple options available; to see all of them, run `python main.py --help`.
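For context, the sketch below shows roughly what a webcam capture-and-predict loop looks like. This is not the project's actual `main.py`; the input size, preprocessing, and quit key are assumptions.

```python
# Illustrative sketch, NOT the project's main.py: a minimal OpenCV
# capture/predict loop. The input size (224x224) and [0, 1] scaling
# are assumptions about the model's preprocessing.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("hand_classifier/models/model_final.hdf5")
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(np.expand_dims(x, 0))[0]
    # The real app maps the argmax index to a label in HandCNN.LABELS
    print(f"class {probs.argmax()} with confidence {probs.max():.2f}")
    cv2.imshow("debug", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```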
You can change the commands executed by modifying the file `commands.json`, which associates a command with each gesture.
The commands are passed to the underlying system and executed.
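As an illustration, a hypothetical `commands.json` could look like the following. The gesture names here are made up and must match the labels in `hand_classifier/HandCNN.LABELS`; the values are ordinary shell commands.

```json
{
  "fist": "xdotool key XF86AudioPlay",
  "palm": "xdotool key XF86AudioMute",
  "thumb_up": "xdotool key XF86AudioRaiseVolume"
}
```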
To run the tests, from the project root folder run:

```
py.test
```
There is a dedicated Python file that can be used to retrain the classification layer to fit your dataset.
- Collect your dataset. You can use the script `dataset/data_script.py` to quickly add images to a dataset; see `dataset/README.md` for more info.
- Change the labels of `hand_classifier/HandCNN.LABELS` and `commands.json` according to your dataset classes.
- Retrain the classification layer of the pretrained model (see the sketch after this list):

  `python hand_classifier/retrain_top.py --dataset [PATH_TO_DATASET] --model [PATH_TO_TRAINED_MODEL]`
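For reference, retraining only the top (classification) layer in Keras typically looks like the sketch below. This is not the contents of `hand_classifier/retrain_top.py`; the number of classes, image size, and dataset layout are assumptions.

```python
# Hypothetical sketch of top-layer retraining, NOT hand_classifier/retrain_top.py.
# NUM_CLASSES, IMG_SIZE, and the dataset layout (one sub-folder per class)
# are assumptions.
import tensorflow as tf

NUM_CLASSES = 4          # must match the labels in HandCNN.LABELS
IMG_SIZE = (224, 224)

# Load the released model and freeze all of its layers
base = tf.keras.models.load_model("hand_classifier/models/model_final.hdf5")
for layer in base.layers:
    layer.trainable = False

# Replace the final classification layer with one sized for the new classes
features = base.layers[-2].output
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(features)
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Expects one sub-folder per gesture class under the dataset directory
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "path/to/dataset", label_mode="categorical", image_size=IMG_SIZE)

model.fit(train_ds, epochs=10)
model.save("hand_classifier/models/model_retrained.hdf5")
```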