hiepbk/Face-Recognition-MTCNN

1. Collect data:

Collect face images for each person and place them in the your_face directory, organized as below (a webcam-capture sketch follows the tree):

your_face/
├── ChiPu
│   ├── ChiPu_0001.jpg
│   ├── ChiPu_0002.jpg
│   ├── ...
│   ├── ChiPu_0014.jpg
│   └── ChiPu_0015.jpg
├── HienHo
│   ├── HienHo_0001.jpg
│   ├── HienHo_0002.jpg
│   ├── ...
│   ├── HienHo_0014.jpg
│   └── HienHo_0015.jpg
├── TrinhThao
│   ├── TrinhThao_0001.jpg
│   ├── TrinhThao_0002.jpg
│   ├── ...
│   ├── TrinhThao_0014.jpg
│   └── TrinhThao_0015.jpg
├── TrucAnh
│   ├── TrucAnh_0001.jpg
│   ├── TrucAnh_0002.jpg
│   ├── ...
│   ├── TrucAnh_0014.jpg
│   └── TrucAnh_0015.jpg
└── TruongGiang
    ├── TruongGiang_0001.jpg
    ├── TruongGiang_0002.jpg
    ├── ...
    ├── TruongGiang_0015.jpg
    └── TruongGiang_0016.jpg
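
If you don't have images yet, a capture loop like the one below can build this layout from a webcam. This is a minimal sketch using OpenCV; collect_faces, the person name, and the image count are illustrative placeholders, not part of this repo:

import os
import cv2

def collect_faces(name, out_dir='your_face', count=15):
    # Saves frames as your_face/<name>/<name>_0001.jpg, _0002.jpg, ...
    person_dir = os.path.join(out_dir, name)
    os.makedirs(person_dir, exist_ok=True)
    cap = cv2.VideoCapture(0)  # 0 = default internal camera
    saved = 0
    while saved < count:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow('capture (SPACE = save, ESC = quit)', frame)
        key = cv2.waitKey(1) & 0xFF
        if key == 27:   # ESC aborts early
            break
        if key == 32:   # SPACE saves the current frame
            saved += 1
            cv2.imwrite(os.path.join(person_dir, f'{name}_{saved:04d}.jpg'), frame)
    cap.release()
    cv2.destroyAllWindows()

collect_faces('ChiPu')  # repeat once per person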

2. Install requirements:

Install the dependencies:

pip install -r requirements.txt

3. Download the pretrained model:

Download the pretrained VGGFace2 FaceNet model (20180402-114759) and unzip it into the models directory; the result should look like this (a quick load check follows the listing):

models
├── 20180402-114759.pb
├── model-20180402-114759.ckpt-275.data-00000-of-00001
├── model-20180402-114759.ckpt-275.index
└── model-20180402-114759.meta
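
To sanity-check the download, you can try loading the frozen graph and reading off the embedding size. This is a minimal sketch, assuming TensorFlow with the tf.compat.v1 shim; the tensor names input:0 and embeddings:0 are the conventions used by FaceNet frozen graphs and are assumed to apply here:

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Parse the frozen FaceNet graph and import it into the default graph.
with tf.io.gfile.GFile('models/20180402-114759.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

graph = tf.get_default_graph()
images = graph.get_tensor_by_name('input:0')           # input image batch
embeddings = graph.get_tensor_by_name('embeddings:0')  # L2-normalized embeddings
print('Embedding size:', embeddings.get_shape()[1])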

4. Training:

Run train.py. This script runs two steps (a sketch of the classifier step follows the code):

# Step 1: detect and align faces with MTCNN, writing crops to face_align/
align_mtcnn('your_face', 'face_align')
# Step 2: embed the aligned faces with the frozen FaceNet graph and train a classifier
train('face_align/', 'models/20180402-114759.pb', 'models/your_model.pkl')
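
For reference, the classifier step in this kind of pipeline typically fits a scikit-learn SVM on the FaceNet embeddings and pickles it. This is a sketch of that idea, not the repo's actual train() internals; train_classifier and the (model, class_names) pickle layout are assumptions:

import pickle
from sklearn.svm import SVC

def train_classifier(embeddings, labels, class_names, out_path):
    # embeddings: (N, D) array of FaceNet embeddings, one row per aligned face
    # labels: integer class index per row; class_names: index -> person name
    model = SVC(kernel='linear', probability=True)
    model.fit(embeddings, labels)
    with open(out_path, 'wb') as f:
        pickle.dump((model, class_names), f)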

5. Detection:

Run detection.py, whose entry point is:

if __name__ == '__main__':
    # models: directory holding the frozen graph; your_model.pkl: classifier from step 4
    run('models', 'models/your_model.pkl', video_file='demo.mp4', output_file='demo.avi')

Set video_file=None to read from the internal camera instead of a video file.
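
For example, to run live recognition from the webcam (assuming run accepts the same keyword arguments as above; the output filename is just a placeholder):

if __name__ == '__main__':
    run('models', 'models/your_model.pkl', video_file=None, output_file='webcam.avi')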
