- modify `train_config.py` and `eval_config.py`:
  - change `backbone_type` to the one you want to use [`se-resnext50`, `vgg`, `resnet50`, `resnext50`, `resnext101`]; you can add your own backbone in `backbones`, or add a torchvision-supported model by modifying `utils/model_utils.py`
  - set `train_datasets_bpath = 'data/to/your/path'`, and the same for `test_datasets_bpath`
  - change `dataLoader_util` to the one you want to use (`cv2` by default) [`cv2`/`PIL`/`jpeg4py`] (`jpeg4py` needs to be compiled)
  - change `batch_size` and `gpu_ids` to the values you want
  - if you want to use FP16 mixed precision (apex required), set `fp16_using = True`
  - to train with center loss, set `additive_loss_type = 'CenterLoss'`
  - to train with COCO loss, set `additive_loss_type = 'COCOLoss'`
  - to train with both center loss and COCO loss, set `additive_loss_type = 'COCOLoss&CenterLoss'`
  - to train with softmax only, set `additive_loss_type = None` or `''`
  - to train with focal loss, set `use_focal_loss = True`
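Putting the options above together, a `train_config.py` fragment might look like the sketch below. The field names come from this README, but check the shipped config for the exact formats (for example, whether `gpu_ids` is a list or a string); the values shown are illustrative.

```python
# Illustrative train_config.py fragment; field names are from this README,
# value formats are assumptions -- verify against the repo's actual config.
backbone_type        = 'se-resnext50'    # or vgg / resnet50 / resnext50 / resnext101
train_datasets_bpath = 'data/to/your/path'
test_datasets_bpath  = 'data/to/your/path'
dataLoader_util      = 'cv2'             # cv2 (default) / PIL / jpeg4py (compiled)
batch_size           = 64                # illustrative value
gpu_ids              = [0]               # assumed list format
fp16_using           = False             # True requires apex
additive_loss_type   = 'CenterLoss'      # 'COCOLoss', 'COCOLoss&CenterLoss', None or ''
use_focal_loss       = False             # True enables focal loss
```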
- run `nohup python -m visdom.server &` in a Linux shell, then go to `localhost:8097` to see your model's visualized output
- then run `python train.py` to train, or run `fast_train.py` to train with DALI, which is 3x~20x faster than the PyTorch dataloader (still debugging; it only supports a single GPU for now)
- DALI speedup also supports the Triplet Model. If you have 2 GPU cards, here's an example: `python -m torch.distributed.launch --nproc_per_node=2 fast_triplet_train.py`; change `--nproc_per_node=n` if you have n GPU cards
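For context on what `--nproc_per_node` does: `torch.distributed.launch` spawns one worker process per GPU, and each worker discovers which GPU it owns via a local rank. The sketch below is illustrative glue code (not the repo's actual script), covering both the older `--local_rank` argument and the newer `LOCAL_RANK` environment variable.

```python
# Sketch: how a worker spawned by torch.distributed.launch can find its GPU.
# Older launchers pass --local_rank as an argument; newer ones export the
# LOCAL_RANK environment variable. Illustrative only, not the repo's code.
import argparse
import os

def get_local_rank(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int,
                        default=int(os.environ.get("LOCAL_RANK", "0")))
    args, _ = parser.parse_known_args(argv)
    # In a real script you would follow this with torch.cuda.set_device(rank).
    return args.local_rank
```

With `--nproc_per_node=2`, the two workers see local ranks 0 and 1, so each binds a different GPU.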
- modify `triplet_train_config.py` (see the Siamese-Network reference)
- run `python triplet_train.py`
- multi-GPU DALI speedup is supported
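For reference, triplet training optimizes the standard triplet margin objective: pull an anchor toward a positive of the same class and push it away from a negative. Below is a minimal plain-Python sketch using L2 distance and an assumed margin of 0.2; the repo's actual loss, distance, and margin may differ.

```python
import math

def l2(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss: max(d(a,p) - d(a,n) + margin, 0)."""
    return max(l2(anchor, positive) - l2(anchor, negative) + margin, 0.0)

anchor   = [0.0, 0.0]
positive = [0.1, 0.0]   # same identity, already close to the anchor
negative = [1.0, 0.0]   # different identity, far from the anchor
loss = triplet_loss(anchor, positive, negative)  # margin satisfied -> 0.0
```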