-
When I train a text detection model using the pretrained db_resnet50, training is quite slow. It only uses 1 CPU thread (I have 24 CPU threads and 24 GB of CUDA memory). How can I speed up the computation?
-
@felixdittrich92 can you help me? Thank you so much.
-
@odulcy-mindee please, can you take a look? Thanks a lot :((
-
Hi @anhalu 👋🏼, excuse the late response 😅 Try specifying the number of workers yourself. This has nothing to do with the model's post-processing; it's more that we apply a lot of augmentations by default, which take some time. See: doctr/references/detection/train_pytorch.py Line 269 in 8e0609d
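The data loading side of the reference script is a standard PyTorch `DataLoader`, so raising the worker count is the main lever. A minimal sketch is below; the dataset paths are placeholders, and the exact `DetectionDataset`/`collate_fn` usage should be checked against the version of doctr you have installed:

```python
import multiprocessing as mp

import torch
from torch.utils.data import DataLoader

from doctr.datasets import DetectionDataset  # dataset used by the reference script

# Hypothetical paths; point these at your own training data
train_set = DetectionDataset(
    img_folder="path/to/train/images",
    label_path="path/to/train/labels.json",
)

# Spread image decoding and augmentation across several worker processes
# instead of a single CPU thread (24 logical cores are available here).
train_loader = DataLoader(
    train_set,
    batch_size=2,
    shuffle=True,
    num_workers=min(16, mp.cpu_count()),
    pin_memory=torch.cuda.is_available(),
    collate_fn=train_set.collate_fn,
)
```

If your installed version of the reference script already exposes a worker flag on the CLI (e.g. something like `--workers 16`), passing it there has the same effect without touching the code.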
-
@felixT2K When training, I see in htop that it only uses 1 CPU thread, and it takes up 5 GB of GPU memory. I don't think it's a data loading problem; I think the problem is with the loss calculation in base.py in the detection folder. I see a lot of places using numpy in base.py; does that affect the computation when training the model, and how do I speed up the training process?
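One quick way to see whether the time goes into the data pipeline (augmentations, numpy-based target building) or into the forward/backward pass is to time the two phases separately for a handful of batches. This is a generic sketch, not doctr-specific; it assumes the model returns a dict with a "loss" entry when targets are passed, as the doctr reference training loop does:

```python
import time

import torch


def profile_batches(model, train_loader, optimizer, device="cuda", max_batches=50):
    """Rough split of time spent waiting on the DataLoader vs. the train step."""
    model.train()
    data_time, step_time, n = 0.0, 0.0, 0
    end = time.perf_counter()
    for images, targets in train_loader:
        t0 = time.perf_counter()
        data_time += t0 - end  # time blocked on data loading / augmentation

        images = images.to(device)
        optimizer.zero_grad()
        out = model(images, targets)  # assumed to return a dict with a "loss" entry
        out["loss"].backward()
        optimizer.step()
        if device == "cuda":
            torch.cuda.synchronize()  # make the GPU timing meaningful
        step_time += time.perf_counter() - t0

        n += 1
        end = time.perf_counter()
        if n >= max_batches:
            break
    print(f"data: {data_time:.1f}s | compute: {step_time:.1f}s over {n} batches")
```

If the "data" share dominates, more DataLoader workers will help; if "compute" dominates, the bottleneck is on the model side rather than the augmentation or target-building code.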
-
But your polygon annotations don't look correct; they should have the form:
'polygons': [[[x1, y1], [x2, y2], [x3, y3], [x4, y4]], ...]
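For context, the detection label file maps each image file name to its dimensions and a list of polygons, each given as four [x, y] points in absolute pixel coordinates. A minimal sketch follows; the file name, hash value, and coordinates are made up, and the exact keys should be verified against the doctr dataset documentation for your version:

```python
import json

# Hypothetical labels.json for a single training image.
labels = {
    "sample_img_01.png": {
        "img_dimensions": [900, 600],
        "img_hash": "placeholder-hash",
        "polygons": [
            [[10, 20], [110, 20], [110, 60], [10, 60]],
            [[150, 200], [320, 200], [320, 240], [150, 240]],
        ],
    },
}

with open("labels.json", "w") as f:
    json.dump(labels, f, indent=2)
```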