Bad latency on CPU #876
-
Hello, docTR is quite slow on my computer. It takes about 1.6 to 10 seconds to analyse one page, depending on how much text is on the page. The pages are scanned documents. I don't have a GPU. I also tested with a GPU (on a different machine); the times are better, but my goal is to achieve good performance without a GPU. Is there anything I can do to make docTR faster?
-
Hi @tilman67 👋, it would be great if you could provide some more information :)
Which model combination have you used?
-
AttributeError: 'OCRPredictor' object has no attribute 'cuda'
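A plausible explanation for the error above: with the PyTorch backend, `OCRPredictor` is a wrapper object, not a `torch.nn.Module`, so calling `.cuda()` on it raises `AttributeError`. A minimal sketch of moving the underlying models instead, assuming the `det_predictor.model` / `reco_predictor.model` attribute layout (an assumption about docTR's internals that may differ across versions):

```python
def target_device() -> str:
    """Return 'cuda' when a CUDA-enabled torch build sees a GPU, else 'cpu'."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        # No torch installed: fall back to CPU.
        return "cpu"

if __name__ == "__main__":
    from doctr.models import ocr_predictor

    predictor = ocr_predictor(pretrained=True)
    device = target_device()
    # Hypothetical attribute layout -- check against your docTR version:
    predictor.det_predictor.model = predictor.det_predictor.model.to(device)
    predictor.reco_predictor.model = predictor.reco_predictor.model.to(device)
```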
-
Hi @tilman67 👋
Can you try the following combination and see if it works better for you?
predictor = ocr_predictor(det_arch='db_mobilenet_v3_large', reco_arch='crnn_mobilenet_v3_small', pretrained=True, assume_straight_pages=True)
My CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
first page: 0.84 sec
second page: 0.81 sec
Otherwise you can also try the suggestions from #814.
Let me know if there are further questions 🤗
EDIT: Tested with PyTorch. Batch inference will also speed up the whole process (pass a list of images to .from_images([...])). You can also play a bit with passing different batch sizes to the predictor, e.g. det_bs=4, rec_bs=256.
EDIT2: With the above it takes ~2.8 sec on my machine (CPU) and ~0.8 sec (GPU).
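The suggestions above (lighter MobileNet backbones plus batched inference) can be sketched as follows, using docTR's documented `ocr_predictor` and `DocumentFile.from_images` API; the image filenames are placeholders, and the `det_bs` / `rec_bs` batch-size parameter names mentioned in the EDIT may differ between docTR versions, so they are left as a comment:

```python
def per_page(total_secs: float, n_pages: int) -> float:
    """Average latency per page, rounded to 2 decimals."""
    return round(total_secs / n_pages, 2)

if __name__ == "__main__":
    import time
    from doctr.io import DocumentFile
    from doctr.models import ocr_predictor

    predictor = ocr_predictor(
        det_arch="db_mobilenet_v3_large",     # lighter detection backbone
        reco_arch="crnn_mobilenet_v3_small",  # lighter recognition backbone
        pretrained=True,
        assume_straight_pages=True,           # skip rotation handling on straight scans
        # Batch sizes from the EDIT above; parameter names vary by version:
        # det_bs=4, rec_bs=256,
    )

    # Pass all pages in one call so detection/recognition can batch internally,
    # instead of looping over pages one at a time.
    pages = DocumentFile.from_images(["page1.jpg", "page2.jpg"])
    start = time.perf_counter()
    result = predictor(pages)
    print(per_page(time.perf_counter() - start, len(pages)), "sec/page")
```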