Non-OOM error raised by PyTorch during find_batch_size()
#286
I see. Thanks for reporting this, @OriKovacsiKatz. The reason it failed is that PyTorch didn't raise an OOM error but something else, so the `except` clause in `find_batch_size()` (plant-seg/plantseg/predictions/functional/array_predictor.py, lines 48 to 57 at 6c90c75) did not catch it.

I do not understand why it raised something else. For now, yes, just change the list so its maximum is the largest size that works (8 in your case). I'll keep this issue open until we figure this out.
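The pattern under discussion can be sketched in plain Python, without a GPU: the search halves the batch size only when the exception looks like an out-of-memory error, and lets anything else propagate — which is exactly why a non-OOM `RuntimeError` aborts the run here. This is a hedged illustration, not PlantSeg's actual code; `run_forward` and the error messages are hypothetical stand-ins.

```python
def is_oom(err: RuntimeError) -> bool:
    """Heuristic common in PyTorch projects: CUDA OOM surfaces as a
    RuntimeError whose message contains 'out of memory'."""
    return "out of memory" in str(err).lower()


def find_batch_size(run_forward, max_batch_size: int = 16) -> int:
    """Halve the batch size until run_forward succeeds.

    run_forward is a hypothetical callable standing in for one forward
    pass; it raises RuntimeError on failure."""
    batch_size = max_batch_size
    while batch_size >= 1:
        try:
            run_forward(batch_size)
            return batch_size
        except RuntimeError as err:
            if not is_oom(err):
                raise  # non-OOM errors (the case in this issue) propagate
            batch_size //= 2
    raise RuntimeError("even batch_size=1 runs out of memory")
```

With a fake forward pass that "OOMs" above batch size 8, the search settles on 8; with a forward pass that raises a different `RuntimeError`, the search re-raises it immediately instead of shrinking the batch — matching the crash reported in this issue.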
"Legacy" tag because Napari GUI has workaround (single patch mode). |
Just formatted the issue for readability.
Just in case I didn't sound encouraging, @OriKovacsiKatz: you are very welcome to check whether OOM really happens on your device, and then make a PR for PlantSeg and/or open an issue for PyTorch. The easy way is just to watch the terminal output of PlantSeg.
Running the PlantSeg example `col-0_20161116`, I get a crash.

I modified the code to print details: it crashed at `batch_size=16`. After changing the sizes to a maximum of 8, it didn't crash.
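The workaround described above amounts to capping the list of candidate batch sizes. A minimal sketch of that idea (the names and default list are illustrative, not PlantSeg's actual code):

```python
def cap_batch_sizes(sizes: list[int], max_size: int = 8) -> list[int]:
    """Keep only candidate batch sizes up to max_size.

    A cap of 8 is what avoided the crash in this report; the right
    value depends on the device and model."""
    return [s for s in sizes if s <= max_size]
```

For example, capping a hypothetical candidate list `[1, 2, 4, 8, 16, 32]` at 8 drops the sizes that crashed, so the search never attempts them.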
How can I fix plant-seg/plantseg/predictions/functional/array_predictor.py line 51 so that it will not crash the PlantSeg execution at any batch size?

Thanks,
Ori