Hi there,
I am trying to use your model to create captions for CT scans, which works fine for single slices (about 400 slices, i.e. files, per scan in my case). To get one caption per volume, however, I am trying to pass all files of one scan into the model at once. One volume is about 90 MB and I am running the model on an A100 80GB GPU. Unfortunately I get a CUDA OOM error telling me the script is trying to allocate an additional 24 GB. Where it gets interesting is how much memory the script/model tries to allocate as the batch size changes:
Especially the extreme jump between inferring on 120 files and 130 files leaves me quite clueless. Some additional info:
It doesn't matter whether I use DICOM files or JPEG files (the DICOM files are about 10x as large as the JPEGs); memory usage is the same.
It's not in the table, but inferring with 400 files tries to allocate less memory than with 200 or even 130 files (the 24 GB mentioned above).
Any help is appreciated; I am running out of ideas as to where the issue could be. Thanks!
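For reference, a minimal sketch of this kind of inference loop (assuming a Hugging Face-style `model`/`processor` interface; these names are placeholders, not necessarily this repository's actual API), with per-chunk peak-memory logging that may help reproduce how the allocation changes with batch size:

```python
# Sketch only: load all slices of one scan and run the captioning model
# in chunks, logging peak GPU memory per chunk. `model` and `processor`
# are hypothetical stand-ins for the repository's actual objects.
import glob

import torch
from PIL import Image

model = ...      # placeholder: the captioning model, already moved to the GPU
processor = ...  # placeholder: its image preprocessor

slice_paths = sorted(glob.glob("scan_0001/*.jpg"))  # ~400 slices per volume

chunk_size = 40  # vary this (e.g. 120 vs. 130) to compare peak allocations
for start in range(0, len(slice_paths), chunk_size):
    images = [Image.open(p).convert("RGB")
              for p in slice_paths[start:start + chunk_size]]
    inputs = processor(images=images, return_tensors="pt").to("cuda")
    with torch.no_grad():
        outputs = model.generate(**inputs)
    # peak memory observed while processing this chunk
    print(start, torch.cuda.max_memory_allocated() / 1e9, "GB")
    torch.cuda.reset_peak_memory_stats()
```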