How much GPU memory is required? #5
Comments
My pip freeze (first entry shown): absl-py==1.0.0 |
If you run without clip_guidance and with batch_size=1, it runs in 8.5 GB of VRAM. The code is not written to run on CPU (it might be possible to adapt it, though). With clip_guidance you can keep VRAM under 11 GB if you replace "ViT-L/14" with "RN50". |
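The VRAM figures above can be turned into a small configuration helper. This is a minimal sketch, not part of the repo: the function name and the returned dict keys are hypothetical, and the thresholds come straight from the numbers reported in this comment.

```python
def vram_settings(vram_gb: float) -> dict:
    """Pick sampling settings from available VRAM (hypothetical helper).

    Thresholds taken from this thread: ~8.5 GB suffices without CLIP
    guidance at batch_size=1; RN50-guided sampling stays under 11 GB.
    """
    if vram_gb < 8.5:
        raise ValueError("reportedly needs at least ~8.5 GB of VRAM")
    if vram_gb < 11:
        # Enough for unguided sampling only.
        return {"batch_size": 1, "clip_guidance": False, "clip_model": None}
    # 11 GB cards can keep guidance if the smaller RN50 CLIP is used.
    return {"batch_size": 1, "clip_guidance": True, "clip_model": "RN50"}
```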
@limiteinductive thanks for saving me the time of testing all the other CLIP models. Any idea whether it's easy to run the sample code with the older ViT-B/32 model, or will it require manually adjusting each instance of nn.Linear?
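Whether nn.Linear layers need adjusting comes down to the CLIP joint-embedding width changing between checkpoints. A minimal sketch (the helper function is hypothetical; the widths are the published ones from the OpenAI CLIP repo):

```python
# Joint-embedding widths of the published OpenAI CLIP checkpoints.
CLIP_EMBED_DIM = {"RN50": 1024, "ViT-B/32": 512, "ViT-L/14": 768}

def needs_projection_change(old: str, new: str) -> bool:
    """Return True if nn.Linear layers sized to the old CLIP embedding
    width would have to be resized when swapping to the new model."""
    return CLIP_EMBED_DIM[old] != CLIP_EMBED_DIM[new]
```

So swapping ViT-L/14 (768-d) for ViT-B/32 (512-d) would indeed touch any linear layer sized to the CLIP embedding, whereas a swap between models of equal width would not.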
In addition, I found it feasible to run on CPU by adding a "--cpu" argument and editing modules.py under the "latent-diffusion\ldm\modules\encoders" directory to use "cpu" for any instance where it would otherwise call "cuda". |
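The device edit described above boils down to one selection point instead of hard-coded "cuda" strings. A minimal sketch, assuming an argparse-style `--cpu` flag (the function name is hypothetical; the real change is spread across modules.py):

```python
def resolve_device(cpu_flag: bool, cuda_available: bool) -> str:
    """Pick the torch device string: honor an explicit --cpu flag,
    otherwise use CUDA only when it is actually available."""
    if cpu_flag or not cuda_available:
        return "cpu"
    return "cuda"

# In modules.py, replace hard-coded .cuda() / "cuda" calls with
# .to(device) where device = resolve_device(args.cpu, torch.cuda.is_available()).
```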
I have an 11 GB RTX 3080 Ti and it seems to be failing. On CPU, I get the error "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'". I hope I installed it correctly; I had to install some additional repos like transformers and taming-transformers. This is for the CLIP guidance.