Tiled VAE RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! #18
Do you have a workflow I can test?
Just an ugly hack for this issue until it is fixed: add these lines to tiled_vae.py at line 325.
Before:
After:
Just replace the one with the other. This isn't the greatest fix, since tensors will be moved extra times and it causes a slight slowdown, but it should make the script usable in the meantime.
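The exact patched lines aren't shown above, but the fix the comment describes is the standard PyTorch pattern of moving tensors onto a common device before combining them. A minimal sketch of that pattern (the `align_device` helper is illustrative, not the actual tiled_vae.py code):

```python
import torch

def align_device(tensors, device):
    """Move every tensor onto the target device before combining them.

    Hypothetical helper illustrating the general fix; the real patch
    in tiled_vae.py may look different.
    """
    return [t.to(device) for t in tensors]

# Pick CUDA when available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(2, 2, device=device)
b = torch.randn(2, 2)  # created on CPU by default

a_aligned, b_aligned = align_device([a, b], device)
result = a_aligned + b_aligned  # no device-mismatch RuntimeError
```

Calling `.to(device)` on a tensor that is already on that device is a no-op, which is why the workaround only costs time when tensors actually need to move.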
+1, having this issue. I have two workflows: one uses SDXL the whole time, the other uses Flux for the base image and SDXL for the post-processing. When I use the non-Flux version it works fine, but the Flux version fails after adding Tiled Diffusion to the SDXL model and trying to decode the result. I've included my workflow; both versions are on the far right side.
Thank you for the workaround. I just tried it and will use it, and I've made a note of the modified source code until a fix is released.
Never mind, it got me past that KSampler but failed to encode later on, after an upscale and a resize (which both worked), with the error below:
See pkuliyi2015/multidiffusion-upscaler-for-automatic1111@d673f9c. Also a possible fix for #18.
I use IP-Adapter in my workflow. When it reaches the Tiled VAE node, it displays:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
But when I replace it with the original VAE decode node, the problem is solved.
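When debugging this class of error, it helps to confirm which side of the operation is on CPU: the model's parameters or the input tensor. A small illustrative helper (not part of the Tiled VAE code) that reports both devices:

```python
import torch

def report_devices(module: torch.nn.Module, x: torch.Tensor) -> str:
    """Report parameter and input devices to pinpoint a mismatch.

    Illustrative debugging helper; names are assumptions, not
    Tiled VAE internals.
    """
    param = next(module.parameters())
    return f"params on {param.device}, input on {x.device}"

model = torch.nn.Linear(3, 3)   # modules are created on CPU by default
x = torch.randn(1, 3)           # tensors are created on CPU by default
print(report_devices(model, x)) # e.g. "params on cpu, input on cpu"
```

If the two devices differ (say `cuda:0` versus `cpu`), moving the input with `x.to(param.device)` before the call is usually the minimal fix, which is what the workaround above amounts to.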