The Empty Size Picker node makes a dim-4 latent, not a dim-16 one...
Replace it with code that offers all the options? (Rough sketch below.)
Example from core, for SD3:
latent = torch.ones([batch_size, 16, height // 8, width // 8], device=self.device) * 0.0609
which many people are also using when they feed that latent to Flux, but the correct Flux version would be:
latent = torch.ones([batch_size, 16, height // 8, width // 8], device=self.device) * 0.1159
(see ComfyUI repo: comfy/latent_formats.py)
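A minimal sketch of what an "offer all options" version could look like, assuming the channel counts and shift factors quoted above from comfy/latent_formats.py; the LATENT_FORMATS table and make_empty_latent name are made up for illustration, not the node's actual API:

import torch

# Hypothetical "all options" helper -- the table, keys, and function name are
# made up for illustration. Channel counts and shift factors are taken from
# comfy/latent_formats.py as quoted above.
LATENT_FORMATS = {
    "SD1.x/SDXL": {"channels": 4,  "shift": 0.0},
    "SD3":        {"channels": 16, "shift": 0.0609},
    "Flux":       {"channels": 16, "shift": 0.1159},
}

def make_empty_latent(model_type, batch_size, width, height, device="cpu"):
    fmt = LATENT_FORMATS[model_type]
    # torch.ones * shift fills the latent with the format's shift value;
    # with shift == 0 this reduces to zeros, matching the existing 4-channel node.
    return torch.ones(
        [batch_size, fmt["channels"], height // 8, width // 8],
        device=device,
    ) * fmt["shift"]

# e.g. an empty 1024x1024 Flux latent:
latent = make_empty_latent("Flux", batch_size=1, width=1024, height=1024)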
Revisiting this: the shift part (* value) is only there because torch.ones is being used instead of torch.zeros.
I got that from Comfy's own code (comfy/latent_formats.py).
Asking in Discord whether that's correct, more desirable, or a goof. Will follow up.
Follow-up: ComfyAnon confirmed this, and will likely address the fact that Flux uses a slightly different shift, but says it's not really an issue at Denoise=1.0 anyway.
The reason for the difference between pure zeros and a latent that's uniformly shifted is that the latent processing automatically compensates for the shift (on both sides of the process); a latent that hasn't been shifted at all (i.e. zeros) ends up incorrectly minus-shifted.
So the above is correct. Alternatively, you could add the shift factor to torch.zeros, but torch.ones multiplied down is likely cleaner to read.
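A quick sanity check that the two forms are identical (the shift value is the Flux one quoted above; the shape is just an example):

import torch

shape = [1, 16, 128, 128]       # example shape only
shift = 0.1159                  # Flux shift factor quoted above

a = torch.ones(shape) * shift   # the form used in the core node
b = torch.zeros(shape) + shift  # the "add the shift to zeros" alternative

assert torch.equal(a, b)        # identical tensors either way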