I am particularly interested in generating images with dual guidance, i.e., providing both an image prompt and a text prompt. The prior model accepts both as input and produces images that are clearly guided by both. My question is about how to control and modulate this guidance: I would like a parameter that sets the relative strength of the two guidance signals. Does this exist in any capacity in the current implementation?
If not, how should I go about adding it?
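If the current implementation only exposes a single guidance scale, one common way to add per-condition control is compositional classifier-free guidance: run the denoiser once unconditionally, once with only the text condition, and once with only the image condition, then combine the three noise predictions with independent scales. The sketch below is a minimal, library-agnostic illustration of that combination step; the function and parameter names (`dual_guided_eps`, `text_scale`, `image_scale`) are hypothetical and not part of any existing API.

```python
def dual_guided_eps(eps_uncond, eps_text, eps_image,
                    text_scale=4.0, image_scale=4.0):
    """Combine three noise predictions with independent guidance scales.

    eps_uncond : prediction with both conditions dropped
    eps_text   : prediction conditioned on the text prompt only
    eps_image  : prediction conditioned on the image prompt only

    Each guidance term pushes the prediction away from the
    unconditional estimate toward the corresponding condition;
    raising one scale relative to the other strengthens that
    prompt's influence. Plain lists stand in for tensors here.
    """
    return [u + text_scale * (t - u) + image_scale * (i - u)
            for u, t, i in zip(eps_uncond, eps_text, eps_image)]
```

Setting `image_scale=0` recovers ordinary text-only classifier-free guidance (and vice versa), which gives a convenient sanity check and a natural knob for the relative strength you describe. Note this costs three forward passes per denoising step instead of two.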