I've tested several times, but I'm still not sure whether using multiple prompts makes a significant difference in the results. It seems the score model plays the more important role. Do you have any references or explanations comparing the impact of using multiple prompts versus just one?
With one prompt you are optimizing the merge for that prompt alone. With multiple prompts you can try to generalize the merge so that it works on a variety of subjects.
It's the same principle as training a neural network on a variety of inputs rather than on just a few: it helps you avoid overfitting to your data.
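To illustrate the idea, here is a minimal, hypothetical sketch of the difference between a single-prompt and a multi-prompt objective. The `generate` and `score` functions are toy stubs standing in for image generation and the score model; the names and logic are assumptions for illustration, not the project's actual API.

```python
# Toy stand-ins: `generate` mimics sampling from a merged model,
# `score` mimics an aesthetic/score model. Both are hypothetical.

def generate(merge_weights, prompt):
    # Pretend the "image" is just the weight sum paired with the prompt.
    return (sum(merge_weights), prompt)

def score(image):
    # Arbitrary prompt-dependent score, so different prompts score differently.
    value, prompt = image
    return value * (1 + len(prompt) % 3)

def objective(merge_weights, prompts):
    # Averaging the score over several prompts rewards merges that work
    # across subjects, instead of overfitting the merge to one prompt.
    return sum(score(generate(merge_weights, p)) for p in prompts) / len(prompts)

single = objective([0.5, 0.5], ["a dog"])
multi = objective([0.5, 0.5], ["a dog", "a cat", "a plate of food"])
```

The single-prompt objective can be maximized by a merge that only renders dogs well; the averaged objective cannot.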
Isn't merging 25 UNet block weights different from training a neural network?
Whether the prompt was a dog, a cat, or food, the result would eventually be excluded or evaluated by the score model, so I don't think multiple prompts would have much of an impact.
I guess I'll have to run more experiments on this and collect meaningful data.
Basically what s1dlx said. I find it useful if Model A has no idea what (concept X) is and you are trying to keep it intact. This works well if both models are of equal quality, but if (concept X) lives in a lower-fidelity model, you'll have to track down which blocks hold the concept and make a custom merge: Model A plus the relevant blocks from Model B. Then multiple prompts become rather important for weeding out the blocks that produce mangled results for (concept X) in that merge.
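A rough sketch of that custom block merge, assuming the usual 25 UNet blocks (12 input, 1 middle, 12 output). The state dicts are toy dicts keyed by block name rather than real weight tensors, and `custom_merge` is a hypothetical helper, not part of any merging tool:

```python
# Hypothetical block names mirroring the 25 UNet blocks of a typical merge.
BLOCKS = [f"input_{i}" for i in range(12)] + ["middle"] + [f"output_{i}" for i in range(12)]

model_a = {b: 0.0 for b in BLOCKS}  # Model A: no knowledge of (concept X)
model_b = {b: 1.0 for b in BLOCKS}  # Model B: holds (concept X)

def custom_merge(a, b, blocks_from_b):
    # Start from Model A wholesale, then overwrite the blocks believed
    # to carry (concept X) with Model B's weights.
    merged = dict(a)
    for blk in blocks_from_b:
        merged[blk] = b[blk]
    return merged

# Guess which blocks hold the concept, then evaluate the merge on
# multiple prompts to weed out choices that mangle (concept X).
merged = custom_merge(model_a, model_b, ["middle", "output_3", "output_4"])
```

The multi-prompt evaluation then acts as the filter: block selections that break (concept X) on some subjects score poorly and get discarded.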