
Is multiple prompt really necessary? #49

Open
ghost opened this issue Apr 8, 2023 · 3 comments
Labels
question Further information is requested

Comments

@ghost

ghost commented Apr 8, 2023

I've tested several times, but I'm still not sure whether using multiple prompts makes a significant difference in the results. It seems that the score model plays the more important role. Do you have any references or explanations comparing the impact of using multiple prompts versus just one?

@s1dlx
Owner

s1dlx commented Apr 8, 2023

It’s up to you.

With one prompt you are optimizing the merge for that prompt alone. With multiple prompts you can try to generalize the merge so that it works on a variety of subjects.

It’s the same principle as training a neural network on a variety of inputs rather than on a few: it helps you avoid overfitting your data.
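To make the analogy concrete, here is a minimal toy sketch (not the repo's actual API; `score` and its targets are invented stand-ins for generating an image and scoring it with the score model). The point is that when the optimizer's objective averages the score over several prompts, a merge that only works for one subject scores lower than one that generalizes:

```python
def score(merged_weight, prompt):
    # Hypothetical stand-in for "generate an image with the merged model
    # and rate it with the score model". Each prompt has a toy target
    # weight; the closer the merge is to it, the higher the score.
    target = {"a dog": 0.2, "a cat": 0.5, "a plate of food": 0.8}[prompt]
    return 1.0 - abs(merged_weight - target)

def objective(merged_weight, prompts):
    # Averaging over prompts is what generalizes the merge: overfitting
    # to one subject drags the mean down on the others.
    return sum(score(merged_weight, p) for p in prompts) / len(prompts)

# A weight tuned perfectly for "a dog" looks worse once other subjects
# are included in the objective.
single = objective(0.2, ["a dog"])
multi = objective(0.2, ["a dog", "a cat", "a plate of food"])
print(single, multi)
```

Whatever optimizer sits on top (Bayesian optimization, grid search, manual tuning) only ever sees this averaged number, which is why the prompt set acts like a training set.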

@s1dlx s1dlx added the question Further information is requested label Apr 8, 2023
@ghost
Author

ghost commented Apr 9, 2023

Isn't merging 25 UNet block weights different from training a neural network?
Whether the prompt was a dog, a cat, or food, the output would eventually be evaluated by the score model, so I don't think multiple prompts would have much of an impact.
I guess I'll have to do more experiments on this and produce meaningful data.

@LoFiApostasy

Basically what s1dlx said. I find it useful when Model A has no idea what (concept X) is and you are trying to keep it intact. This works well if both models are of equal quality, but if (concept X) lives in a lower-fidelity model, you'll have to track down which blocks hold the concept and make a custom merge: Model A plus x blocks from Model B. At that point the multiple prompts become rather important, to weed out the blocks that produce mangled outputs for (concept X) in that merge.
