
About general ability #5

Open
lucasjinreal opened this issue Jul 7, 2023 · 7 comments
Comments

@lucasjinreal

[two screenshots attached]

@encounter1997
Collaborator

Hi, thanks for your interest.

We want to clarify that this is an inference-only demo that only supports our trained styles (denoted 102, 103, 106, etc.; the corresponding style images are shown in the examples). As can be seen, the selected style is None, which means this is inference with the vanilla MUSE-Pytorch model, without our trained StyleDrop. We will update the gradio demo to better clarify its usage.

The gradio demo does not yet support training & inference with custom images, but you can still refer to the instructions here in our readme to train your own StyleDrop weights.
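To illustrate the distinction above, here is a minimal sketch (hypothetical names, not the repo's actual demo code) of how a style dropdown could dispatch between vanilla MUSE weights and a trained StyleDrop adapter:

```python
# Hypothetical sketch: how a demo might pick weights based on the selected
# style. `base_weights` and `styledrop_weights` are illustrative names,
# not identifiers from the StyleDrop repo.
def resolve_weights(style, base_weights, styledrop_weights):
    if style is None:
        # "None" in the dropdown -> plain MUSE-Pytorch inference,
        # no StyleDrop adapter applied.
        return base_weights
    # A trained style id such as "102" selects its fine-tuned adapter.
    return styledrop_weights[style]
```

With style left as None, only the base model's output is shown, which is why the screenshots above do not reflect any trained style.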

Welcome further feedback on training your custom StyleDrop weights!

@lucasjinreal
Author

@encounter1997 I see. When will training be supported, at least on my local machine?

@encounter1997
Collaborator

Could you provide information about the specifications of your local machine? Our code has been tested on Ubuntu 22.04 with a GPU that has 24GB of memory.

@lucasjinreal
Author

@encounter1997 Will 16GB of memory work?

@2blackbar

How long does training take on a 3090, and what is the max resolution?

@zideliu
Owner

zideliu commented Jul 8, 2023

How long does training take on a 3090, and what is the max resolution?

If you set `config.sample_interval = False` here, 10 minutes is enough; the max resolution is 256x256.
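For context, a sketch of what disabling in-training sampling might look like, assuming an ml_collections-style config object (the field names here are illustrative, not necessarily the repo's exact config):

```python
from types import SimpleNamespace

def get_config():
    # Illustrative config; the real repo's fields may differ.
    config = SimpleNamespace()
    config.max_train_steps = 1000
    # Setting sample_interval to False skips the periodic sampling pass
    # during training, which is what makes the ~10-minute run possible.
    config.sample_interval = False
    return config

def should_sample(config, step):
    # Sample only when an interval is set and the current step hits it.
    return bool(config.sample_interval) and step % int(config.sample_interval) == 0
```

Periodic sampling dominates wall-clock time for short fine-tunes, so skipping it is the easy speedup.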

@zideliu
Owner

zideliu commented Jul 8, 2023

@encounter1997 Will 16GB of memory work?

Not entirely sure. You can try using an fp16 CLIP by setting `prompt_model, _, _ = open_clip.create_model_and_transforms('ViT-bigG-14', 'laion2b_s39b_b160k', device='cpu')` here. Give it a try.
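As a rough back-of-envelope for why the CLIP model's precision and placement matter on a 16GB card (the parameter count below is approximate; ViT-bigG-14 is on the order of 2.5B parameters):

```python
def model_memory_gb(n_params, bytes_per_param):
    # Raw parameter storage only; activations, optimizer state, and the
    # MUSE model itself add more on top of this.
    return n_params * bytes_per_param / 1024**3

N_PARAMS = 2.5e9  # approximate parameter count of ViT-bigG-14

fp32_gb = model_memory_gb(N_PARAMS, 4)  # ~9.3 GB just for CLIP weights
fp16_gb = model_memory_gb(N_PARAMS, 2)  # ~4.7 GB at half precision
```

Halving CLIP's precision (or keeping it on CPU via `device='cpu'`) frees several GB of GPU memory for the MUSE model and training state.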
