
StableCascade image generation #1979

Conversation

aleksandr-mokrov
Contributor

@aleksandr-mokrov commented Apr 29, 2024

CVS-139009


@aleksandr-mokrov marked this pull request as ready for review April 30, 2024 09:40
@aleksandr-mokrov requested review from a team and apaniukov and removed request for a team April 30, 2024 09:41

review-notebook-app bot commented Apr 30, 2024


eaidova commented on 2024-04-30T11:28:37Z
----------------------------------------------------------------

"nncf>=2.10.0"


aleksandr-mokrov commented on 2024-05-01T11:01:18Z
----------------------------------------------------------------

Updated
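
For reference, the pinned requirement lives in the notebook's install cell; a minimal sketch (only the nncf pin comes from the review, the other packages and versions are assumptions):

    %pip install -q "openvino>=2024.1.0" "diffusers>=0.27.0" "nncf>=2.10.0"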


review-notebook-app bot commented Apr 30, 2024


eaidova commented on 2024-04-30T11:28:37Z
----------------------------------------------------------------

Line #7.    prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", torch_dtype=torch.float32)

Do we really need to load and run the PyTorch model every time? I think we can reduce running time and memory consumption if we skip it and load the PyTorch model only when we need to convert the model.


aleksandr-mokrov commented on 2024-05-01T11:04:52Z
----------------------------------------------------------------

Loading doesn't affect it: the models are replaced one by one, each with a larger one. But inference does increase memory consumption. Added a widget with the choice of whether to perform inference with the original model, skipping inference by default.
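
A minimal sketch of the resulting pattern, assuming a hypothetical PRIOR_OV_PATH for the converted IR (the notebook's actual paths and conversion code may differ):

    from pathlib import Path

    import torch

    PRIOR_OV_PATH = Path("stable-cascade-prior.xml")  # hypothetical IR path

    if not PRIOR_OV_PATH.exists():
        # Load the PyTorch pipeline only when conversion is actually needed
        from diffusers import StableCascadePriorPipeline

        prior = StableCascadePriorPipeline.from_pretrained(
            "stabilityai/stable-cascade-prior", torch_dtype=torch.float32
        )
        # ... export the submodels with ov.convert_model() / ov.save_model() ...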


review-notebook-app bot commented Apr 30, 2024


eaidova commented on 2024-04-30T11:28:38Z
----------------------------------------------------------------

Not sure this is a suitable title, as the code below only selects a device without compiling the model.


aleksandr-mokrov commented on 2024-05-01T11:05:08Z
----------------------------------------------------------------

Changed
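
The retitled cell presumably only builds a device selector; a common pattern in the OpenVINO notebooks looks like this (a sketch, not the exact cell):

    import ipywidgets as widgets
    import openvino as ov

    core = ov.Core()
    # Offer every available inference device plus AUTO, which picks at runtime
    device = widgets.Dropdown(
        options=core.available_devices + ["AUTO"],
        value="AUTO",
        description="Device:",
    )
    device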


review-notebook-app bot commented Apr 30, 2024


eaidova commented on 2024-04-30T11:28:39Z
----------------------------------------------------------------

Line #2.        def __init__(self, prior_path):

Could you please add the option to provide the device as an argument of __init__? It gives users the opportunity to experiment with devices, e.g. to offload pipeline parts onto different devices.


aleksandr-mokrov commented on 2024-05-01T11:05:15Z
----------------------------------------------------------------

Added
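
A sketch of the requested change, with hypothetical class and attribute names:

    import openvino as ov

    core = ov.Core()

    class OVStableCascadePrior:  # hypothetical wrapper name
        def __init__(self, prior_path, device="AUTO"):
            # Accepting the device here lets users offload pipeline parts
            # to different devices, e.g. prior on GPU and decoder on CPU
            self.prior = core.compile_model(prior_path, device)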


review-notebook-app bot commented Apr 30, 2024


eaidova commented on 2024-04-30T11:28:40Z
----------------------------------------------------------------

Line #10.            gr.Slider(2, 20, step=1, label="Prior guidance scale"),

Could you please provide some explanation of the demo parameters? What is the prior guidance scale responsible for? How does this value affect the result?


aleksandr-mokrov commented on 2024-05-01T11:07:34Z
----------------------------------------------------------------

Added extra information, and a separate guidance scale for the decoder pipeline was also added.
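
For reference, the two sliders with short explanations might look like this (ranges and defaults are assumptions based on the Stable Cascade defaults in diffusers):

    import gradio as gr

    # Higher guidance makes the prior follow the text prompt more closely
    # at the cost of diversity; diffusers defaults to 4.0 for the prior.
    gr.Slider(2, 20, step=1, value=4, label="Prior guidance scale"),
    # The decoder stage has its own, typically much lower, guidance scale;
    # diffusers defaults to 0.0 (guidance disabled) for the decoder.
    gr.Slider(0, 4, step=0.1, value=0, label="Decoder guidance scale"),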

@eaidova added the new notebook label May 1, 2024


review-notebook-app bot commented May 1, 2024


eaidova commented on 2024-05-01T12:55:09Z
----------------------------------------------------------------

Please remove the Gradio cell output.


aleksandr-mokrov commented on 2024-05-01T15:32:29Z
----------------------------------------------------------------

Removed.
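
One way to strip saved widget output before committing (a standard nbconvert invocation; the notebook filename is hypothetical):

    !jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace stable-cascade-image-generation.ipynb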


@eaidova merged commit b3ef8e9 into openvinotoolkit:latest May 1, 2024
15 checks passed