Replies: 3 comments 2 replies
-
It is not unloaded between API calls. Any node that does not exist with the same id and type between the execution of the first prompt and the second is treated as a deleted node. If there are deleted nodes, or nodes whose widget values have changed, the cache of every node affected by those nodes' results is invalidated. If you want to keep the checkpoint and controlnet models loaded between two prompt executions, there are two approaches:
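To make the invalidation rule above concrete, here is a minimal Python sketch of it. The `stale_nodes` helper and its diff/propagation logic are illustrative only, not ComfyUI's actual implementation; the prompt dicts use the real API JSON shape (node id keys, `class_type`, `inputs`) with `CheckpointLoaderSimple` and `KSampler` as example node types. Because the ids and types stay identical and only the sampler's seed widget changes, only the sampler (and anything downstream of it) would re-execute; the loader's cached checkpoint stays valid.

```python
def stale_nodes(prev, curr):
    """Sketch of the cache rule: a node is treated as deleted if its
    id + class_type no longer match; a node whose input values changed
    re-executes; invalidation then propagates to every node consuming
    a stale node's output. Illustrative only."""
    stale = set()
    for nid, node in curr.items():
        old = prev.get(nid)
        if old is None or old["class_type"] != node["class_type"]:
            stale.add(nid)          # deleted/replaced node
        elif old["inputs"] != node["inputs"]:
            stale.add(nid)          # widget or link value changed
    # Propagate downstream; links look like [node_id, output_index].
    changed = True
    while changed:
        changed = False
        for nid, node in curr.items():
            if nid in stale:
                continue
            for v in node["inputs"].values():
                if isinstance(v, list) and len(v) == 2 and str(v[0]) in stale:
                    stale.add(nid)
                    changed = True
                    break
    return stale

prompt_a = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},
    "2": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "seed": 1}},
}
prompt_b = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},
    "2": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "seed": 2}},
}
print(stale_nodes(prompt_a, prompt_b))  # → {'2'}: loader "1" stays cached
```

Changing the loader's `ckpt_name` instead would mark node "1" stale and propagate to the sampler, which matches the behavior described above.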
-
Are the IDs in the API JSON meant to be unique between different API calls? When I export a workflow they are usually small integers, so there would be a large risk of collisions when I have two saved workflows from different Comfy runs.
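One defensive option when juggling several exported workflows with overlapping small-integer ids is to namespace each workflow's ids yourself before submitting or tracking them. The sketch below is purely illustrative (not a ComfyUI API); it rewrites both the node-id keys and the `[id, output_index]` link references, using the standard API JSON shape. Note the link detection is a heuristic (any two-element list).

```python
def namespace_ids(prompt, prefix):
    """Return a copy of an API-format prompt with every node id prefixed,
    including the [id, output_index] links that reference other nodes.
    Illustrative sketch only."""
    renamed = {}
    for nid, node in prompt.items():
        node = dict(node)
        inputs = {}
        for key, val in node["inputs"].items():
            if isinstance(val, list) and len(val) == 2:
                # Looks like a link to another node's output: rewrite it.
                inputs[key] = [f"{prefix}{val[0]}", val[1]]
            else:
                inputs[key] = val  # plain widget value, keep as-is
        node["inputs"] = inputs
        renamed[f"{prefix}{nid}"] = node
    return renamed

demo = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},
    "2": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "seed": 1}},
}
print(namespace_ids(demo, "wf1:"))
```

Keeping each workflow's (prefixed) ids stable from one call to the next preserves whatever per-node caching the server does for that workflow.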
-
My use case is that I have a few workflows with different prompts and Controlnet inputs that run frequently. If I understand LoRA usage in Comfy correctly, changing the LoRA also requires loading a new model onto the GPU. I'm not sure whether loading the model from CPU RAM to VRAM is the limiting factor, though; maybe having the model in the disk cache is already quite good. I'll see what works and have a look at the Inspire nodes. Thanks for your help so far.
-
When running a workflow, particularly via the API, are the checkpoints and controlnets unloaded between two calls? If not, how many different checkpoints stay loaded, and when are they unloaded? Does Comfy unload everything immediately, or is there some kind of garbage collection?
I'm looking for the most efficient way to automate ComfyUI workflows, and the node concept looks like it might start from zero on each API call. I'm also not sure how checkpoint caching is handled in general, since checkpoints are not loaded in advance before the workflow runs for the first time.