I wanted to use some larger models in my local ComfyUI workflows, but I'm too GPU poor to run them myself. Model support is currently limited, but I'll add more when I find the time. Supported models (t2i = text-to-image, i2i = image-to-image, i2t = image-to-text):
- Flux t2i (fal, replicate, runware)
- Flux i2i (fal)
- Flux w/ LoRAs and i2i (fal)
- AuraFlow t2i (fal)
- SoteDiffusion t2i (fal)
- Stable Cascade t2i (fal)
- LLaVA 1.5 13B i2t (fal)
- LLaVA 1.6 34B i2t (fal)
- SDv1 t2i w/ LoRAs (runware)
- SDXL t2i w/ LoRAs (runware)
- Install the extension and its dependencies (preferably with ComfyUI Manager)
- Place your API key/token in a text file in the ComfyUI-Cloud-APIs/keys folder
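The keys folder is just a directory of plain text files, one key per file, so the nodes can offer the files as a dropdown and read whichever one you pick. A minimal sketch of that lookup (the function names and default path here are illustrative, not the extension's actual code):

```python
import os

def list_key_files(keys_dir="ComfyUI-Cloud-APIs/keys"):
    """Return the .txt files that can hold an API key, sorted for a stable dropdown."""
    return sorted(f for f in os.listdir(keys_dir) if f.endswith(".txt"))

def read_key(keys_dir, filename):
    """Read a key file, stripping the trailing newline most editors add."""
    with open(os.path.join(keys_dir, filename), encoding="utf-8") as fh:
        return fh.read().strip()
```

Stripping whitespace matters: a key pasted with a trailing newline will otherwise fail authentication in confusing ways.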
- Install the extension with the "Install via Git URL" option in ComfyUI Manager
- Create an account at fal.ai
- Go to https://fal.ai/dashboard/keys and click "Add key"
- Name the key, then copy it into a text file in the ComfyUI-Cloud-APIs/keys folder (you can delete the placeholder nokey.txt file)
- Consult the model pages at https://fal.ai/models to get an idea of how much each generation will cost
- Go to https://fal.ai/dashboard/billing and top up your account. For your financial well-being I recommend against automated top-ups, but I can't stop you.
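If you want to sanity-check a fal key outside ComfyUI: fal's hosted HTTP endpoints take the key verbatim in an `Authorization: Key <KEY>` header. A sketch that only builds the request, without sending it or spending credits (the `fal-ai/flux/dev` endpoint path is just an example):

```python
import json
import urllib.request

def build_fal_request(key, payload, endpoint="https://fal.run/fal-ai/flux/dev"):
    """Build a POST request for a fal endpoint; fal wants the raw key after the literal word 'Key'."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Key {key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Send it with `urllib.request.urlopen(...)` if you actually want to run a (billed) test generation.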
- Install the extension with the "Install via Git URL" option in ComfyUI Manager
- Create an account at https://replicate.com/
- Go to https://replicate.com/account/api-tokens and copy your token (or create a new one)
- Copy the token into a text file in the ComfyUI-Cloud-APIs/keys folder (you can delete the placeholder nokey.txt file)
- Consult https://replicate.com/explore to get an idea of how much each generation will cost
- Go to https://replicate.com/account/billing to set up billing when you run out of free usage
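Replicate's REST API authenticates with a Bearer token rather than fal's `Key` scheme, and their docs describe `GET https://api.replicate.com/v1/account` as a cheap way to verify a token. A sketch that only constructs the request (pass it to `urllib.request.urlopen` for the live check):

```python
import urllib.request

def build_replicate_check(token):
    """Build a GET request for Replicate's account endpoint; a valid token returns account info."""
    return urllib.request.Request(
        "https://api.replicate.com/v1/account",
        headers={"Authorization": f"Bearer {token}"},
    )
```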