- On the first start, a script will download the latest stable ComfyUI, ComfyUI-Manager and TAESD models. Since this is a slim image, large SD models are not on the download list.
- The whole ComfyUI will be stored in a local folder (`./storage/ComfyUI`).
- If you already have a ComfyUI bundle, put it there and create an empty file (`./storage/.download-complete`) so the start script will skip downloading.
- Use ComfyUI-Manager (in the ComfyUI web page) to update ComfyUI, manage custom nodes, and download models.
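The bundle-reuse step above can be sketched as follows. The `~/ComfyUI` path is a hypothetical location for your existing bundle; adjust it to wherever yours lives.

```shell
# Sketch: reuse an existing ComfyUI bundle instead of letting the start
# script download one. ~/ComfyUI is a hypothetical example location.
mkdir -p ./storage
if [ -d ~/ComfyUI ] && [ ! -d ./storage/ComfyUI ]; then
  cp -r ~/ComfyUI ./storage/ComfyUI
fi
# The empty marker file tells the start script to skip downloading.
touch ./storage/.download-complete
```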
- NVIDIA GPU with ≥6GB VRAM
- Latest NVIDIA GPU driver
  - Either the Game or Studio edition will work.
  - You don't need to install drivers inside containers. Just make sure the driver is working on your host OS.
- Docker/Podman installed
  - Linux users may need to install the NVIDIA Container Toolkit (on the host OS only). It enables GPU access for containers.
  - Windows users can use Docker Desktop with WSL2 enabled, or Podman Desktop with WSL2 and GPU enabled.
  - WSL2 users please note that NTFS ←→ ext4 "translation" is very slow (down to <100 MiB/s), so you probably want to use an in-WSL folder (or a Docker volume) to store ComfyUI.
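Before starting any container, it's worth confirming the driver works on the host as the prerequisites above require. A minimal sketch: `nvidia-smi` ships with the NVIDIA driver, so if it reports your GPU here, containers started with GPU access should see it too.

```shell
# Quick host-side check (run on the host OS, not inside a container).
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_info=$(nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv,noheader)
else
  gpu_info="NVIDIA driver not detected on this host"
fi
echo "$gpu_info"
```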
```bash
mkdir -p storage

docker run -it --rm \
  --name comfyui-cu124 \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="--fast" \
  yanwk/comfyui-boot:cu124-slim
```
```bash
mkdir -p storage

podman run -it --rm \
  --name comfyui-cu124 \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="--fast" \
  docker.io/yanwk/comfyui-boot:cu124-slim
```
Once the app is loaded, visit http://localhost:8188/
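If the page doesn't come up and the logs point to GPU problems, one way to isolate a driver/runtime issue from ComfyUI itself is to run `nvidia-smi` in a bare CUDA container. This is a sketch, not part of the image's tooling; the CUDA image tag is an example.

```shell
# If this prints your GPU, the container runtime's GPU passthrough works
# and the problem lies elsewhere.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Podman equivalent:
# podman run --rm --device nvidia.com/gpu=all --security-opt label=disable \
#   docker.io/nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```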
The start script will create two example user script files on the first start:

```
./storage/user-scripts/set-proxy.sh
./storage/user-scripts/pre-start.sh
```

`set-proxy.sh` is for setting up a proxy; it runs before everything else. `pre-start.sh` is for user operations; it runs just before ComfyUI starts.
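As an illustration, `set-proxy.sh` could look like the following. The proxy address is a made-up example — replace it with your own.

```shell
# Write hypothetical example contents for ./storage/user-scripts/set-proxy.sh.
mkdir -p ./storage/user-scripts
cat > ./storage/user-scripts/set-proxy.sh <<'EOF'
#!/bin/bash
# Example proxy address only - point these at your own proxy.
export HTTP_PROXY="http://127.0.0.1:7890"
export HTTPS_PROXY="$HTTP_PROXY"
EOF
```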
- The `.cache` folder is used to save model files downloaded by HuggingFace Hub and PyTorch Hub. They are not necessarily "cache", so you may not want to delete them.
- You can perform a major update (e.g. to a new PyTorch version) by swapping the Docker image:

```bash
docker pull yanwk/comfyui-boot:cu124-slim

# Remove the container if you're not using an ephemeral one
docker rm comfyui-cu124

# Then re-run the 'docker run' command above
```
| args | description |
|---|---|
| `--lowvram` | If your GPU only has 4GB VRAM. |
| `--novram` | If adding `--lowvram` still runs out of memory. |
| `--cpu` | Run on CPU. It's pretty slow. |
| `--use-pytorch-cross-attention` | Disable xFormers. Not recommended for video workflows or Linux hosts. |
| `--preview-method taesd` | Enable higher-quality previews with TAESD. ComfyUI-Manager will override this (settings available in the Manager UI). |
| `--front-end-version Comfy-Org/ComfyUI_frontend@latest` | Use the most up-to-date frontend version. |
| `--fast` | Enable experimental optimizations. Currently the only optimization is float8_e4m3fn matrix multiplication on 4000/Ada-series or later NVIDIA cards. Might break things / lower quality. See the commit. |

More `CLI_ARGS` are available at ComfyUI.
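Multiple flags can be combined by space-separating them inside `CLI_ARGS`. A sketch, reusing the `docker run` command from above — the particular flag combination is only an example:

```shell
docker run -it --rm \
  --name comfyui-cu124 \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="--lowvram --preview-method taesd" \
  yanwk/comfyui-boot:cu124-slim
```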
| Variable | Example Value | Memo |
|---|---|---|
| `HTTP_PROXY` | | Set HTTP proxy. |
| `PIP_INDEX_URL` | | Set mirror site for the Python Package Index. |
| `HF_ENDPOINT` | | Set mirror site for HuggingFace Hub. |
| `HF_TOKEN` | `'hf_your_token'` | Set HuggingFace Access Token. More |
| `HF_HUB_ENABLE_HF_TRANSFER` | `1` | Enable HuggingFace Hub experimental high-speed file transfers. Only makes sense if you have a >1000Mbps and VERY stable connection (e.g. a cloud server). More |
| `TORCH_CUDA_ARCH_LIST` | `7.5` | Build target for PyTorch and its extensions. For most users no setup is needed, as it will be selected automatically on Linux. When needed, set only one build target, just for your GPU. More |
| `CMAKE_ARGS` | `'-DBUILD_opencv_world=ON -DWITH_CUDA=ON -DCUDA_FAST_MATH=ON -DWITH_CUBLAS=ON -DWITH_NVCUVID=ON'` | Build options for CMake projects using CUDA. |
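These variables are passed to the container the same way as `CLI_ARGS`, via `-e` flags on `docker run` / `podman run`. A sketch — the token is the table's placeholder and the index URL is just PyPI's default, not a mirror recommendation:

```shell
docker run -it --rm \
  --name comfyui-cu124 \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e HF_TOKEN="hf_your_token" \
  -e PIP_INDEX_URL="https://pypi.org/simple" \
  -e CLI_ARGS="--fast" \
  yanwk/comfyui-boot:cu124-slim
```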