- stable-diffusion-webui from https://github.com/AUTOMATIC1111/stable-diffusion-webui (found under /opt/stable-diffusion-webui)
- with TensorRT extension from https://github.com/AUTOMATIC1111/stable-diffusion-webui-tensorrt
- see the tutorial at the Jetson Generative AI Lab
This container has a default run command that will automatically start the webserver like this:
cd /opt/stable-diffusion-webui && python3 launch.py \
--data=/data/models/stable-diffusion \
--enable-insecure-extension-access \
--xformers \
--listen \
--port=7860
After starting the container, you can navigate your browser to http://$IP_ADDRESS:7860 (substitute the address or hostname of your device). The server will automatically download the default model (stable-diffusion-1.5) during startup.
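If you would rather drive the server from a script than from the browser, the webui also exposes a REST API, but only when launched with the extra --api flag (not part of the default run command above). The snippet below is a minimal sketch that posts a prompt to the /sdapi/v1/txt2img endpoint and decodes the first base64-encoded image in the response:
# assumes the webserver was relaunched with the additional --api flag
curl -s http://$IP_ADDRESS:7860/sdapi/v1/txt2img \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "a photo of an astronaut riding a horse on mars", "steps": 20}' \
  | python3 -c "import sys, json, base64; open('out.png','wb').write(base64.b64decode(json.load(sys.stdin)['images'][0]))"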
Other configuration arguments can be found at AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings
- --medvram (sacrifice some performance for low VRAM usage)
- --lowvram (sacrifice a lot of speed for very low VRAM usage)
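For example, a minimal sketch of relaunching the webserver with one of these flags from a shell inside the container (the same arguments as the default run command above, plus --medvram):
# sketch: the default launch command with an added low-VRAM flag
cd /opt/stable-diffusion-webui && python3 launch.py \
  --data=/data/models/stable-diffusion \
  --enable-insecure-extension-access \
  --xformers \
  --listen \
  --port=7860 \
  --medvram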
See the stable-diffusion container to run image generation from a script (txt2img.py) as opposed to the web UI.
Negative prompts: https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/7857
Stable Diffusion XL
- https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl
- https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
- https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0
- https://stable-diffusion-art.com/sdxl-model/
CONTAINERS
| stable-diffusion-webui | |
|---|---|
| Builds | |
| Requires | L4T ['>=34.1.0'] |
| Dependencies | build-essential cuda cudnn python numpy cmake onnx pytorch:2.2 torchvision huggingface_hub rust transformers xformers pycuda opencv tensorrt onnxruntime |
| Dependants | l4t-diffusion |
| Dockerfile | Dockerfile |
| Images | dustynv/stable-diffusion-webui:r35.2.1 (2024-02-02, 7.3GB) dustynv/stable-diffusion-webui:r35.3.1 (2024-02-02, 7.3GB) dustynv/stable-diffusion-webui:r35.4.1 (2024-02-02, 7.3GB) dustynv/stable-diffusion-webui:r36.2.0 (2024-02-02, 8.9GB) |
| Notes | disabled on JetPack 4 |
CONTAINER IMAGES
| Repository/Tag | Date | Arch | Size |
|---|---|---|---|
| dustynv/stable-diffusion-webui:r35.2.1 | 2024-02-02 | arm64 | 7.3GB |
| dustynv/stable-diffusion-webui:r35.3.1 | 2024-02-02 | arm64 | 7.3GB |
| dustynv/stable-diffusion-webui:r35.4.1 | 2024-02-02 | arm64 | 7.3GB |
| dustynv/stable-diffusion-webui:r36.2.0 | 2024-02-02 | arm64 | 8.9GB |
Container images are compatible with other minor versions of JetPack/L4T:
• L4T R32.7 containers can run on other versions of L4T R32.7 (JetPack 4.6+)
• L4T R35.x containers can run on other versions of L4T R35.x (JetPack 5.1+)
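To check which L4T release your device is running (and therefore which of the tags above is compatible), one way is to read the release file that JetPack installs:
# prints the L4T release and revision of the device (e.g. R35 vs R36)
cat /etc/nv_tegra_release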
RUN CONTAINER
To start the container, you can use jetson-containers run and autotag, or manually put together a docker run command:
# automatically pull or build a compatible container image
jetson-containers run $(autotag stable-diffusion-webui)
# or explicitly specify one of the container images above
jetson-containers run dustynv/stable-diffusion-webui:r35.3.1
# or if using 'docker run' (specify image and mounts/etc)
sudo docker run --runtime nvidia -it --rm --network=host dustynv/stable-diffusion-webui:r35.3.1
jetson-containers run forwards arguments to docker run with some defaults added (like --runtime nvidia, mounting a /data cache, and detecting devices).
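For reference, a rough manual docker run equivalent of those defaults (the host-side data path is an assumption; substitute wherever your jetson-containers checkout keeps its data directory):
# sketch of roughly what `jetson-containers run` expands to
sudo docker run --runtime nvidia -it --rm --network=host \
  --volume /path/to/jetson-containers/data:/data \
  dustynv/stable-diffusion-webui:r35.3.1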
autotag finds a container image that's compatible with your version of JetPack/L4T - either locally, pulled from a registry, or by building it.
To mount your own directories into the container, use the -v or --volume flags:
jetson-containers run -v /path/on/host:/path/in/container $(autotag stable-diffusion-webui)
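For example, since the default run command points --data at /data/models/stable-diffusion, you could overlay a host directory of models there (the host path below is just a placeholder):
jetson-containers run -v /path/to/my/models:/data/models/stable-diffusion $(autotag stable-diffusion-webui)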
To launch the container running a command, as opposed to an interactive shell:
jetson-containers run $(autotag stable-diffusion-webui) my_app --abc xyz
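For this container, a concrete sketch of that would be overriding the default run command to pass extra webui flags (such as the low-VRAM flags mentioned earlier):
# sketch: start the container and launch the webserver with an extra flag
jetson-containers run $(autotag stable-diffusion-webui) /bin/bash -c \
  "cd /opt/stable-diffusion-webui && python3 launch.py --data=/data/models/stable-diffusion --xformers --listen --port=7860 --medvram"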
You can pass any options to it that you would to docker run, and it'll print out the full command that it constructs before executing it.
BUILD CONTAINER
If you use autotag as shown above, it'll ask to build the container for you if needed. To manually build it, first do the system setup, then run:
jetson-containers build stable-diffusion-webui
The dependencies from above will be built into the container, and it'll be tested during the build. Run it with --help for build options.