Support for Ldm3d #304

Merged · 19 commits · Aug 14, 2023
2 changes: 2 additions & 0 deletions docs/source/_toctree.yml
@@ -16,6 +16,8 @@
title: Run Inference
- local: tutorials/stable_diffusion
title: Stable Diffusion
- local: tutorials/stable_diffusion_ldm3d
title: LDM3D
title: Tutorials
- sections:
- local: usage_guides/overview
67 changes: 67 additions & 0 deletions docs/source/tutorials/stable_diffusion_ldm3d.mdx
@@ -0,0 +1,67 @@
<!---
Copyright 2022 The Intel Labs Team Authors and the HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Text-to-(RGB, depth)

LDM3D was proposed in [LDM3D: Latent Diffusion Model for 3D](https://huggingface.co/papers/2305.10853) by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. Unlike existing text-to-image diffusion models such as [Stable Diffusion](./stable_diffusion/overview), which only generate an image, LDM3D generates both an image and a depth map from a given text prompt. With almost the same number of parameters, LDM3D manages to create a latent space that can compress both the RGB images and the depth maps.

The abstract from the paper is:

*This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at [this url](https://t.ly/tdi2).*


## How to generate RGB and depth images?

To generate RGB and depth images with Stable Diffusion LDM3D on Gaudi, you need to instantiate two instances:
- A pipeline with [`GaudiStableDiffusionLDM3DPipeline`]. This pipeline supports *text-to-(rgb, depth) generation*.
- A scheduler with [`GaudiDDIMScheduler`](https://huggingface.co/docs/optimum/habana/package_reference/stable_diffusion_pipeline#optimum.habana.diffusers.GaudiDDIMScheduler). This scheduler has been optimized for Gaudi.

When initializing the pipeline, you have to specify `use_habana=True` to deploy it on HPUs.
Furthermore, to get the fastest possible generations you should enable **HPU graphs** with `use_hpu_graphs=True`.
Finally, you will need to specify a [Gaudi configuration](https://huggingface.co/docs/optimum/habana/package_reference/gaudi_config) which can be downloaded from the Hugging Face Hub.

```python
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionLDM3DPipeline
from optimum.habana.utils import set_seed

model_name = "Intel/ldm3d-4c"

scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

# Set a seed for reproducible generations
set_seed(42)

pipeline = GaudiStableDiffusionLDM3DPipeline.from_pretrained(
model_name,
scheduler=scheduler,
use_habana=True,
use_hpu_graphs=True,
gaudi_config="Habana/stable-diffusion",
)
outputs = pipeline(
prompt=["High quality photo of an astronaut riding a horse in space"],
num_images_per_prompt=1,
batch_size=1,
output_type="pil",
num_inference_steps=40,
guidance_scale=5.0,
negative_prompt=None
)


rgb_image, depth_image = outputs.rgb, outputs.depth
rgb_image[0].save("astronaut_ldm3d_rgb.png")
depth_image[0].save("astronaut_ldm3d_depth.png")
```
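The pipeline returns the RGB images and depth maps as two separate lists of PIL images. If you want a single RGBD array, as described in the paper, the two outputs can be stacked channel-wise. The sketch below uses synthetic stand-ins for `rgb_image[0]` and `depth_image[0]`; the 16-bit single-channel depth mode (`I;16`) is an assumption about the pipeline's output format.

```python
import numpy as np
from PIL import Image

# Synthetic stand-ins for the pipeline outputs rgb_image[0] and depth_image[0];
# the depth map is assumed here to be a 16-bit single-channel image.
rgb = Image.new("RGB", (64, 64), (120, 80, 200))
depth = Image.new("I;16", (64, 64), 1000)

rgb_arr = np.asarray(rgb)                        # (H, W, 3), uint8
depth_arr = np.asarray(depth, dtype=np.uint16)   # (H, W), uint16

# Scale depth down to 8 bits and append it as a fourth channel
depth_8bit = (depth_arr // 256).astype(np.uint8)
rgbd = np.dstack([rgb_arr, depth_8bit])          # (H, W, 4)
print(rgbd.shape)  # → (64, 64, 4)
```

Keeping the depth map at full 16-bit precision (e.g. saving it separately, as in the tutorial above) preserves more geometric detail; the 8-bit packing is only convenient when a single 4-channel image is required downstream.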
29 changes: 29 additions & 0 deletions examples/stable-diffusion/README.md
@@ -86,3 +86,32 @@ python text_to_image_generation.py \
> There are two different checkpoints for Stable Diffusion 2:
> - use [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1) for generating 768x768 images
> - use [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) for generating 512x512 images


### Latent Diffusion Model for 3D (LDM3D)

[LDM3D](https://arxiv.org/abs/2305.10853) generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts.

[Original checkpoint](https://huggingface.co/Intel/ldm3d) and [latest checkpoint](https://huggingface.co/Intel/ldm3d-4c) are open source.
A [demo](https://huggingface.co/spaces/Intel/ldm3d) is also available.

```bash
python text_to_image_generation.py \
--ldm3d_model_name_or_path "Intel/ldm3d-4c" \
--prompts "An image of a squirrel in Picasso style" \
--num_images_per_prompt 10 \
--batch_size 2 \
--height 768 \
--width 768 \
--image_save_dir /tmp/stable_diffusion_images \
--use_habana \
--use_hpu_graphs \
--gaudi_config Habana/stable-diffusion-2 \
--ldm3d
```

> There are three different checkpoints for LDM3D:
> - use [original checkpoint](https://huggingface.co/Intel/ldm3d) to generate outputs from the paper
> - use [the latest checkpoint](https://huggingface.co/Intel/ldm3d-4c) for generating improved results
> - use [the pano checkpoint](https://huggingface.co/Intel/ldm3d-pano) to generate panoramic views

28 changes: 25 additions & 3 deletions examples/stable-diffusion/text_to_image_generation.py
@@ -20,7 +20,7 @@

import torch

from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline
from optimum.habana.diffusers import GaudiDDIMScheduler
from optimum.habana.utils import set_seed


@@ -121,9 +121,25 @@ def main():
),
)
parser.add_argument("--bf16", action="store_true", help="Whether to perform generation in bf16 precision.")
parser.add_argument(
"--ldm3d", action="store_true", help="Use LDM3D to generate an image and a depth map from a given text prompt."
)
parser.add_argument(
"--ldm3d_model_name_or_path",
default="Intel/ldm3d-4c",
type=str,
help="Path to pre-trained model",
)

args = parser.parse_args()

if args.ldm3d:
from optimum.habana.diffusers import GaudiStableDiffusionLDM3DPipeline as GaudiStableDiffusionPipeline

args.model_name_or_path = args.ldm3d_model_name_or_path
else:
from optimum.habana.diffusers import GaudiStableDiffusionPipeline

# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
@@ -174,8 +190,14 @@ def main():
image_save_dir = Path(args.image_save_dir)
image_save_dir.mkdir(parents=True, exist_ok=True)
logger.info(f"Saving images in {image_save_dir.resolve()}...")
for i, image in enumerate(outputs.images):
image.save(image_save_dir / f"image_{i+1}.png")
if args.ldm3d:
for i, rgb in enumerate(outputs.rgb):
rgb.save(image_save_dir / f"rgb_{i+1}.png")
for i, depth in enumerate(outputs.depth):
depth.save(image_save_dir / f"depth_{i+1}.png")
else:
for i, image in enumerate(outputs.images):
image.save(image_save_dir / f"image_{i+1}.png")
else:
logger.warning("--output_type should be equal to 'pil' to save images in --image_save_dir.")

1 change: 1 addition & 0 deletions optimum/habana/diffusers/__init__.py
@@ -1,3 +1,4 @@
from .pipelines.pipeline_utils import GaudiDiffusionPipeline
from .pipelines.stable_diffusion.pipeline_stable_diffusion import GaudiStableDiffusionPipeline
from .pipelines.stable_diffusion.pipeline_stable_diffusion_ldm3d import GaudiStableDiffusionLDM3DPipeline
from .schedulers import GaudiDDIMScheduler