
Update/Fix Pipeline Mixins and ORT Pipelines #2021

Status: Open — wants to merge 66 commits into base: main

Changes from all commits (66 commits)
fcb1690
created auto task mappings
IlyasMoutawwakil Jul 16, 2024
1cbb544
added correct auto classes
IlyasMoutawwakil Jul 18, 2024
cdba70e
created auto task mappings
IlyasMoutawwakil Jul 16, 2024
5bebbd5
added correct auto classes
IlyasMoutawwakil Jul 18, 2024
862e1a4
Merge branch 'auto-diffusion-pipeline' of https://github.com/huggingf…
IlyasMoutawwakil Jul 18, 2024
40b2ac0
added ort/auto diffusion classes
IlyasMoutawwakil Jul 19, 2024
29bfe57
fix ORTPipeline detection
IlyasMoutawwakil Jul 31, 2024
f6df38c
start test refactoring
IlyasMoutawwakil Jul 31, 2024
3123ea5
dynamic dtype
IlyasMoutawwakil Aug 27, 2024
7803ef3
support torch random numbers generator
IlyasMoutawwakil Aug 27, 2024
aa41f42
compact diffusion testing suite
IlyasMoutawwakil Aug 27, 2024
4837828
fix
IlyasMoutawwakil Aug 27, 2024
7504aa3
Merge branch 'main' into auto-diffusion-pipeline
IlyasMoutawwakil Sep 5, 2024
80532b3
test
IlyasMoutawwakil Sep 7, 2024
f99a058
test
IlyasMoutawwakil Sep 7, 2024
781ede7
test
IlyasMoutawwakil Sep 7, 2024
f0e3f2b
use latent-consistency architecture name instead of lcm
IlyasMoutawwakil Sep 7, 2024
80c63d0
fix
IlyasMoutawwakil Sep 7, 2024
a4518f2
add ort diffusion pipeline tests
IlyasMoutawwakil Sep 8, 2024
9f0c7b6
added dummy objects
IlyasMoutawwakil Sep 10, 2024
56d06d4
remove duplicate code
IlyasMoutawwakil Sep 10, 2024
7bfe4a5
update stable diffusion mixin
IlyasMoutawwakil Sep 10, 2024
27c29a8
update latent consistency
IlyasMoutawwakil Sep 10, 2024
a2e5423
update sd for img2img
IlyasMoutawwakil Sep 10, 2024
fdac134
update latent consistency
IlyasMoutawwakil Sep 10, 2024
5023cac
update model parts to use frozen dict
IlyasMoutawwakil Sep 10, 2024
2cd616e
update tests and utils
IlyasMoutawwakil Sep 10, 2024
dceccca
updated all mixins, enabled all tests ; all are passing except some r…
IlyasMoutawwakil Sep 11, 2024
b4e4f41
fix sd xl hidden states
IlyasMoutawwakil Sep 11, 2024
8e35c11
style
IlyasMoutawwakil Sep 11, 2024
475efdf
support testing without diffusers
IlyasMoutawwakil Sep 11, 2024
e2ad89a
remove unnecessary
IlyasMoutawwakil Sep 11, 2024
836a7e2
Merge branch 'auto-diffusion-pipeline' into update-diffusers-mixins
IlyasMoutawwakil Sep 11, 2024
7b4b5bd
revert
IlyasMoutawwakil Sep 11, 2024
390d65d
Merge branch 'main' into auto-diffusion-pipeline
IlyasMoutawwakil Sep 11, 2024
6f404dd
Merge branch 'auto-diffusion-pipeline' into update-diffusers-mixins
IlyasMoutawwakil Sep 11, 2024
7a8396c
export vae encoder by returning its latent distribution parameters
IlyasMoutawwakil Sep 12, 2024
8458705
fix the modeling to handle distributions
IlyasMoutawwakil Sep 12, 2024
76e7f01
create vae class to minimize changes in pipeline mixins
IlyasMoutawwakil Sep 12, 2024
24ee099
remove unnecessary tests
IlyasMoutawwakil Sep 12, 2024
83f2dcc
style
IlyasMoutawwakil Sep 12, 2024
036dc46
style
IlyasMoutawwakil Sep 12, 2024
c1a75c4
update diffusion models export test
IlyasMoutawwakil Sep 12, 2024
48f1329
style
IlyasMoutawwakil Sep 12, 2024
5a674f3
Merge branch 'auto-diffusion-pipeline' into update-diffusers-mixins
IlyasMoutawwakil Sep 12, 2024
5814a34
fall back for when block_out_channels is not in vae config
IlyasMoutawwakil Sep 12, 2024
afbb9af
remove model parts from optimum.onnxruntime
IlyasMoutawwakil Sep 12, 2024
53eedc6
added .to to model parts
IlyasMoutawwakil Sep 13, 2024
a706204
remove custom mixins
IlyasMoutawwakil Sep 13, 2024
002ed68
style
IlyasMoutawwakil Sep 13, 2024
70e4577
Update optimum/exporters/onnx/model_configs.py
IlyasMoutawwakil Sep 13, 2024
381977c
Update optimum/exporters/onnx/model_configs.py
IlyasMoutawwakil Sep 13, 2024
75585be
Merge branch 'auto-diffusion-pipeline' into update-diffusers-mixins
IlyasMoutawwakil Sep 13, 2024
5505a19
Merge branch 'update-diffusers-mixins' of https://github.com/huggingf…
IlyasMoutawwakil Sep 13, 2024
9de3f71
conversion to numpy always work
IlyasMoutawwakil Sep 14, 2024
b2274a1
test adding two new pipelines
IlyasMoutawwakil Sep 14, 2024
79cd9ac
remove duplicated tests
IlyasMoutawwakil Sep 16, 2024
0869f1c
Merge branch 'main' into update-diffusers-mixins
IlyasMoutawwakil Sep 16, 2024
b70b641
match diffusers numpy input
IlyasMoutawwakil Sep 16, 2024
7f77b1c
simplify model saving
IlyasMoutawwakil Sep 16, 2024
4933c7c
extend tests and only translate generators
IlyasMoutawwakil Sep 17, 2024
7d50df3
cleanup
IlyasMoutawwakil Sep 18, 2024
cce0ee8
reduce parent model usage in model parts
IlyasMoutawwakil Sep 18, 2024
86c2b7e
fix
IlyasMoutawwakil Sep 19, 2024
5a443ac
new tiny onnx diffusion model with configs
IlyasMoutawwakil Sep 19, 2024
15c94bc
model_save_path
IlyasMoutawwakil Sep 20, 2024
6 changes: 3 additions & 3 deletions optimum/exporters/onnx/model_configs.py
@@ -1111,7 +1111,7 @@ def ordered_inputs(self, model) -> Dict[str, Dict[int, str]]:


class VaeEncoderOnnxConfig(VisionOnnxConfig):
-    ATOL_FOR_VALIDATION = 1e-2
+    ATOL_FOR_VALIDATION = 1e-4
# The ONNX export of a CLIPText architecture, another Stable Diffusion component, needs the Trilu
# operator support, available since opset 14
DEFAULT_ONNX_OPSET = 14
@@ -1131,12 +1131,12 @@ def inputs(self) -> Dict[str, Dict[int, str]]:
@property
def outputs(self) -> Dict[str, Dict[int, str]]:
return {
"latent_sample": {0: "batch_size", 2: "height_latent", 3: "width_latent"},
"latent_parameters": {0: "batch_size", 2: "height_latent", 3: "width_latent"},
Collaborator: this will result in a breaking change so would keep it if possible

Member (author): but it will be wrong because we don't sample the latent distribution anymore
}


class VaeDecoderOnnxConfig(VisionOnnxConfig):
-    ATOL_FOR_VALIDATION = 1e-3
+    ATOL_FOR_VALIDATION = 1e-4
# The ONNX export of a CLIPText architecture, another Stable Diffusion component, needs the Trilu
# operator support, available since opset 14
DEFAULT_ONNX_OPSET = 14
24 changes: 3 additions & 21 deletions optimum/exporters/utils.py
@@ -46,11 +46,6 @@

 from diffusers import (
     DiffusionPipeline,
-    LatentConsistencyModelImg2ImgPipeline,
-    LatentConsistencyModelPipeline,
-    StableDiffusionImg2ImgPipeline,
-    StableDiffusionInpaintPipeline,
-    StableDiffusionPipeline,
     StableDiffusionXLImg2ImgPipeline,
     StableDiffusionXLInpaintPipeline,
     StableDiffusionXLPipeline,
@@ -92,27 +87,13 @@ def _get_submodels_for_export_diffusion(
Returns the components of a Stable Diffusion model.
"""

-    is_stable_diffusion = isinstance(
-        pipeline, (StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline)
-    )
-    is_stable_diffusion_xl = isinstance(
-        pipeline, (StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline)
-    )
-    is_latent_consistency_model = isinstance(
-        pipeline, (LatentConsistencyModelPipeline, LatentConsistencyModelImg2ImgPipeline)
-    )
-
-    if is_stable_diffusion_xl:
-        projection_dim = pipeline.text_encoder_2.config.projection_dim
-    elif is_stable_diffusion:
-        projection_dim = pipeline.text_encoder.config.projection_dim
-    elif is_latent_consistency_model:
-        projection_dim = pipeline.text_encoder.config.projection_dim
-    else:
-        raise ValueError(
-            f"The export of a DiffusionPipeline model with the class name {pipeline.__class__.__name__} is currently not supported in Optimum. "
-            "Please open an issue or submit a PR to add the support."
-        )
+    projection_dim = pipeline.text_encoder.config.projection_dim

models_for_export = {}

@@ -139,7 +120,8 @@
vae_encoder = copy.deepcopy(pipeline.vae)
if not is_torch_greater_or_equal_than_2_1:
vae_encoder = override_diffusers_2_0_attn_processors(vae_encoder)
-    vae_encoder.forward = lambda sample: {"latent_sample": vae_encoder.encode(x=sample)["latent_dist"].sample()}
+    # we return the distribution parameters to be able to recreate it in the decoder
+    vae_encoder.forward = lambda sample: {"latent_parameters": vae_encoder.encode(x=sample)["latent_dist"].parameters}
models_for_export["vae_encoder"] = vae_encoder

# VAE Decoder https://github.com/huggingface/diffusers/blob/v0.11.1/src/diffusers/models/vae.py#L600
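Since the exported VAE encoder now returns the distribution parameters instead of a sampled latent, the sampling step has to be recreated on the ONNX output at inference time. A minimal sketch of that step, assuming diffusers' `DiagonalGaussianDistribution` convention of concatenating mean and logvar along the channel axis (the function name and the example shapes are illustrative, not part of the PR):

```python
import numpy as np

def sample_from_latent_parameters(latent_parameters: np.ndarray, seed=None) -> np.ndarray:
    # Assumption: mean and logvar are concatenated along the channel axis,
    # as in diffusers' DiagonalGaussianDistribution.
    mean, logvar = np.split(latent_parameters, 2, axis=1)
    logvar = np.clip(logvar, -30.0, 20.0)  # same clamping bounds diffusers uses
    std = np.exp(0.5 * logvar)
    noise = np.random.default_rng(seed).standard_normal(mean.shape).astype(mean.dtype)
    return mean + std * noise

# e.g. an exported SD VAE encoder would yield (batch, 2 * latent_channels, h, w)
params = np.zeros((1, 8, 64, 64), dtype=np.float32)
latents = sample_from_latent_parameters(params, seed=0)
print(latents.shape)  # (1, 4, 64, 64)
```

Keeping the randomness outside the ONNX graph is what lets the pipeline honor a user-supplied generator, which a baked-in `.sample()` call could not.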
16 changes: 16 additions & 0 deletions optimum/onnx/utils.py
@@ -71,6 +71,22 @@ def _get_external_data_paths(src_paths: List[Path], dst_paths: List[Path]) -> Tu…
return src_paths, dst_paths


def _get_model_external_data_paths(model_path: Path) -> List[Path]:
"""
Gets external data paths from the model.
"""

onnx_model = onnx.load(str(model_path), load_external_data=False)
model_tensors = _get_initializer_tensors(onnx_model)
# filter out tensors that are not external data
model_tensors_ext = [
ExternalDataInfo(tensor).location
for tensor in model_tensors
if tensor.HasField("data_location") and tensor.data_location == onnx.TensorProto.EXTERNAL
]
return [model_path.parent / tensor_name for tensor_name in model_tensors_ext]


def check_model_uses_external_data(model: onnx.ModelProto) -> bool:
"""
Checks if the model uses external data.
19 changes: 19 additions & 0 deletions optimum/onnxruntime/base.py
@@ -71,6 +71,25 @@ def dtype(self):

return None

def to(self, *args, device: Optional[Union[torch.device, str, int]] = None, dtype: Optional[torch.dtype] = None):
for arg in args:
if isinstance(arg, torch.device):
device = arg
elif isinstance(arg, torch.dtype):
dtype = arg

if device is not None and device != self.device:
raise ValueError(
"Cannot change the device of a model part without changing the device of the parent model. "
"Please use the `to` method of the parent model to change the device."
)

if dtype is not None and dtype != self.dtype:
raise NotImplementedError(
f"Cannot change the dtype of the model from {self.dtype} to {dtype}. "
f"Please export the model with the desired dtype."
)

@abstractmethod
def forward(self, *args, **kwargs):
pass
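The new `to` accepts torch-style positional arguments but only validates them: a model part cannot move or cast independently of its parent ORT session. A toy stand-in showing the expected behavior (the class name, attribute values, and error messages here are invented for illustration):

```python
import torch

class DummyModelPart:
    # stand-in mirroring the guard logic of the new `to` method
    device = torch.device("cpu")
    dtype = torch.float32

    def to(self, *args, device=None, dtype=None):
        # torch-style dispatch: positional args may be a device or a dtype
        for arg in args:
            if isinstance(arg, torch.device):
                device = arg
            elif isinstance(arg, torch.dtype):
                dtype = arg
        if device is not None and torch.device(device) != self.device:
            raise ValueError("change the device through the parent model's `to`")
        if dtype is not None and dtype != self.dtype:
            raise NotImplementedError("re-export the model with the desired dtype")
        return self

part = DummyModelPart()
part.to(torch.device("cpu"))  # same device: accepted as a no-op
try:
    part.to(torch.float16)    # dtype change: rejected
except NotImplementedError as exc:
    print("rejected:", exc)
```

Raising instead of silently ignoring the call surfaces a common user mistake, since an ONNX Runtime session's placement and precision are fixed at export/load time.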