Can I export models to ONNX? #592

Open
Fissium opened this issue Jun 11, 2024 · 2 comments · Fixed by #614
Labels: documentation (Improvements or additions to documentation)

Comments

Fissium commented Jun 11, 2024

OML does not have built-in capabilities for exporting models to ONNX. However, PyTorch supports this natively.

You can export a model to ONNX using the following example:

import onnx
import onnx.checker
import torch
from oml.models import ViTExtractor, ViTUnicomExtractor


onnx_path = "vits16_dino.onnx"
model = ViTExtractor.from_pretrained("vits16_dino")
# Alternatively, for Unicom models gradient checkpointing must be disabled before export:
# model = ViTUnicomExtractor.from_pretrained("vitb16_unicom", use_gradiend_ckpt=False)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224, requires_grad=True)

torch.onnx.export(
    model,
    dummy_input,
    onnx_path,
    export_params=True,
    opset_version=17,
    do_constant_folding=True,
    input_names=["images"],
    output_names=["output"],
    dynamic_axes={"images": {0: "batch_size"}, "output": {0: "batch_size"}},
)

onnx_model = onnx.load(onnx_path)
onnx.checker.check_model(onnx_model)

Note: compare the output of the exported ONNX model against the original PyTorch model's output to make sure the difference is negligible.
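
A minimal sketch of such a check, assuming onnxruntime is installed (it is not used elsewhere in this issue) and reusing model, dummy_input and onnx_path from the example above:

import numpy as np
import onnxruntime as ort

# Reference output from the original PyTorch model
with torch.no_grad():
    torch_out = model(dummy_input).cpu().numpy()

# Output of the exported graph, run through ONNX Runtime
session = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
(onnx_out,) = session.run(None, {"images": dummy_input.detach().cpu().numpy()})

# Tolerances are a judgment call; tighten or relax them for your model
np.testing.assert_allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-5)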

Below is a table showing ONNX export support for the architectures in the model ZOO. Use it to check whether a specific model can be exported and which error, if any, is raised; a sketch of how such a check can be scripted follows the notes below.

| Extractor | Arch | Export Support | Error/Comments |
|---|---|---|---|
| ViTExtractor | vits8 | Yes | None |
| ViTExtractor | vits16 | Yes | None |
| ViTExtractor | vitb8 | Yes | None |
| ViTExtractor | vitb16 | Yes | None |
| ViTExtractor | vits14 | Yes | None |
| ViTExtractor | vitb14 | Yes | None |
| ViTExtractor | vitl14 | Yes | None |
| ViTExtractor | vits14_reg | No | Exporting the operator 'aten::_upsample_bicubic2d_aa' to ONNX opset version 17 is not supported |
| ViTExtractor | vitb14_reg | No | Exporting the operator 'aten::_upsample_bicubic2d_aa' to ONNX opset version 17 is not supported |
| ViTExtractor | vitl14_reg | No | Exporting the operator 'aten::_upsample_bicubic2d_aa' to ONNX opset version 17 is not supported |
| ViTUnicomExtractor | vitb32_unicom | Yes | use_gradiend_ckpt=False in init |
| ViTUnicomExtractor | vitb16_unicom | Yes | use_gradiend_ckpt=False in init |
| ViTUnicomExtractor | vitl14_unicom | Yes | use_gradiend_ckpt=False in init |
| ViTUnicomExtractor | vitl14_336px_unicom | Yes | use_gradiend_ckpt=False in init |
| ViTCLIPExtractor | vitb16_224 | Yes | None |
| ViTCLIPExtractor | vitb32_224 | Yes | None |
| ViTCLIPExtractor | vitl14_224 | Yes | None |
| ViTCLIPExtractor | vitl14_336 | No | The size of tensor a (577) must match the size of tensor b (257) at non-singleton dimension 1 |
| ResnetExtractor | resnet18 | Yes | None |
| ResnetExtractor | resnet34 | Yes | None |
| ResnetExtractor | resnet50 | Yes | None |
| ResnetExtractor | resnet50_projector | Yes | None |
| ResnetExtractor | resnet101 | Yes | None |
| ResnetExtractor | resnet152 | Yes | None |

Note:

  • There is a bug in ViTCLIPExtractor("vitl14_336") that causes an inference error.
  • ViTUnicomExtractor models are exportable only with use_gradiend_ckpt=False set at init.
  • dinov2 models with registers (the *_reg archs) are not yet supported.
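
For reference, a sketch of how such a table can be assembled: attempt the export for each ZOO entry with the same settings as above and record any exception. Only the weight name confirmed by the example above is filled in; the rest of the list is left as a placeholder to be completed from the ZOO:

import torch
from oml.models import ViTExtractor

export_status = {}
for extractor_cls, weights in [
    (ViTExtractor, "vits16_dino"),  # name confirmed by the example above
    # add the remaining ViTExtractor, ViTUnicomExtractor, ViTCLIPExtractor and
    # ResnetExtractor ZOO names here
]:
    try:
        m = extractor_cls.from_pretrained(weights).eval()
        torch.onnx.export(
            m,
            torch.randn(1, 3, 224, 224),
            f"{weights}.onnx",
            export_params=True,
            opset_version=17,
            do_constant_folding=True,
            input_names=["images"],
            output_names=["output"],
            dynamic_axes={"images": {0: "batch_size"}, "output": {0: "batch_size"}},
        )
        export_status[weights] = "Yes"
    except Exception as exc:  # keep the message for the Error/Comments column
        export_status[weights] = f"No: {exc}"

print(export_status)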
AlekseySh added the documentation label Jun 11, 2024
@AlekseySh

Thank you for the detailed report, @Fissium!


korotaS commented Jul 4, 2024

Hi!
The errors with ViTUnicomExtractor are fixed in #614, which should be merged soon. For the export to work, though, it is important to call model.eval() before exporting the model.

Also, I believe the error with ViTCLIPExtractor is due to the input size: you used a 224x224 input for all models, but ViTCLIPExtractor[vitl14_336] (and also ViTUnicomExtractor[vitl14_336px_unicom]) expects a 336x336 input.
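
If so, a sketch for that model might look like the following, assuming ViTCLIPExtractor is importable from oml.models like the other extractors; the weight name passed to from_pretrained is taken from the table above, so substitute the exact ZOO name if it differs:

import torch
from oml.models import ViTCLIPExtractor

# Same export arguments as in the example above, just with a 336x336 dummy input
model_336 = ViTCLIPExtractor.from_pretrained("vitl14_336")  # weight name from the table above
model_336.eval()

dummy_input_336 = torch.randn(1, 3, 336, 336)

torch.onnx.export(
    model_336,
    dummy_input_336,
    "vitl14_336_clip.onnx",
    export_params=True,
    opset_version=17,
    do_constant_folding=True,
    input_names=["images"],
    output_names=["output"],
    dynamic_axes={"images": {0: "batch_size"}, "output": {0: "batch_size"}},
)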
