This is a PyTorch export limitation. One idea we had for relaxing the restriction: export the model twice with two different batch sizes, then look at which dimensions changed between the two exports and generalize your ONNX model accordingly, marking those dimensions as symbolic. This code doesn't exist yet, but it seems reasonably likely to work. One potential difficulty is that symbolic batch sizes, while supported by the ONNX spec, are not very well tested.
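A minimal sketch of what that generalization pass might look like, assuming the two exports differ only in batch-dependent dimensions. `generalize_batch_dim` is a hypothetical helper, not an existing API, and it only rewrites the graph's declared input/output shapes; shape constants baked into ops like `Reshape` would need separate handling:

```python
import onnx

def generalize_batch_dim(path_a, path_b, out_path, sym_name="batch"):
    """Hypothetical helper: compare two ONNX exports of the same model
    made with different batch sizes, and replace every input/output
    dimension that differs between them with a symbolic name."""
    model_a = onnx.load(path_a)
    model_b = onnx.load(path_b)
    values_a = list(model_a.graph.input) + list(model_a.graph.output)
    values_b = list(model_b.graph.input) + list(model_b.graph.output)
    for va, vb in zip(values_a, values_b):
        dims_a = va.type.tensor_type.shape.dim
        dims_b = vb.type.tensor_type.shape.dim
        for da, db in zip(dims_a, dims_b):
            if da.dim_value != db.dim_value:
                # Assume any dimension that changed tracks the batch size.
                # dim_value/dim_param form a protobuf oneof, so setting
                # dim_param makes the dimension symbolic.
                da.dim_param = sym_name
    onnx.save(model_a, out_path)

# Usage: export the same model with, say, batch sizes 1 and 2, then:
# generalize_batch_dim("model_bs1.onnx", "model_bs2.onnx", "model_dyn.onnx")
```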
Sorry, I still don't really get what you mean by "export the model twice with two different batch sizes". Would I need to create two different protos with different batch sizes? Wouldn't that still result in a static batch size? Do you mind elaborating?
Based on this tutorial, http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html, we need to specify the batch_size when exporting the model from PyTorch to ONNX. In some cases we need a dynamic batch_size at inference time; do you have any advice on how we can do this?
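For reference, this is roughly the export step the tutorial describes; a minimal sketch, using a toy stand-in for the tutorial's SuperResolutionNet. The batch size is fixed by the shape of the dummy input passed to the exporter:

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the tutorial's SuperResolutionNet.
model = nn.Conv2d(1, 1, kernel_size=3, padding=1)

# The first dimension of the dummy input is the batch size; it is recorded
# as a concrete value in the exported graph, which is why the resulting
# ONNX model is tied to a single, static batch size.
batch_size = 1
dummy_input = torch.randn(batch_size, 1, 224, 224)
torch.onnx.export(model, dummy_input, "super_resolution.onnx",
                  export_params=True)
```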