This repository has been archived by the owner on Aug 30, 2018. It is now read-only.
I tried to run this tutorial: Example: End-to-end AlexNet from PyTorch to Caffe2.
However, I found that inference with onnx-caffe2 is about 10x slower than the original PyTorch model.
Can anyone help? Thanks. If the inference times were comparable, it would be great to deploy PyTorch models using Caffe2.
My Machine:
Ubuntu 14.04
CUDA 8.0
cudnn 7.0.3
Caffe2 latest
Pytorch 0.3.0
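A fair comparison needs warmup iterations before timing, since the first few calls include cuDNN autotuning and memory allocation. Below is a minimal timing sketch; the `torch_model` / `prepared_backend` names in the usage comment are assumptions following the tutorial's objects, not verified API:

```python
import time

def mean_runtime(fn, warmup=10, runs=100):
    """Average wall-clock seconds per call of fn(), measured after
    `warmup` untimed calls so one-time setup cost is excluded."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Hypothetical usage against the tutorial's objects (assumed names):
#   pytorch_s = mean_runtime(lambda: torch_model(x))
#   caffe2_s  = mean_runtime(lambda: prepared_backend.run(numpy_x))
if __name__ == "__main__":
    print(mean_runtime(lambda: sum(range(1000))))
```

If the gap persists with warmup, the slowdown is in steady-state execution rather than one-time initialization.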
I ran into the same issue. When I convert my model from PyTorch to Caffe2 using onnx-caffe2, the Caffe2 running time is about 30% slower than PyTorch's. My guess is that the auto-generated ONNX net structure is not well optimized, so it is more complex than the original PyTorch model definition.