This repository has been archived by the owner on Aug 30, 2018. It is now read-only.

onnx-caffe2 is slower? #152

Open
hyer opened this issue Jan 27, 2018 · 1 comment

Comments


hyer commented Jan 27, 2018

I tried to run this tutorial: Example: End-to-end AlexNet from PyTorch to Caffe2.
However, I found that inference with onnx-caffe2 is about 10x slower than with the original PyTorch model.
Can anyone help? Thanks. If the inference times were comparable, it would be great to deploy PyTorch models using Caffe2.

My Machine:

Ubuntu 14.04
CUDA 8.0
cudnn 7.0.3
Caffe2 latest
Pytorch 0.3.0
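
For an apples-to-apples comparison, both backends should be timed the same way: a few warm-up calls first (to exclude lazy initialization and cuDNN autotuning), then an average over many runs. A minimal timing-harness sketch — the dummy workload is a placeholder for the actual `torch_model(x)` or Caffe2 `workspace.RunNet` call:

```python
import time

def benchmark(fn, warmup=5, runs=50):
    """Average wall-clock latency of `fn`, excluding warm-up calls."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Dummy workload standing in for one model inference call;
# in practice wrap the PyTorch forward pass and the Caffe2 net
# run in separate lambdas and compare the two averages.
avg = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"avg latency: {avg * 1000:.3f} ms")
```

Note that for GPU models a synchronization call (e.g. `torch.cuda.synchronize()`) is needed inside `fn`, otherwise asynchronous kernel launches make the timings meaningless.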

@nebulaf91

I found the same issue. When I convert my model from PyTorch to Caffe2 using onnx-caffe2, the Caffe2 running time is about 30% slower than PyTorch's. My guess is that the ONNX auto-generated net structure is not well optimized, so it ends up more complex than the original PyTorch model definition.
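
One way to sanity-check that hypothesis is to count the operator types in the exported graph and compare them against what the PyTorch model should need. With the `onnx` package the real op list is `[n.op_type for n in onnx.load(path).graph.node]`; the sketch below uses a hypothetical op list in its place so it runs standalone:

```python
from collections import Counter

# Hypothetical operator list standing in for
# [n.op_type for n in onnx.load("alexnet.onnx").graph.node];
# extra Reshape/Transpose-style ops relative to the PyTorch
# definition would support the "more complex graph" guess.
ops = ["Conv", "Relu", "MaxPool", "Conv", "Relu",
       "Reshape", "Gemm", "Relu", "Gemm"]
print(Counter(ops).most_common())
```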
