Build issues with modern libraries #49

Open
Anthony-Jacques opened this issue Mar 18, 2021 · 2 comments

@Anthony-Jacques

Is this repository still maintained / a recommended way to do things?

Having installed the latest JetPack (4.5.1) and other contemporary packages, I notice that this code no longer builds as-is.

There are a couple of calls to OpenCV's cv::imread() that fail because OpenCV 4 removed CV_LOAD_IMAGE_COLOR in favour of cv::IMREAD_COLOR, and the call to IUffParser::registerInput fails because it now requires the input dimension order to be specified (I guess nvuffparser::UffInputOrder::kNCHW).
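For what it's worth, the fixes I applied locally look roughly like this (a sketch only; the "input" name and 3x224x224 shape are placeholders from my setup, not necessarily the repository's values):

```cpp
// Sketch of the two fixes, assuming OpenCV 4.x and a TensorRT 7.x UFF parser.
#include <opencv2/imgcodecs.hpp>
#include <NvInfer.h>
#include <NvUffParser.h>

void applyFixes(nvuffparser::IUffParser* parser)
{
    // OpenCV 4 dropped the old C macro, so CV_LOAD_IMAGE_COLOR becomes cv::IMREAD_COLOR.
    cv::Mat img = cv::imread("input.jpg", cv::IMREAD_COLOR);

    // registerInput now takes the input order as an explicit third argument.
    parser->registerInput("input", nvinfer1::Dims3(3, 224, 224),
                          nvuffparser::UffInputOrder::kNCHW);
}
```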

I notice that various other things are marked as deprecated (the use of DimsCHW, for example), so the code should probably be updated to match the latest NVIDIA interfaces.
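For example, I'd expect the deprecated dims type to map over like this (the shape is again a placeholder):

```cpp
// Dims3 is the modern replacement for the deprecated DimsCHW.
nvinfer1::Dims3 inputDims(3, 224, 224);  // was: nvinfer1::DimsCHW(3, 224, 224)
```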

@Anthony-Jacques
Author

I notice that the registerInput call is already fixed in a pending pull request: #40

There isn't an outstanding pull request for the OpenCV changes, though.

As I've yet to make this "work", I'm not going to submit pull requests for my local fixes, especially as I've also hit further issues outside this repository related to TF1-vs-TF2 API breakages in "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py" and elsewhere.

@Anthony-Jacques
Author

I guess the answer to my initial question of "is this still the recommended way to do things?" is "no".

See https://forums.developer.nvidia.com/t/softmax-layer-in-tensorrt7-0-has-wrong-inference-results/112689/2

I got as far as hitting what appears to be the same problem described in that link: my model converted and ran successfully, but returned incorrect results, with a Softmax layer apparently not behaving the way I expected.

I've now switched to using tf2onnx and am able to load the converted model and run inference with TensorRT (the inference speedup relative to TF is very significant).
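For anyone who lands here later, the rough shape of what now works for me: export with something like `python -m tf2onnx.convert --saved-model <dir> --output model.onnx`, then load via TensorRT's ONNX parser. A sketch against the TensorRT 7.x API; the model path and logger are from my own setup, and I've omitted error cleanup:

```cpp
#include <iostream>
#include <NvInfer.h>
#include <NvOnnxParser.h>

// Minimal logger; TensorRT requires one for the builder and parser.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
} gLogger;

int main()
{
    auto builder = nvinfer1::createInferBuilder(gLogger);

    // The ONNX parser requires an explicit-batch network definition.
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);
    auto parser = nvonnxparser::createParser(*network, gLogger);

    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return 1;

    auto config = builder->createBuilderConfig();
    auto engine = builder->buildEngineWithConfig(*network, *config);
    // ... create an execution context, bind buffers, and run inference as usual ...
    return engine ? 0 : 1;
}
```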

Leaving this ticket open, as it seems to me that at some point someone from NVIDIA might want to mark this repository as deprecated/obsolete.
