
Error when running Deepstream and Tensorrt 10.x #563

Open
OnceUponATimeMathley opened this issue Aug 26, 2024 · 3 comments

Comments

@OnceUponATimeMathley

I want to upgrade to TensorRT 10.x so that INT64 is supported natively instead of being auto-cast to INT32.
[Screenshot 2024-08-25 at 15 06 24]

But after upgrading to TensorRT 10.x, the ONNX model can no longer be auto-converted to an engine.
[Screenshot 2024-08-26 at 14 17 05]

I tried exporting the engine from ONNX with trtexec in the same environment I built for running DeepStream:
/usr/src/tensorrt/bin/trtexec --onnx=yolov8_warehouse_7_class.onnx --saveEngine=yolov8_warehouse_7_class.engine

But this error occurred:
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngineEx::113] Error Code 4: Internal Error (Cannot deserialize engine with lean runtime since IRuntime::getEngineHostCodeAllowed() is false.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1533 Deserialize engine failed from file: /ds_app/models/infer/engine/yolov8_warehouse_7_class.engine
ERROR: [TRT]: ModelImporter.cpp:949: While parsing node number 0 [Conv -> "/0/model.0/conv/Conv_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:950: --- Begin node ---
input: "input"
input: "0.model.0.conv.weight"
input: "0.model.0.conv.bias"
output: "/0/model.0/conv/Conv_output_0"
name: "/0/model.0/conv/Conv"
op_type: "Conv"
attribute {
name: "dilations"
ints: 1
ints: 1
type: INTS
}
attribute {
name: "group"
i: 1
type: INT
}
attribute {
name: "kernel_shape"
ints: 3
ints: 3
type: INTS
}
attribute {
name: "pads"
ints: 1
ints: 1
ints: 1
ints: 1
type: INTS
}
attribute {
name: "strides"
ints: 2
ints: 2
type: INTS
}

ERROR: [TRT]: ModelImporter.cpp:951: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:954: ERROR: onnxOpImporters.cpp:775 In function importConv:
[8] Assertion failed: (nbSpatialDims == kernelWeights.shape.nbDims - 2): The number of spatial dimensions and the kernel shape doesn't match up for the Conv operator. Number of spatial dimensions = 5, number of kernel dimensions = 4.

Could not parse the ONNX model
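For reference, the assertion that fails in importConv is a consistency check between the input tensor's rank and the Conv weight tensor's rank. A minimal sketch of that check in plain Python (the helper name is hypothetical; this is not TensorRT code):

```python
def conv_dims_ok(nb_spatial_dims: int, kernel_nb_dims: int) -> bool:
    # Conv weights are laid out as (out_ch, in_ch, *spatial), so the
    # kernel contributes kernel_nb_dims - 2 spatial dimensions. The
    # importer requires that to match the spatial dims of the input.
    return nb_spatial_dims == kernel_nb_dims - 2

# A normal 2-D conv (NCHW input, OIHW weights) passes the check:
print(conv_dims_ok(2, 4))   # True

# The failing case from the log above (5 spatial dims vs 4 kernel dims)
# does not, which suggests the network input's shape was not imported
# correctly before this node was parsed:
print(conv_dims_ok(5, 4))   # False
```

Since a YOLOv8 Conv node with 3x3 OIHW weights can never legitimately have 5 spatial dimensions, the parse failure points at a problem upstream of this node rather than at the Conv itself.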

@marcoslucianops
Owner

As far as I know, DeepStream doesn't support other versions of TensorRT, only the version it was compiled against.

@OnceUponATimeMathley
Author

OnceUponATimeMathley commented Aug 27, 2024

> As far as I know, DeepStream doesn't support other versions of TensorRT, only the version it was compiled against.

Can we add support for TensorRT 10.x, or is it as you said above: only the version it was compiled against? So DeepStream can currently only run with TensorRT 8.x?

Another question: I rebuilt nvdsinfer_custom_impl_Yolo with
make clean && make -j4
but these errors occurred:

ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReaderInitCommon::46] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 236, Serialized Engine Version: 239)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1540 Deserialize engine failed from file: /ds_app/models/infer/engine/yolov8_warehouse_7_class.engine
ERROR: [TRT]: 3: [network.cpp::addInput::1695] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/network.cpp::addInput::1695, condition: isValidDims(dims, hasImplicitBatchDimension())
)
ERROR: [TRT]: ModelImporter.cpp:954: ERROR: input:376 In function importInput:
[8] Assertion failed: *tensor && "Failed to add input to the network."

Could not parse the ONNX model
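The "Version tag does not match" error means the .engine file was serialized by a newer TensorRT (tag 239) than the runtime trying to load it (tag 236); serialized engines are not portable across TensorRT versions. A small sketch (hypothetical helper) that pulls the two tags out of such a log line:

```python
import re

def parse_version_mismatch(log_line: str):
    """Extract the runtime ("Current") and engine ("Serialized Engine")
    serialization tags from a TensorRT stdArchiveReader error line.
    Returns (runtime_tag, engine_tag, tags_match) or None."""
    m = re.search(r"Current Version: (\d+), Serialized Engine Version: (\d+)",
                  log_line)
    if not m:
        return None
    runtime_tag, engine_tag = int(m.group(1)), int(m.group(2))
    return runtime_tag, engine_tag, runtime_tag == engine_tag

line = ("ERROR: [TRT]: 1: Error Code 1: Serialization (Serialization "
        "assertion stdVersionRead == serializationVersion failed."
        "Version tag does not match. Note: Current Version: 236, "
        "Serialized Engine Version: 239)")

print(parse_version_mismatch(line))  # (236, 239, False)
```

The fix is not to patch the version tag but to rebuild the engine from the ONNX file with the same TensorRT release that DeepStream links against.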

I see the current serialization version is 236. How can we update it to 237, 239, or whatever matches a different TensorRT version?
Does it depend on the sources NVIDIA releases under /opt/nvidia/deepstream/deepstream/sources/ ?

Thanks @marcoslucianops

My environment is DeepStream 7.0 on the x86 platform:
Ubuntu 22.04
CUDA 12.2 Update 2
TensorRT 8.6 GA (8.6.1.6)
NVIDIA Driver 535 (>= 535.161.08)
NVIDIA DeepStream SDK 7.0
GStreamer 1.20.3
DeepStream-Yolo

@marcoslucianops
Copy link
Owner

TRT 10.3 is the default for DeepStream 7.1. I will add support for it this week. The TRT version for DeepStream depends on the NVIDIA release, so we should use exactly the same CUDA/TRT versions they use for each DeepStream version.
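The pairing described above can be captured in a small lookup. Only the two entries mentioned in this thread are included (DeepStream 7.0 → TRT 8.6 GA, DeepStream 7.1 → TRT 10.3); anything else would be an assumption:

```python
# TensorRT version each DeepStream release is compiled against,
# per this thread. Engines must be built with exactly this version.
DEEPSTREAM_TRT = {
    "7.0": "8.6",
    "7.1": "10.3",
}

def required_trt(deepstream_version: str) -> str:
    # Raises KeyError for releases not covered here.
    return DEEPSTREAM_TRT[deepstream_version]

print(required_trt("7.0"))  # 8.6
print(required_trt("7.1"))  # 10.3
```

This also explains both errors in the thread: a TRT 10.x engine (tag 239) cannot be loaded by the TRT 8.6 runtime (tag 236) shipped with DeepStream 7.0.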
