
Running peoplenet with detectNet on jetPack6 #1882

Open
AkshatJain-TerraFirma opened this issue Aug 2, 2024 · 3 comments
AkshatJain-TerraFirma commented Aug 2, 2024

Hello @dusty-nv

I downloaded PeopleNet directly from https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet. These are the contents of the downloaded folder:
labels.txt nvinfer_config.txt resnet34_peoplenet_int8.txt resnet34_peoplenet.onnx status.json

When I run the following script:

net = jetson_inference.detectNet(model="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx",
                                 labels="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/labels.txt",
                                 input_blob="input_0", output_cvg="scores", output_bbox="boxes",
                                 threshold=0.8)

I get the following error:

[TRT] 4: [network.cpp::validate::3162] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
[TRT] device GPU, failed to build CUDA engine
[TRT] device GPU, failed to load /home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx
[TRT] detectNet -- failed to initialize.
Traceback (most recent call last):
File "/home/akshat/terrafirma/v2/operator_station/vehicle_control/detect.py", line 12, in
net = jetson_inference.detectNet(model="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx", labels="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/labels.txt",
Exception: jetson.inference -- detectNet failed to load network

Is this an issue with the parameters passed into the detectNet method, or does the model need to be converted to a .engine format? Do I manually have to run tao-converter on the .onnx file?

(I am a complete beginner, so sorry if these questions are silly.)

AkshatJain-TerraFirma commented Aug 2, 2024

I converted it to a .engine file and got the following error:

3: Cannot find binding of given name: input_0
[TRT] failed to find requested input layer input_0 in network
[TRT] device GPU, failed to create resources for CUDA engine
[TRT] failed to load /home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.engine
[TRT] detectNet -- failed to initialize.

The command I ran to convert it to a .engine file:
/usr/src/tensorrt/bin/trtexec --onnx=/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx --saveEngine=resnet34_peoplenet.engine --fp16
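The "Cannot find binding of given name: input_0" error means the engine's actual I/O tensor names differ from the `input_blob`/`output_cvg`/`output_bbox` values passed to detectNet. A hypothetical sketch of listing the names inside a serialized engine with the `tensorrt` Python package that ships with JetPack (untested here; the function name is mine):

```python
# Hypothetical sketch: list the I/O tensor names in a serialized TensorRT engine,
# so detectNet's input_blob / output_cvg / output_bbox can be set to the real
# binding names. Requires the `tensorrt` package installed by JetPack on a Jetson.
def engine_io_names(engine_path):
    import tensorrt as trt  # available on Jetson via JetPack
    runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
    with open(engine_path, "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    # TensorRT 8.5+ I/O tensor API
    return [engine.get_tensor_name(i) for i in range(engine.num_io_tensors)]
```

TAO detectnet_v2 exports typically use names along the lines of `input_1:0`, `output_cov/Sigmoid:0`, and `output_bbox/BiasAdd:0`, but verify against what this reports rather than trusting those assumptions.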

@AkshatJain-TerraFirma

I fixed that error by updating the input_0 and output bindings, but detection does not work: I get the warning below on every frame, and none of the labels are detected.

[TRT]    The execute() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use executeV2() instead.
[TRT]    Also, the batchSize argument passed into this function has no effect on changing the input shapes. Please use setBindingDimensions() function to change input shapes instead.

dusty-nv commented Aug 3, 2024

Hi @AkshatJain-TerraFirma, you may need to change this part of jetson-inference/c/detectNet.cpp:

else if( IsModelType(MODEL_ONNX) )

It expects that detection ONNX models are made from the pytorch-ssd training scripts in the repo. Meanwhile, the TAO PeopleNet models normally fall under the MODEL_ENGINE category here:

else if( IsModelType(MODEL_ENGINE) )

So you may need to change that if you are using a different ONNX. For the TAO models, this script uses tao-converter to build the TRT engine, which jetson-inference can then load (but, as mentioned in the other issue, I had not tried that on JetPack 6):

function tao_to_trt()
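Putting the pieces together, the Python side might then look like the sketch below, which loads a prebuilt .engine through detectNet with TAO-style binding names. This is a hypothetical, untested sketch: the function name is mine, and the three binding names are assumptions taken from typical TAO detectnet_v2 exports, so verify them against your engine first.

```python
# Hypothetical sketch: load a prebuilt TensorRT engine with detectNet using
# TAO-style binding names. All names below are assumptions -- verify them.
def load_peoplenet(engine_path, labels_path, threshold=0.5):
    import jetson_inference  # only present on a Jetson with jetson-inference installed
    return jetson_inference.detectNet(
        model=engine_path,
        labels=labels_path,
        input_blob="input_1:0",               # assumed input binding name
        output_cvg="output_cov/Sigmoid:0",    # assumed coverage/scores output
        output_bbox="output_bbox/BiasAdd:0",  # assumed bounding-box output
        threshold=threshold,
    )
```

Note that even with the right binding names, the MODEL_ONNX vs. MODEL_ENGINE pre/post-processing paths in detectNet.cpp discussed above still determine whether the detections are decoded correctly.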
