VIC Configuration failed image scale factor exceeds 16 #553

Open
a9010028 opened this issue Jul 18, 2024 · 2 comments

a9010028 commented Jul 18, 2024

Hi, I've found an issue that seems specific to the Jetson platform.

==================================================================
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4726: => VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation
1:11:01.608351383 2018863 0xaaaafe3e21e0 WARN nvinfer gstnvinfer.cpp:1477:convert_batch_and_push_to_input_thread: error: NvBufSurfTransform failed with error -3 while converting buffer
1:11:01.608493595 2018863 0xaaaafe3e21e0 WARN nvinfer gstnvinfer.cpp:2420:gst_nvinfer_output_loop: error: Internal data stream error.
1:11:01.608521243 2018863 0xaaaafe3e21e0 WARN nvinfer gstnvinfer.cpp:2420:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
ERROR from element secondary-infer-engine1: NvBufSurfTransform failed with error -3 while converting buffer
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1477): convert_batch_and_push_to_input_thread (): /GstPipeline:ds-lpr-pipeline/GstNvInfer:secondary-infer-engine1

When I use the 416*416 input size, maybe the output can't be aligned to 16.

==================================================================
0:00:10.583964075 2018863 0xaaaafe9d78a0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 2]: deserialized trt engine from :/home/aaeon/ds/weights/tlpr.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x416x416
1 OUTPUT kFLOAT boxes 3549x4
2 OUTPUT kFLOAT scores 3549x1
3 OUTPUT kFLOAT classes 3549x1

How can I fix this problem?
Thank you.

@nkinnaird

The Jetson can't scale by a factor of more than 16 when resizing frames for input into the model. Either the frames need to be resized upstream or the model input size needs to be larger. The former can be done with some kind of capsfilter in the GStreamer pipeline, like:

! video/x-raw(memory:NVMM),width=1280,height=720 !
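For illustration only, a minimal sketch of where such a capsfilter could sit; the file name, resolutions, config paths and surrounding elements below are assumptions, not the exact ds-lpr-pipeline from this issue:

gst-launch-1.0 filesrc location=sample_720p.mp4 ! decodebin ! nvvideoconvert ! \
  'video/x-raw(memory:NVMM),width=1280,height=720' ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvinfer config-file-path=config_infer_secondary.txt ! \
  nvvideoconvert ! nvdsosd ! fakesink sync=false

Note the limit applies to the ratio between source and destination sizes in either direction. Since the error here comes from secondary-infer-engine1, it is likely upscaling small object crops: resizing a ~20 px wide plate crop up to a 416 px model input is already a factor of ~21, over the limit of 16.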

@marcoslucianops
Owner

You can set scaling-compute-hw=1 in the [property] section of the config_infer file.
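For reference, a minimal sketch of where that key sits in the secondary model's config_infer file; the other keys and file names below are placeholders from a typical nvinfer config, not this repo's exact file:

[property]
gpu-id=0
onnx-file=tlpr.onnx
model-engine-file=tlpr.onnx_b1_gpu0_fp16.engine
network-mode=2
# Compute hardware for pre-processing scaling: 0=platform default, 1=GPU, 2=VIC (Jetson only).
# Setting 1 moves the resize to the GPU, so the VIC's 16x scale-factor limit no longer applies.
scaling-compute-hw=1

This trades the VIC offload for some extra GPU load, but avoids the NvBufSurfTransform error -3 shown above.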
