
[Jetson] No OpKernel was registered to support Op 'TRTEngineOp' #326

Open
Pacifist-99 opened this issue Sep 28, 2022 · 6 comments

Comments

@Pacifist-99

I have been getting this error while running inference on TF-TRT converted models (centernet_hg104_512x512_coco17_tpu-8 and a TF-converted yolov4-tiny). Both models fail with the same error.
Platform: Jetson TX2
OS: Ubuntu 20.01
Python: 3.6.9
TensorFlow: 2.3.0

Can you please help me find a solution for this error?
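For context, the node attrs later quoted in the error (precision_mode="FP16", workspace_size_bytes=522292960, max_cached_engines_count=1) correspond to a standard TF 2.x TF-TRT conversion. A minimal sketch of that workflow is below; the directory paths are hypothetical, and the settings mirror the attrs from the error rather than anything the reporter posted:

```python
def trt_params(precision_mode="FP16", workspace_bytes=522292960):
    """Conversion settings matching the attrs shown in the error message."""
    return {
        "precision_mode": precision_mode,
        "max_workspace_size_bytes": workspace_bytes,
        "maximum_cached_engines": 1,
    }

def convert_saved_model(saved_model_dir, output_dir, **kwargs):
    # TensorFlow is imported lazily so trt_params stays importable without it.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt
    params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(**trt_params(**kwargs))
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=saved_model_dir, conversion_params=params)
    converter.convert()         # replaces supported subgraphs with TRTEngineOp nodes
    converter.save(output_dir)  # the saved model now needs a TF-TRT enabled runtime
```

Note that the saved output embeds TRTEngineOp nodes, so loading it with a TensorFlow build that lacks the TRTEngineOp kernel produces exactly the "No OpKernel was registered" error reported here.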

@DEKHTIARJonathan
Collaborator

It looks like you are using a TensorFlow build that was compiled without TensorRT and TF-TRT support.
Where did you download TensorFlow from?

@DEKHTIARJonathan
Collaborator

@MattConley CC

@DEKHTIARJonathan changed the title to "[TF] No OpKernel was registered to support Op 'TRTEngineOp'" on Sep 28, 2022. The original title was the full error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'TRTEngineOp' used by {{node PartitionedCall/TRTEngineOp_0_2}} with these attrs: [output_shapes=[], workspace_size_bytes=522292960, max_cached_engines_count=1, segment_func=__inference_TRTEngineOp_0_2_native_segment_1517[], segment_funcdef_name="", use_calibration=false, fixed_input_size=true, input_shapes=[[?,416,416,3]], OutT=[DT_FLOAT], _allow_build_at_runtime=true, precision_mode="FP16", static_engine=false, serialized_segment="", cached_engine_batches=[], InT=[DT_FLOAT], calibration_data="", _use_implicit_batch=true] Registered devices: [CPU, XLA_CPU] Registered kernels: device='GPU' [[PartitionedCall/TRTEngineOp_0_2]] [Op:__inference_signature_wrapper_10677]

@DEKHTIARJonathan changed the title from "[TF] No OpKernel was registered to support Op 'TRTEngineOp'" to "[Jetson] No OpKernel was registered to support Op 'TRTEngineOp'" on Sep 28, 2022.
@Pacifist-99
Author

> It looks like you are using a TensorFlow build that was compiled without TensorRT and TF-TRT support. Where did you download TensorFlow from?

I am using TF version 2.3, which was installed from this index:
https://developer.download.nvidia.com/compute/redist/jp/v44 (tensorflow==2.3.0+nv20.09)

@Pacifist-99
Author

> It looks like you are using a TensorFlow build that was compiled without TensorRT and TF-TRT support. Where did you download TensorFlow from?

If you know, can you tell me where I can download TensorFlow 2.3.0 compiled with TensorRT support?

@Pacifist-99
Author

> It looks like you are using a TensorFlow build that was compiled without TensorRT and TF-TRT support. Where did you download TensorFlow from?

I have now compiled TensorFlow with TensorRT, but the issue persists.
Is there any way to check whether the compilation was done correctly?
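One way to sanity-check a build is to inspect `tf.sysconfig.get_build_info()`. A sketch follows; note the `is_tensorrt_build` key is an assumption based on recent TF builds and may be absent on older ones, in which case the helper conservatively reports False:

```python
def built_with_tensorrt(build_info):
    """Interpret the mapping returned by tf.sysconfig.get_build_info().

    The 'is_tensorrt_build' key is an assumption (not present on every
    TF build); when it is missing we report False rather than guess.
    """
    return bool(build_info.get("is_tensorrt_build", False))

if __name__ == "__main__":
    import tensorflow as tf
    info = dict(tf.sysconfig.get_build_info())
    print("CUDA build:    ", info.get("is_cuda_build"))
    print("TensorRT build:", built_with_tensorrt(info))
    # A stricter check: importing the TF-TRT converter fails loudly when the
    # TensorRT libraries the build was linked against cannot be loaded.
    from tensorflow.python.compiler.tensorrt import trt_convert  # noqa: F401
```

If the import at the end raises, the runtime cannot find the TensorRT shared libraries, which would also explain the missing TRTEngineOp kernel at inference time.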

@DEKHTIARJonathan
Collaborator

@MattConley to help
