Hi @Abdellah-Laassairi. We are aware of some minor issues in object detection networks that cause the TensorRT engines to fall back to TensorFlow, giving 1:1 performance between TF-TRT and native TF.
We have resolved many of these issues in our 22.04 container, which will be available later this month!
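One way to confirm whether such a fallback is happening is to check whether any `TRTEngineOp` nodes actually exist in the converted graph. Below is a minimal, hypothetical helper (not part of the TF-TRT API) that takes a list of op-type strings, e.g. collected as `[n.op for n in graph_def.node]` from the converted function's graph:

```python
from collections import Counter


def summarize_conversion(op_types):
    """Summarize how much of a converted graph runs inside TensorRT.

    op_types: iterable of node op-type strings taken from the converted
    graph (e.g. [n.op for n in graph_def.node]). If no 'TRTEngineOp'
    nodes are present, the whole model still runs in native TensorFlow,
    which would explain 1:1 timings between TF-TRT and plain TF.
    """
    counts = Counter(op_types)
    engines = counts.get("TRTEngineOp", 0)
    return {
        "trt_engines": engines,
        "total_nodes": sum(counts.values()),
        "converted": engines > 0,
    }
```

If `trt_engines` is 0, the conversion produced no engines and identical inference times are expected.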
Description
I observed the same inference time with my optimized model as with the baseline (and sometimes slower) when using the TensorFlow-TensorRT (TF-TRT) API.
I've included a set of two tests, on SSD ResNet 640x640 and EfficientDet D0.
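For context, a fair latency comparison needs warmup iterations, since TF-TRT builds its engines lazily on the first calls. A minimal sketch of such a harness in plain Python (the model callables and `batch` input in the usage comment are placeholders, not part of the report):

```python
import statistics
import time


def benchmark(fn, warmup=10, iters=100):
    """Return the median latency of a zero-argument callable, in ms.

    Warmup runs are excluded: with TF-TRT, the first calls trigger
    engine construction, which would inflate the optimized model's
    measured latency if they were timed.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e3)
    return statistics.median(samples)


# Hypothetical usage with your own model callables:
# baseline_ms = benchmark(lambda: baseline_model(batch))
# trt_ms = benchmark(lambda: trt_model(batch))
```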
Environment
TensorRT Version: 8.2.2
GPU Type: Tesla T4
Nvidia Driver Version: 450.51.05
CUDA Version: 11.6
CUDNN Version: 7.0.0
Python Version: 3.8
TensorFlow Version: 2.7.0
Container: nvcr.io/nvidia/tensorflow:22.01-tf2-py3 (build 31081301)
Relevant Files
Models obtained from the TensorFlow Object Detection API Model Zoo
Steps To Reproduce
GitHub repository containing all the notebooks, with results and steps to reproduce