Issues: NVIDIA/TensorRT
#4240 · RuntimeError: Failed to parse onnx · [question, triaged] · opened Nov 7, 2024 by wahaha
#4234 · State of affairs for NestedTensor (NJT) inference? · [triaged] · opened Nov 2, 2024 by vadimkantorov
#4233 · Given an engine file, how to know which GPU model it was generated on? · [enhancement, triaged] · opened Nov 1, 2024 by yangdong02
#4232 · How to remove signal and wait layers in the engine? · [triaged] · opened Oct 31, 2024 by lijinghaooo
#4231 · Tensor Parallel and Context Parallel · [enhancement, triaged] · opened Oct 31, 2024 by algorithmconquer
#4230 · Optimize Dynamic Shape Inference for TTS Model with HiFi-GAN Vocoder · [triaged] · opened Oct 30, 2024 by UmerrAhsan
#4229 · Global tensors with dynamic slice using Python · [question, triaged] · opened Oct 30, 2024 by zengrh3
#4226 · How to support referenceNet+unet? · [question, triaged] · opened Oct 29, 2024 by songh11
#4224 · Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueV3::2666, condition: mContext.profileObliviousBindings.at(profileObliviousIndex) || getPtrOrNull(mOutputAllocators, profileObliviousIndex)) · [triaged] · opened Oct 25, 2024 by fgias
#4223 · TensorRT 10.5 Flux-dev torch.float16 precision · [Demo: Diffusion, Precision: FP16, triaged] · opened Oct 25, 2024 by algorithmconquer
#4221 · bf16 convert failed · [triaged] · opened Oct 24, 2024 by cillayue
#4220 · BertQA sample throws segmentation fault (TensorRT 10.3) when running on GPU Jetson Orin Nano · [triaged] · opened Oct 23, 2024 by krishnarajk
#4219 · Can TensorRT calculate the number of Params and FLOPs for the model? · [question, triaged] · opened Oct 23, 2024 by demuxin
#4216 · AttributeError: 'tensorrt_bindings.tensorrt.ICudaEngine' object has no attribute 'num_bindings' · [API: Python, triaged] · opened Oct 21, 2024 by metehanozdeniz
#4215 · TensorRT 10.5 Flux Dit BF16 precision · [Accuracy, Demo: Diffusion, triaged] · opened Oct 21, 2024 by QZH-eng
#4214 · out of memory failure of TensorRT 10.5 when running flux dit on GPU L40S · [Demo: Diffusion, triaged] · opened Oct 21, 2024 by QZH-eng
#4212 · stable diffusion quantization in inpainting task is poor · [Demo: Diffusion, Quantization: PTQ, triaged] · opened Oct 20, 2024 by worhar
#4211 · How to strictly limit the maximum GPU memory usage and clear the GPU memory cache? · [Memory Usage, question, triaged] · opened Oct 19, 2024 by EmmaThompson123
#4210 · "Device to shape host node should not be folded into myelin" failure of TensorRT 10.5 when running trtexec on GPU L4 · [Export: torch.onnx, internal-bug-tracked, triaged] · opened Oct 18, 2024 by sean-xiang-applovin
#4209 · Different versions of TensorRT get different model inference results · [Accuracy, triaged] · opened Oct 18, 2024 by demuxin