Bug Description
The documentation for require_full_compilation states:

require_full_compilation (bool): Require modules to be compiled end to end or return an error, as opposed to returning a hybrid graph where operations that cannot be run in TensorRT are run in PyTorch.

However, when require_full_compilation=True, compilation still generates a hybrid graph instead of returning an error.
To Reproduce
Steps to reproduce the behavior:
Run the following code. Instead of erroring out, it still generates a hybrid graph:
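The original code snippet appears to have been lost in this copy of the issue. A minimal repro sketch (names and structure are assumptions, not the reporter's exact code) would be a module whose forward pass triggers the two fallback ops in the log below, aten._to_copy (via a dtype cast) and aten.copy_ (via an in-place copy), compiled with the Dynamo frontend:

```python
import torch


class ToCopyModule(torch.nn.Module):
    """Hypothetical module that lowers to aten.copy_.default and
    aten._to_copy.default, the ops reported as falling back to Torch."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        buf = torch.empty_like(x)
        buf.copy_(x)                   # traces to aten.copy_.default
        return buf.to(torch.float16)   # traces to aten._to_copy.default


def compile_full(model: torch.nn.Module, example: torch.Tensor):
    # Requires a CUDA GPU and torch_tensorrt installed; illustration only.
    import torch_tensorrt

    return torch_tensorrt.compile(
        model,
        ir="dynamo",
        inputs=[example],
        require_full_compilation=True,  # expected: error; observed: hybrid graph
    )
```

Running compile_full on a CUDA input should, per the documentation, raise an error; on Torch-TensorRT 2.4.0 it instead logs the fallback nodes shown below and returns a hybrid graph.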
The following nodes are currently set to run in Torch:
Node: torch.ops.aten._to_copy.default, with layer location: __/_to_copy
Node: torch.ops.aten.copy_.default, with layer location: copy__default
Note: Some of the above nodes may be supported, but were not included in a TRT graph by the partitioner
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
Torch-TensorRT Version: 2.4.0
PyTorch Version: 2.4.1
CPU Architecture: x86_64
OS: AlmaLinux
How you installed PyTorch: pip
Python version: 3.11
CUDA version: 12.3
GPU models and configuration: RTX4k
Expected behavior
Compilation should error out (or at least warn), as the documentation for require_full_compilation=True describes, instead of silently generating a hybrid graph.