
🐛 [Bug] require_full_compilation=True has no effect #3246

Open
braindevices opened this issue Oct 17, 2024 · 0 comments


braindevices commented Oct 17, 2024

Bug Description

require_full_compilation (bool): Require modules to be compiled end to end or return an error as opposed to returning a hybrid graph where operations that cannot be run in TensorRT are run in PyTorch

But when require_full_compilation=True, it still generates a hybrid graph.

To Reproduce

Steps to reproduce the behavior:

Run the following code; instead of erroring out, it still generates a hybrid graph:

```python
import torch
from torch import nn

class dummy_t(nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, x: torch.Tensor):
        return x.clamp_(0, 1).mul_(255).to(dtype=torch.uint8)

xs = [torch.randn((1, 3, 5, 7)).cuda()]
exported = torch.export.export(
    dummy_t().cuda(),
    args=tuple(xs)
)
exported.module()(*xs)

import torch_tensorrt
trt_fx = torch_tensorrt.dynamo.compile(
    exported,
    assume_dynamic_shape_support=False,
    inputs=tuple(xs),
    use_python_runtime=False,
    enabled_precisions={torch.float32},
    use_fast_partitioner=False,
    # debug=True,
    min_block_size=1,
    require_full_compilation=True
)

for i, m in enumerate(trt_fx.modules()):
    print(i, m, hasattr(m, "engine"))
```
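The loop above flags each submodule by whether it carries an `engine` attribute. The same check can be distilled into a small helper (name hypothetical; assumes, as in the loop, that TRT-compiled leaf submodules expose `engine` while Torch-fallback leaves do not):

```python
def is_hybrid(module) -> bool:
    """Return True if the compiled graph mixes TRT engine leaf
    submodules with plain PyTorch leaf submodules (a 'hybrid graph')."""
    # Only inspect leaves: container modules (including the root
    # GraphModule) never carry an engine themselves.
    leaves = [m for m in module.modules() if not list(m.children())]
    flags = [hasattr(m, "engine") for m in leaves]
    # Hybrid: at least one TRT leaf and at least one Torch-only leaf.
    return any(flags) and not all(flags)
```

With `require_full_compilation=True` honored, `is_hybrid(trt_fx)` would be expected to be `False` (or compilation would never return at all); in the repro it comes back hybrid.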

Expected behavior

Per the documented behavior, compilation should fail with an error; at minimum it should warn:

```
The following nodes are currently set to run in Torch:
Node: torch.ops.aten._to_copy.default, with layer location: __/_to_copy
Node: torch.ops.aten.copy_.default, with layer location: copy__default
Note: Some of the above nodes may be supported, but were not included in a TRT graph by the partitioner
```
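Until the flag is honored, the documented "compile end to end or return an error" semantics can be approximated on the caller side. A minimal sketch (hypothetical helper; assumes, as in the repro loop, that TRT-compiled leaf submodules expose an `engine` attribute):

```python
def assert_fully_compiled(module):
    """Raise RuntimeError if any leaf submodule of the compiled graph
    still runs in PyTorch, i.e. enforce require_full_compilation
    semantics after the fact."""
    torch_only = [
        type(m).__name__
        for m in module.modules()
        if not list(m.children()) and not hasattr(m, "engine")
    ]
    if torch_only:
        raise RuntimeError(
            f"Graph is hybrid; leaf submodules without a TRT engine: {torch_only}"
        )
```

Calling `assert_fully_compiled(trt_fx)` right after `torch_tensorrt.dynamo.compile(...)` turns the silent hybrid graph into the error the flag was supposed to produce.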

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT: 2.4.0
  • PyTorch Version: 2.4.1
  • CPU Architecture: x86_64
  • OS: AlmaLinux
  • How you installed PyTorch: pip
  • Python version: 3.11
  • CUDA version: 12.3
  • GPU models and configuration: RTX4k
@braindevices braindevices added the bug Something isn't working label Oct 17, 2024
@apbose apbose self-assigned this Oct 18, 2024