
Simplify select_module after simplify_dyn_ops pass #2714

Merged
merged 13 commits into develop on Feb 16, 2024

Conversation

CharlieL7
Collaborator

  • This pass recalculates the output_dyn_shapes attribute for select_module operators after the rest of simplify_dyn_ops.
  • Recalculating the output shapes is required if the other matchers change the output shapes of the submodules.
    • For example, the Resize operator can output {0, max_int} dynamic dimensions, but for a dynamic-batch model these are simplified to a static shape.
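The recalculation this pass performs can be sketched roughly as follows (hypothetical types and names for illustration, not the actual MIGraphX API): after simplify_dyn_ops rewrites the submodules, the enclosing select_module's output_dyn_shapes can be recomputed as the per-dimension union of the submodules' now possibly static output shapes.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a dynamic dimension: a [min, max] range.
// A dimension is static when min == max.
struct dyn_dim
{
    std::size_t min;
    std::size_t max;
    bool is_static() const { return min == max; }
};

// Recompute a select_module-style output_dyn_shapes attribute from the
// (possibly simplified) output shapes of its submodules: each output
// dimension's range becomes the union of the submodule ranges.
std::vector<dyn_dim>
recompute_output_dyn_shape(const std::vector<std::vector<dyn_dim>>& submodule_outs)
{
    std::vector<dyn_dim> result = submodule_outs.front();
    for(const auto& out : submodule_outs)
    {
        for(std::size_t i = 0; i < result.size(); ++i)
        {
            result[i].min = std::min(result[i].min, out[i].min);
            result[i].max = std::max(result[i].max, out[i].max);
        }
    }
    return result;
}
```

Once a submodule's {0, max_int} output (e.g. from Resize) has been simplified to a static shape, recomputing the attribute this way picks up the tighter ranges.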

@CharlieL7 CharlieL7 self-assigned this Feb 1, 2024

codecov bot commented Feb 1, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (b30a447) 91.51% compared to head (affd0b8) 91.53%.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #2714      +/-   ##
===========================================
+ Coverage    91.51%   91.53%   +0.02%     
===========================================
  Files          468      468              
  Lines        17599    17659      +60     
===========================================
+ Hits         16105    16165      +60     
  Misses        1494     1494              


@pfultz2
Collaborator

pfultz2 commented Feb 1, 2024

Can't we update compute_shape in select_module to compute the correct dynamic shape output?

@CharlieL7
Collaborator Author

Can't we update compute_shape in select_module to compute the correct dynamic shape output?

We could if we use shape.compute_shape(shape_inputs, module_inputs). We would have to change select_module to remove the output_dyn_shapes attribute and make the related changes to the tests and IR.
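A rough sketch of the alternative under discussion (hypothetical names, not the real select_module implementation): rather than consulting a cached output_dyn_shapes attribute, compute_shape could pick the submodule whose parameter ranges contain the actual input dimensions and take the output shape from that submodule directly.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical [min, max] range for one dimension (static when min == max).
struct dyn_dim
{
    std::size_t min;
    std::size_t max;
};

// Sketch of deriving the output shape inside compute_shape itself: find
// the first submodule whose dynamic parameter ranges contain the actual
// input dimensions, returning its index (-1 if none matches). The caller
// would then use that submodule's output shape instead of a stored
// output_dyn_shapes attribute.
int select_submodule(const std::vector<std::vector<dyn_dim>>& param_ranges,
                     const std::vector<std::size_t>& input_dims)
{
    for(std::size_t m = 0; m < param_ranges.size(); ++m)
    {
        bool fits = true;
        for(std::size_t i = 0; i < input_dims.size(); ++i)
            fits = fits && param_ranges[m][i].min <= input_dims[i] &&
                   input_dims[i] <= param_ranges[m][i].max;
        if(fits)
            return static_cast<int>(m);
    }
    return -1;
}
```

The trade-off noted above: this removes the need to keep the attribute in sync (which is what the new pass does), at the cost of touching select_module itself plus its tests and serialized IR.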

@migraphx-bot
Collaborator

migraphx-bot commented Feb 1, 2024

Test    Batch    Rate new (affd0b)    Rate old (8e1705)    Diff
torchvision-resnet50 64 2,837.70 2,839.27 -0.06%
torchvision-resnet50_fp16 64 6,515.81 6,515.42 0.01%
torchvision-densenet121 32 2,103.59 2,085.14 0.88%
torchvision-densenet121_fp16 32 3,696.81 3,692.28 0.12%
torchvision-inceptionv3 32 1,599.93 1,599.62 0.02%
torchvision-inceptionv3_fp16 32 2,573.36 2,571.11 0.09%
cadene-inceptionv4 16 724.19 724.04 0.02%
cadene-resnext64x4 16 692.84 690.44 0.35%
slim-mobilenet 64 6,881.67 6,900.95 -0.28%
slim-nasnetalarge 64 177.05 177.02 0.01%
slim-resnet50v2 64 2,665.23 2,666.38 -0.04%
bert-mrpc-onnx 8 825.98 826.02 -0.01%
bert-mrpc-tf 1 380.92 381.39 -0.12%
pytorch-examples-wlang-gru 1 240.20 260.12 -7.66% 🔴
pytorch-examples-wlang-lstm 1 244.86 242.40 1.01%
torchvision-resnet50_1 1 608.65 612.15 -0.57%
cadene-dpn92_1 1 393.87 393.25 0.16%
cadene-resnext101_1 1 333.52 333.56 -0.01%
onnx-taau-downsample 1 306.17 306.50 -0.11%
dlrm-criteoterabyte 1 21.57 21.57 0.02%
dlrm-criteoterabyte_fp16 1 40.65 40.70 -0.11%
agentmodel 1 4,025.44 4,368.82 -7.86% 🔴
unet_fp16 2 55.96 55.88 0.14%
resnet50v1_fp16 1 859.56 879.94 -2.32%
resnet50v1_int8 1 800.89 804.54 -0.45%
bert_base_cased_fp16 64 936.35 936.58 -0.02%
bert_large_uncased_fp16 32 292.65 292.69 -0.02%
bert_large_fp16 1 184.15 184.27 -0.06%
distilgpt2_fp16 16 1,642.97 1,642.05 0.06%
yolov5s 1 490.69 495.36 -0.94%
tinyllama 1 32.64 32.66 -0.06%
vicuna-fastchat 1 157.25 159.46 -1.39%
whisper-tiny-encoder 1 335.86 335.70 0.05%
whisper-tiny-decoder 1 373.55 374.96 -0.38%

This build is not recommended to merge 🔴

@migraphx-bot
Collaborator

migraphx-bot commented Feb 1, 2024


     ✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance

     ✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance

     ✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance

     ✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance

     ✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance

     ✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance

     ✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance

     ✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance

     ✅ agentmodel: PASSED: MIGraphX meets tolerance

     ✅ unet: PASSED: MIGraphX meets tolerance

     ✅ resnet50v1: PASSED: MIGraphX meets tolerance

     ✅ bert_base_cased_fp16: PASSED: MIGraphX meets tolerance

     🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

     ✅ bert_large: PASSED: MIGraphX meets tolerance

     ✅ yolov5s: PASSED: MIGraphX meets tolerance

     ✅ tinyllama: PASSED: MIGraphX meets tolerance

     ✅ vicuna-fastchat: PASSED: MIGraphX meets tolerance

     ✅ whisper-tiny-encoder: PASSED: MIGraphX meets tolerance

     ✅ whisper-tiny-decoder: PASSED: MIGraphX meets tolerance

     ✅ distilgpt2_fp16: PASSED: MIGraphX meets tolerance

test/simplify_dyn_ops_test.cpp — review thread resolved
src/simplify_dyn_ops.cpp — 2 review threads resolved (outdated)
@CharlieL7
Collaborator Author

Keeping this PR for now and deferring the pass_manager refactor to a later date. Tracking the pass_manager refactor as issue #2714.

@CharlieL7 CharlieL7 requested a review from bpickrel February 9, 2024 17:26
@CharlieL7 CharlieL7 added and then removed the simple (small or simple changes) label Feb 15, 2024
@CharlieL7 CharlieL7 requested a review from causten as a code owner February 16, 2024 18:20
@causten causten merged commit e1f75de into develop Feb 16, 2024
19 checks passed
@causten causten deleted the simplify_select_module branch February 16, 2024 22:14