
Fix MLIR input fusion non-std shapes from squeeze, flatten and unsqueeze #2313

Merged
merged 6 commits into from
Oct 12, 2023

Conversation

manupak
Contributor

@manupak manupak commented Oct 11, 2023

Currently, we see MLIR partition candidates receiving non-standard shapes because squeeze, flatten and unsqueeze ops are not fused in. As far as the MLIR backend is concerned, these ops can be canonicalized to reshape without introducing additional ops.
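To illustrate why this canonicalization is safe, here is a minimal sketch (not MIGraphX code; the helper names are hypothetical) showing that squeeze, unsqueeze and flatten are each fully determined by their output shape, so all three can be replaced by a single reshape to that shape:

```python
# Hypothetical sketch: each op below only needs to compute its output
# shape; the data layout is untouched, so a reshape to that shape is
# equivalent.

def squeeze_shape(shape, axes=None):
    """Drop size-1 dims (all of them, or only those listed in axes)."""
    if axes is None:
        return [d for d in shape if d != 1]
    axes = {a % len(shape) for a in axes}
    return [d for i, d in enumerate(shape) if i not in axes]

def unsqueeze_shape(shape, axes):
    """Insert size-1 dims at the given positions of the output shape."""
    out_rank = len(shape) + len(axes)
    axes = {a % out_rank for a in axes}
    out, dims = [], iter(shape)
    for i in range(out_rank):
        out.append(1 if i in axes else next(dims))
    return out

def flatten_shape(shape, axis=1):
    """Collapse dims into a 2-D shape, ONNX-Flatten style."""
    outer = inner = 1
    for d in shape[:axis]:
        outer *= d
    for d in shape[axis:]:
        inner *= d
    return [outer, inner]
```

For example, `squeeze_shape([1, 3, 1, 4])` yields `[3, 4]`, `unsqueeze_shape([3, 4], [0, 2])` yields `[1, 3, 1, 4]`, and `flatten_shape([2, 3, 4])` yields `[2, 12]`; in each case a reshape to the computed shape reproduces the op.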

@codecov

codecov bot commented Oct 11, 2023

Codecov Report

Merging #2313 (833d28b) into develop (50c5984) will not change coverage.
Report is 2 commits behind head on develop.
The diff coverage is n/a.

❗ Current head 833d28b differs from pull request most recent head dc81fe4. Consider uploading reports for the commit dc81fe4 to get more accurate results

@@           Coverage Diff            @@
##           develop    #2313   +/-   ##
========================================
  Coverage    91.45%   91.45%           
========================================
  Files          433      433           
  Lines        16174    16174           
========================================
  Hits         14792    14792           
  Misses        1382     1382           

@migraphx-bot
Collaborator

Test | Batch | Rate new (be4658) | Rate old (c58e7d) | Diff
torchvision-resnet50 64 2,323.67 2,324.91 -0.05%
torchvision-resnet50_fp16 64 5,351.30 5,358.20 -0.13%
torchvision-densenet121 32 1,834.15 1,847.11 -0.70%
torchvision-densenet121_fp16 32 3,402.93 3,418.55 -0.46%
torchvision-inceptionv3 32 1,297.28 1,294.11 0.24%
torchvision-inceptionv3_fp16 32 2,526.91 2,526.65 0.01%
cadene-inceptionv4 16 620.16 620.65 -0.08%
cadene-resnext64x4 16 589.13 588.39 0.13%
slim-mobilenet 64 7,208.27 7,222.03 -0.19%
slim-nasnetalarge 64 236.28 236.42 -0.06%
slim-resnet50v2 64 2,556.46 2,557.46 -0.04%
bert-mrpc-onnx 8 825.45 825.89 -0.05%
bert-mrpc-tf 1 389.89 388.50 0.36%
pytorch-examples-wlang-gru 1 301.46 295.28 2.09%
pytorch-examples-wlang-lstm 1 314.85 313.61 0.40%
torchvision-resnet50_1 1 551.31 549.68 0.30%
torchvision-inceptionv3_1 1 302.64 302.67 -0.01%
cadene-dpn92_1 1 351.18 353.79 -0.74%
cadene-resnext101_1 1 219.93 220.50 -0.26%
slim-vgg16_1 1 223.78 223.86 -0.04%
slim-mobilenet_1 1 1,517.03 1,473.02 2.99%
slim-inceptionv4_1 1 217.76 216.23 0.71%
onnx-taau-downsample 1 306.25 307.08 -0.27%
dlrm-criteoterabyte 1 21.68 21.67 0.02%
dlrm-criteoterabyte_fp16 1 40.71 40.73 -0.07%
agentmodel 1 5,833.18 5,882.66 -0.84%
unet_fp16 2 55.20 55.24 -0.08%
resnet50v1_fp16 1 769.76 756.89 1.70%
bert_base_cased_fp16 64 971.00 971.21 -0.02%
bert_large_uncased_fp16 32 305.02 305.13 -0.04%
bert_large_fp16 1 167.10 166.84 0.16%
distilgpt2_fp16 16 1,279.63 1,279.91 -0.02%

This build is OK for merge ✅

@migraphx-bot
Collaborator


✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance

✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance

✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance

✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance

✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance

✅ torchvision-inceptionv3_1: PASSED: MIGraphX meets tolerance

✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance

✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance

✅ slim-vgg16_1: PASSED: MIGraphX meets tolerance

✅ slim-mobilenet_1: PASSED: MIGraphX meets tolerance

✅ slim-inceptionv4_1: PASSED: MIGraphX meets tolerance

✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance

✅ agentmodel: PASSED: MIGraphX meets tolerance

✅ unet: PASSED: MIGraphX meets tolerance

✅ resnet50v1: PASSED: MIGraphX meets tolerance

🔴 bert_base_cased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

✅ bert_large: PASSED: MIGraphX meets tolerance

🔴 distilgpt2_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

Contributor

@krzysz00 krzysz00 left a comment


This seems fine to me, but I want to confirm that squeeze, unsqueeze, and flatten can all be replaced by reshape correctly like this

Also, can you add a test?


@giuseros giuseros left a comment


@manupak
Contributor Author

manupak commented Oct 12, 2023

This seems fine to me, but I want to confirm that squeeze, unsqueeze, and flatten can all be replaced by reshape correctly like this

Also, can you add a test?

They are special cases of 'reshape'; reshape works all of that out just from the output shape.

I've added tests now.

Collaborator

@pfultz2 pfultz2 left a comment


In the future, it would be better to make these transformations when we are generating MLIR.

@krzysz00
Contributor

Yeah, but that'll be more possible once we have MIGraphX shapes in the MIGraphX dialect

@causten causten merged commit 1a1c1b4 into develop Oct 12, 2023
14 of 15 checks passed
@causten causten deleted the fix-mlir-reshape-like-ops-non-std-shape branch October 12, 2023 19:11