
Don't insert reshapes when converting pooling to reduce #2149

Merged: 6 commits from rewrite-pooling-no-reshape into develop on Sep 27, 2023

Conversation

pfultz2 (Collaborator) commented Sep 5, 2023

This should improve fusions.
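As a rough sketch of the equivalence a pooling-to-reduce rewrite relies on (an illustration only, not the MIGraphX pass itself; the NCHW layout and the window covering the full spatial extent are assumptions): averaging over the whole HxW window of each channel gives exactly the per-channel means, i.e. a reduce_mean over the spatial axes, so no reshape is needed around the reduction.

#include <cstddef>
#include <iostream>
#include <vector>

// Average-pool an NCHW tensor over its full spatial extent. The {N, C, 1, 1}
// result holds the per-channel means, which is a reduce_mean over axes
// {2, 3}; no reshape is required to express the pooling as a reduction.
std::vector<float> global_avg_pool(const std::vector<float>& data,
                                   std::size_t n, std::size_t c,
                                   std::size_t h, std::size_t w)
{
    std::vector<float> out(n * c, 0.0f);
    for(std::size_t i = 0; i < n * c; ++i)
    {
        for(std::size_t j = 0; j < h * w; ++j)
            out[i] += data[i * h * w + j];
        out[i] /= static_cast<float>(h * w);
    }
    return out;
}

int main()
{
    // 1x2x2x2 tensor: two channels of four values each.
    std::vector<float> data = {1, 2, 3, 4, 10, 20, 30, 40};
    auto out = global_avg_pool(data, 1, 2, 2, 2);
    std::cout << out[0] << " " << out[1] << "\n"; // prints 2.5 25
}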

codecov bot commented Sep 5, 2023

Codecov Report

Merging #2149 (e745b74) into develop (434a06c) will increase coverage by 0.00%.
Report is 3 commits behind head on develop.
The diff coverage is 100.00%.

❗ Current head e745b74 differs from pull request most recent head 7bc17a1. Consider uploading reports for the commit 7bc17a1 to get more accurate results

@@           Coverage Diff            @@
##           develop    #2149   +/-   ##
========================================
  Coverage    91.53%   91.53%           
========================================
  Files          429      429           
  Lines        16011    16009    -2     
========================================
- Hits         14655    14654    -1     
+ Misses        1356     1355    -1     
Files | Coverage | Δ
src/rewrite_pooling.cpp | 95.00% <100.00%> | +2.40% ⬆️
src/simplify_reshapes.cpp | 98.73% <100.00%> | +0.01% ⬆️

umangyadav (Member) commented Sep 5, 2023

Performance tests have compilation errors; can you look into it?

migraphx-bot (Collaborator) commented Sep 14, 2023

Test | Batch | Rate new (ab57a3) | Rate old (340012) | Diff | Compare
torchvision-resnet50 64 2,290.89 2,282.79 0.36%
torchvision-resnet50_fp16 64 5,399.61 5,367.78 0.59%
torchvision-densenet121 32 1,836.44 1,835.07 0.07%
torchvision-densenet121_fp16 32 3,397.62 3,391.29 0.19%
torchvision-inceptionv3 32 1,344.76 1,336.41 0.62%
torchvision-inceptionv3_fp16 32 2,589.94 2,588.08 0.07%
cadene-inceptionv4 16 678.47 678.73 -0.04%
cadene-resnext64x4 16 591.79 589.72 0.35%
slim-mobilenet 64 7,218.75 7,213.88 0.07%
slim-nasnetalarge 64 237.26 237.05 0.09%
slim-resnet50v2 64 2,530.05 2,528.69 0.05%
bert-mrpc-onnx 8 721.77 721.09 0.09%
bert-mrpc-tf 1 388.61 390.79 -0.56%
pytorch-examples-wlang-gru 1 303.06 310.08 -2.27%
pytorch-examples-wlang-lstm 1 306.24 310.72 -1.44%
torchvision-resnet50_1 1 559.45 556.07 0.61%
torchvision-inceptionv3_1 1 304.46 307.56 -1.01%
cadene-dpn92_1 1 351.82 352.44 -0.18%
cadene-resnext101_1 1 221.03 220.50 0.24%
slim-vgg16_1 1 225.07 224.86 0.09%
slim-mobilenet_1 1 1,465.66 1,483.04 -1.17%
slim-inceptionv4_1 1 220.06 221.72 -0.75%
onnx-taau-downsample 1 248.18 322.55 -23.06% 🔴
dlrm-criteoterabyte 1 21.72 21.68 0.17%
dlrm-criteoterabyte_fp16 1 40.64 40.62 0.06%
agentmodel 1 5,797.27 5,836.42 -0.67%
unet_fp16 2 55.10 55.08 0.04%

This build is not recommended to merge 🔴

migraphx-bot (Collaborator) commented Sep 14, 2023


✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance
✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance
✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance
🔴 torchvision-inceptionv3_1: FAILED: MIGraphX is not within tolerance - check verbose output
🔴 cadene-dpn92_1: FAILED: MIGraphX is not within tolerance - check verbose output
✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance
✅ slim-vgg16_1: PASSED: MIGraphX meets tolerance
✅ slim-mobilenet_1: PASSED: MIGraphX meets tolerance
🔴 slim-inceptionv4_1: FAILED: MIGraphX is not within tolerance - check verbose output
✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance
✅ agentmodel: PASSED: MIGraphX meets tolerance
✅ unet: PASSED: MIGraphX meets tolerance

umangyadav (Member) commented:

@pfultz2 accuracy tests are still failing during compile. Can you check?

pfultz2 (Collaborator, Author) commented Sep 20, 2023:

> @pfultz2 accuracy tests are still failing during compile. Can you check?

@umangyadav I pushed a fix.

instruction_ref pooling{};

std::vector<std::int64_t> axes(lens.size() - 2);
std::iota(axes.begin(), axes.end(), 2);
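For reference, the axes computed here are the dimensions after batch and channels, so a channels-first layout is implicitly assumed; a minimal standalone sketch (the concrete shape below is an assumption, not taken from the PR):

#include <cstdint>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    // Hypothetical NCHW shape {N, C, H, W}; only lens.size() matters here.
    std::vector<std::size_t> lens = {1, 64, 7, 7};

    // Same computation as in the diff: one axis per dimension after
    // batch (0) and channels (1), i.e. {2, 3} for a 4-D tensor.
    std::vector<std::int64_t> axes(lens.size() - 2);
    std::iota(axes.begin(), axes.end(), 2);

    for(auto a : axes)
        std::cout << a << " "; // prints: 2 3
    std::cout << "\n";
}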
Review comment (Member): Should this be doing any checks for the layout?

Review comment (Member): @pfultz2 sorry, I missed this comment before approving. Would this require a check on the layout?

@@ -122,6 +122,11 @@ struct find_nop_reshapes
reshapes.insert("pad");
reshapes.insert("slice");
reshapes.insert("transpose");
reshapes.insert("reduce_mean");
Review comment (Member): It would be better to assert that the axes are either empty, resulting in a no-op, or that the axes' lengths are 1.
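A sketch of the check being suggested (a hypothetical helper, not the MIGraphX API): a reduction changes nothing when it has no axes, or when every reduced dimension already has length 1, and only then can it be treated as a no-op reshape.

#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

bool is_nop_reduce(const std::vector<std::size_t>& lens,
                   const std::vector<std::int64_t>& axes)
{
    // No axes: nothing is reduced at all.
    if(axes.empty())
        return true;
    // Otherwise every reduced dimension must already have length 1.
    for(auto axis : axes)
    {
        if(lens.at(static_cast<std::size_t>(axis)) != 1)
            return false;
    }
    return true;
}

int main()
{
    assert(is_nop_reduce({1, 64, 1, 1}, {2, 3}));  // spatial dims already 1
    assert(!is_nop_reduce({1, 64, 7, 7}, {2, 3})); // a real reduction, not a no-op
}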

umangyadav (Member) commented:

> This should improve fusions.

Is it improving reduce fusion? Which of the matchers is helping to do that?

@causten causten requested review from TedThemistokleous and removed request for kahmed10 September 21, 2023 18:46
@TedThemistokleous TedThemistokleous added the enhancement New feature or request label Sep 21, 2023
@causten causten merged commit 7c8f6c2 into develop Sep 27, 2023
14 of 15 checks passed
@causten causten deleted the rewrite-pooling-no-reshape branch September 27, 2023 17:53