
Make argument constructor explicit #2346

Merged
merged 6 commits into develop from explicit-shape-constructor on Oct 19, 2023
Conversation

pfultz2
Collaborator

@pfultz2 pfultz2 commented Oct 18, 2023

This implicit conversion was causing the perf issues in #1668: the function only needed a shape but was declared to take an argument, so every call converted the shape to an argument. Making the constructor explicit avoids possible errors like this in the future.
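For illustration, a minimal sketch of the failure mode, using hypothetical simplified stand-ins for migraphx::shape and migraphx::argument rather than the real MIGraphX definitions: a function declared to take an argument is called with a shape, and the non-explicit converting constructor silently allocates a buffer on every call.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical, simplified stand-ins for migraphx::shape and
// migraphx::argument; not the real MIGraphX definitions.
struct shape
{
    std::size_t elements = 0;
    std::size_t bytes() const { return elements * sizeof(float); }
};

struct argument
{
    // Without `explicit`, this converting constructor lets a shape
    // silently become an argument, allocating a buffer each time.
    argument(const shape& s) : data(s.bytes()) {}
    std::vector<char> data;
};

// A function that only needs the shape but was declared to take an
// argument: every call with a shape triggers a hidden allocation.
std::size_t size_in_bytes(const argument& a) { return a.data.size(); }

int main()
{
    shape s{1024};
    // Compiles via the implicit conversion; once the constructor is
    // marked explicit, this line becomes a compile error instead.
    std::size_t n = size_in_bytes(s);
    return n == 1024 * sizeof(float) ? 0 : 1;
}
```

With `explicit argument(const shape& s)`, the call site must spell out `size_in_bytes(argument{s})`, which makes the hidden allocation visible and flags the mismatched parameter type at compile time.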

Member

@umangyadav umangyadav left a comment

looks good.
There is also argument(shape, std::nullptr_t). I wonder whether it needs to be marked explicit or not.

Collaborator

@CharlieL7 CharlieL7 left a comment

looks fine. What was the perf issue?

@TedThemistokleous TedThemistokleous added the bugfix Fixes a bug found in the code. label Oct 18, 2023
@pfultz2
Collaborator Author

pfultz2 commented Oct 18, 2023

looks good.

There is also argument(shape, std::nullptr_t). I wonder whether it needs to be marked explicit or not.

You only need explicit on unary constructors. There is no implicit conversion for binary constructors.
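A minimal sketch of this point, again with hypothetical simplified types: explicit on the unary constructor blocks the implicit shape-to-argument conversion, while the binary constructor never participates in that kind of conversion in the first place.

```cpp
#include <cstddef>

// Hypothetical simplified types, for illustration only.
struct shape
{
    int elements = 0;
};

struct argument
{
    explicit argument(const shape&) {}         // unary: needs explicit
    argument(const shape&, std::nullptr_t) {}  // binary: no implicit conversion path
};

void consume(const argument&) {}

int main()
{
    shape s;
    // consume(s);                  // error: the unary constructor is explicit
    consume(argument{s});           // OK: the conversion is spelled out
    consume(argument{s, nullptr});  // OK: a binary constructor is always named explicitly
    return 0;
}
```

One caveat worth noting: brace-initialization such as `consume({s, nullptr})` can still invoke a non-explicit binary constructor implicitly, so marking it explicit would forbid that spelling too; but the classic single-argument conversion is the case that caused the bug here.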

@codecov

codecov bot commented Oct 18, 2023

Codecov Report

Merging #2346 (73184b6) into develop (6e86734) will not change coverage.
Report is 1 commit behind head on develop.
The diff coverage is 100.00%.

❗ Current head 73184b6 differs from the pull request's most recent head c44c9f9. Consider uploading reports for the commit c44c9f9 to get more accurate results.

@@           Coverage Diff            @@
##           develop    #2346   +/-   ##
========================================
  Coverage    91.36%   91.36%           
========================================
  Files          439      439           
  Lines        16493    16493           
========================================
  Hits         15069    15069           
  Misses        1424     1424           
Files | Coverage Δ
src/include/migraphx/argument.hpp | 78.57% <ø> (ø)
src/include/migraphx/op/allocate.hpp | 100.00% <100.00%> (ø)
src/include/migraphx/op/pooling.hpp | 94.92% <100.00%> (ø)

@umangyadav
Member

What was the perf issue?

Curious about this: why did the implicit conversion cause a perf issue?

@bpickrel
Contributor

looks fine. What was the perf issue?

Of the test cases, most ran incrementally faster with this tuning, but one or two regressed to taking 10-20x as long as before.

@@ -46,7 +46,7 @@ struct MIGRAPHX_EXPORT argument : raw_data<argument>
 {
     argument() = default;

-    argument(const shape& s);
+    explicit argument(const shape& s);
Contributor

Suggested change
-explicit argument(const shape& s);
+// explicit to protect against passing argument to a function that only needs a shape
+explicit argument(const shape& s);

@migraphx-bot
Collaborator

migraphx-bot commented Oct 18, 2023

Test | Batch | Rate new (9a1af4) | Rate old (e8e142) | Diff
torchvision-resnet50 | 64 | 2,849.10 | 2,850.87 | -0.06%
torchvision-resnet50_fp16 | 64 | 6,483.03 | 6,483.76 | -0.01%
torchvision-densenet121 | 32 | 2,098.94 | 2,107.54 | -0.41%
torchvision-densenet121_fp16 | 32 | 3,677.91 | 3,673.03 | 0.13%
torchvision-inceptionv3 | 32 | 1,594.86 | 1,594.82 | 0.00%
torchvision-inceptionv3_fp16 | 32 | 2,593.80 | 2,592.86 | 0.04%
cadene-inceptionv4 | 16 | 707.43 | 707.31 | 0.02%
cadene-resnext64x4 | 16 | 697.25 | 697.36 | -0.02%
slim-mobilenet | 64 | 8,338.94 | 8,344.55 | -0.07%
slim-nasnetalarge | 64 | 226.96 | 226.95 | 0.01%
slim-resnet50v2 | 64 | 2,671.59 | 2,676.49 | -0.18%
bert-mrpc-onnx | 8 | 825.47 | 824.50 | 0.12%
bert-mrpc-tf | 1 | 387.49 | 388.61 | -0.29%
pytorch-examples-wlang-gru | 1 | 291.90 | 299.61 | -2.57%
pytorch-examples-wlang-lstm | 1 | 309.52 | 316.43 | -2.18%
torchvision-resnet50_1 | 1 | 600.90 | 597.52 | 0.57%
torchvision-inceptionv3_1 | 1 | 334.72 | 340.06 | -1.57%
cadene-dpn92_1 | 1 | 395.34 | 399.86 | -1.13%
cadene-resnext101_1 | 1 | 329.68 | 329.54 | 0.04%
slim-vgg16_1 | 1 | 464.48 | 463.53 | 0.20%
slim-mobilenet_1 | 1 | 2,029.09 | 2,046.07 | -0.83%
slim-inceptionv4_1 | 1 | 216.80 | 214.62 | 1.02%
onnx-taau-downsample | 1 | 305.89 | 307.04 | -0.37%
dlrm-criteoterabyte | 1 | 21.68 | 21.71 | -0.15%
dlrm-criteoterabyte_fp16 | 1 | 40.72 | 40.69 | 0.08%
agentmodel | 1 | 5,782.89 | 5,783.52 | -0.01%
unet_fp16 | 2 | 55.93 | 55.96 | -0.05%
resnet50v1_fp16 | 1 | 924.47 | 948.45 | -2.53%
bert_base_cased_fp16 | 64 | 970.00 | 970.32 | -0.03%
bert_large_uncased_fp16 | 32 | 304.84 | 304.83 | 0.00%
bert_large_fp16 | 1 | 167.25 | 166.60 | 0.39%
distilgpt2_fp16 | 16 | 1,278.50 | 1,279.20 | -0.06%

This build is OK for merge ✅

@migraphx-bot
Collaborator


✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance

✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance

✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance

✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance

✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance

🔴 torchvision-inceptionv3_1: FAILED: MIGraphX is not within tolerance - check verbose output

✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance

✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance

✅ slim-vgg16_1: PASSED: MIGraphX meets tolerance

✅ slim-mobilenet_1: PASSED: MIGraphX meets tolerance

🔴 slim-inceptionv4_1: FAILED: MIGraphX is not within tolerance - check verbose output

✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance

✅ agentmodel: PASSED: MIGraphX meets tolerance

✅ unet: PASSED: MIGraphX meets tolerance

✅ resnet50v1: PASSED: MIGraphX meets tolerance

🔴 bert_base_cased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

✅ bert_large: PASSED: MIGraphX meets tolerance

🔴 distilgpt2_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
@bpickrel
Contributor

Paul said that when function arguments were implicitly converted from the shape type to the "argument" type, it caused unnecessary memory allocations. I don't know the details of how much memory, how many times it happened per inference, or why it only happened with a few models.

@causten causten merged commit 07848b2 into develop Oct 19, 2023
14 of 15 checks passed
@causten causten deleted the explicit-shape-constructor branch October 19, 2023 14:26