
QlinearGlobalAveragePool operator #2297

Merged: 8 commits merged from lw/q_linear_gl_avg_pool into develop on Oct 14, 2023
Conversation

@lakhinderwalia lakhinderwalia linked an issue Oct 5, 2023 that may be closed by this pull request
@lakhinderwalia lakhinderwalia self-assigned this Oct 5, 2023
codecov bot commented Oct 5, 2023

Codecov Report

Merging #2297 (bbcc73b) into develop (271eedd) will decrease coverage by 0.03%.
The diff coverage is 78.57%.

❗ Current head bbcc73b differs from pull request most recent head 5c3607b. Consider uploading reports for the commit 5c3607b to get more accurate results.

@@             Coverage Diff             @@
##           develop    #2297      +/-   ##
===========================================
- Coverage    91.33%   91.31%   -0.03%     
===========================================
  Files          434      435       +1     
  Lines        16262    16290      +28     
===========================================
+ Hits         14853    14875      +22     
- Misses        1409     1415       +6     
Files                                  Coverage Δ
src/onnx/parse_pooling.cpp             97.75% <ø> (ø)
src/onnx/parse_qlinearglavgpool.cpp    78.57% <78.57%> (ø)

migraphx-bot (Collaborator) commented Oct 6, 2023

Test                           Batch   Rate new (7e06b6)   Rate old (1a1c1b)   Diff
torchvision-resnet50 64 2,326.20 2,325.39 0.03%
torchvision-resnet50_fp16 64 5,352.56 5,347.82 0.09%
torchvision-densenet121 32 1,849.40 1,849.21 0.01%
torchvision-densenet121_fp16 32 3,412.17 3,406.11 0.18%
torchvision-inceptionv3 32 1,297.26 1,293.29 0.31%
torchvision-inceptionv3_fp16 32 2,534.25 2,537.00 -0.11%
cadene-inceptionv4 16 620.49 620.90 -0.06%
cadene-resnext64x4 16 589.20 588.26 0.16%
slim-mobilenet 64 7,214.58 7,213.68 0.01%
slim-nasnetalarge 64 236.53 236.58 -0.02%
slim-resnet50v2 64 2,557.86 2,557.33 0.02%
bert-mrpc-onnx 8 825.05 824.42 0.08%
bert-mrpc-tf 1 388.12 388.84 -0.19%
pytorch-examples-wlang-gru 1 294.51 298.85 -1.45%
pytorch-examples-wlang-lstm 1 306.40 314.28 -2.51%
torchvision-resnet50_1 1 552.51 549.39 0.57%
torchvision-inceptionv3_1 1 305.00 302.45 0.84%
cadene-dpn92_1 1 354.83 353.66 0.33%
cadene-resnext101_1 1 219.93 218.95 0.45%
slim-vgg16_1 1 224.14 224.19 -0.02%
slim-mobilenet_1 1 1,526.46 1,510.80 1.04%
slim-inceptionv4_1 1 218.14 216.72 0.65%
onnx-taau-downsample 1 307.03 306.03 0.33%
dlrm-criteoterabyte 1 21.72 21.69 0.17%
dlrm-criteoterabyte_fp16 1 40.75 40.72 0.06%
agentmodel 1 5,849.32 5,795.96 0.92%
unet_fp16 2 55.17 55.22 -0.08%
resnet50v1_fp16 1 748.99 761.81 -1.68%
bert_base_cased_fp16 64 971.44 971.55 -0.01%
bert_large_uncased_fp16 32 305.27 305.24 0.01%
bert_large_fp16 1 166.74 166.66 0.05%
distilgpt2_fp16 16 1,278.86 1,279.47 -0.05%

This build is OK for merge ✅

migraphx-bot (Collaborator) commented


✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance
✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance
✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance
✅ torchvision-inceptionv3_1: PASSED: MIGraphX meets tolerance
✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance
✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance
✅ slim-vgg16_1: PASSED: MIGraphX meets tolerance
✅ slim-mobilenet_1: PASSED: MIGraphX meets tolerance
✅ slim-inceptionv4_1: PASSED: MIGraphX meets tolerance
✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance
✅ agentmodel: PASSED: MIGraphX meets tolerance
✅ unet: PASSED: MIGraphX meets tolerance
✅ resnet50v1: PASSED: MIGraphX meets tolerance
🔴 bert_base_cased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
✅ bert_large: PASSED: MIGraphX meets tolerance
🔴 distilgpt2_fp16: FAILED: MIGraphX is not within tolerance - check verbose output


A review thread is attached to this line of src/onnx/parse_qlinearglavgpool.cpp:

struct parse_qlinearglobalaveragepool : op_parser<parse_qlinearglobalaveragepool>
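For background on what this parser handles: QLinearGlobalAveragePool (an ONNX Runtime contrib operator) averages a quantized input over its spatial dimensions in dequantized space and requantizes the result. Below is a minimal, hypothetical reference sketch of those semantics for an NCHW uint8 tensor; every name here is illustrative, and this is not the MIGraphX code under review.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical reference sketch (not MIGraphX code): QLinearGlobalAveragePool
// for an NCHW uint8 tensor. Dequantize, average over H*W, requantize.
std::vector<std::uint8_t>
qlinear_global_avg_pool(const std::vector<std::uint8_t>& x,
                        std::size_t n, std::size_t c, std::size_t h, std::size_t w,
                        float x_scale, std::uint8_t x_zero,
                        float y_scale, std::uint8_t y_zero)
{
    std::vector<std::uint8_t> y(n * c); // output shape is N x C x 1 x 1
    const std::size_t spatial = h * w;
    for(std::size_t i = 0; i < n * c; ++i)
    {
        // Average in dequantized (float) space.
        float sum = 0.0f;
        for(std::size_t s = 0; s < spatial; ++s)
            sum += (float(x[i * spatial + s]) - float(x_zero)) * x_scale;
        const float avg = sum / float(spatial);
        // Requantize with the output scale/zero point and clamp to the uint8 range.
        const float q = std::nearbyint(avg / y_scale) + float(y_zero);
        y[i] = static_cast<std::uint8_t>(std::clamp(q, 0.0f, 255.0f));
    }
    return y;
}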
A Collaborator commented:
Maybe we should just name this parser parse_qlinearpool so it can be used in the future for other types of pooling.

lakhinderwalia (Contributor, Author) replied:

Paul, agreed on the future steps, but for expediency I suggest we do that for the next quantized pooling operator. Otherwise it would mean adding code for operators() etc., I believe, and I would have to verify the code again, with no additional value for now. Thanks.
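To make the trade-off being discussed concrete, here is a rough, hypothetical sketch of the suggested generalization, assuming MIGraphX's op_parser convention in which operators() lists the ONNX op names a parser handles; the QLinearAveragePool entry and the exact signatures are assumptions, not code from this PR.

// Hypothetical sketch of the generalization under discussion: one parser
// registering several quantized pooling variants. op_desc/operators() follow
// the MIGraphX op_parser convention, but treat the exact signatures as assumptions.
struct parse_qlinearpool : op_parser<parse_qlinearpool>
{
    std::vector<op_desc> operators() const
    {
        return {{"QLinearGlobalAveragePool"}, {"QLinearAveragePool"}};
    }
    // parse(...) would then branch on the matched op name to select the pooling mode.
};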

@causten causten merged commit 9263d7a into develop Oct 14, 2023
@causten causten deleted the lw/q_linear_gl_avg_pool branch October 14, 2023 14:45
Successfully merging this pull request may close these issues:

Support the ORT QLinearAdd and QLinearConv operators