
Squeeze dyn dim allow compatible #3035

Merged: 4 commits merged into develop on May 10, 2024

Conversation

@CharlieL7 (Collaborator) commented on May 3, 2024

Allows squeeze to work on an axis whose dynamic_dimension intersects {1, 1}. This is needed so squeeze can handle fully unknown dimensions such as {0, max_int}.
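For illustration, here is a minimal standalone sketch of the compatibility check described above. The type and function names are hypothetical stand-ins, not the actual MIGraphX classes or the code changed in this PR; the idea is simply that a squeeze axis is acceptable whenever its dynamic range can still contain 1, which covers the fully unknown case {0, max_int}.

```cpp
// Illustrative sketch only: hypothetical stand-ins, not the MIGraphX API.
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <limits>

// Hypothetical stand-in for a dynamic dimension expressed as a {min, max} range.
struct dynamic_dimension
{
    std::size_t min;
    std::size_t max;

    // True when this range overlaps [other.min, other.max].
    bool intersects(const dynamic_dimension& other) const
    {
        return std::max(min, other.min) <= std::min(max, other.max);
    }
};

// A squeeze axis is allowed when its range is compatible with {1, 1},
// i.e. the dimension could still evaluate to 1 at runtime.
bool squeeze_axis_allowed(const dynamic_dimension& dd)
{
    return dd.intersects({1, 1});
}

int main()
{
    // Fully unknown dimension {0, max_int}: intersects {1, 1}, so squeeze is allowed.
    dynamic_dimension unknown{0, std::numeric_limits<std::size_t>::max()};
    assert(squeeze_axis_allowed(unknown));

    // A dimension known to be at least 2 can never be 1, so squeeze would be rejected.
    dynamic_dimension at_least_two{2, 8};
    assert(!squeeze_axis_allowed(at_least_two));
    return 0;
}
```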

@CharlieL7 self-assigned this on May 3, 2024

@migraphx-bot (Collaborator) commented on May 3, 2024

| Test | Batch | Rate new (f327b0) | Rate old (bc6c79) | Diff | Compare |
| --- | ---: | ---: | ---: | ---: | :---: |
| torchvision-resnet50 | 64 | 2,958.51 | 2,950.97 | 0.26% | |
| torchvision-resnet50_fp16 | 64 | 6,556.68 | 6,566.23 | -0.15% | |
| torchvision-densenet121 | 32 | 2,422.54 | 2,421.23 | 0.05% | |
| torchvision-densenet121_fp16 | 32 | 3,959.92 | 3,992.03 | -0.80% | |
| torchvision-inceptionv3 | 32 | 1,658.40 | 1,659.26 | -0.05% | |
| torchvision-inceptionv3_fp16 | 32 | 2,597.23 | 2,598.65 | -0.05% | |
| cadene-inceptionv4 | 16 | 776.62 | 776.84 | -0.03% | |
| cadene-resnext64x4 | 16 | 740.46 | 740.86 | -0.05% | |
| slim-mobilenet | 64 | 6,916.59 | 6,918.90 | -0.03% | |
| slim-nasnetalarge | 64 | 177.12 | 177.18 | -0.03% | |
| slim-resnet50v2 | 64 | 2,875.51 | 2,876.68 | -0.04% | |
| bert-mrpc-onnx | 8 | 1,063.27 | 1,064.43 | -0.11% | |
| bert-mrpc-tf | 1 | 467.63 | 511.61 | -8.60% | 🔴 |
| pytorch-examples-wlang-gru | 1 | 428.73 | 371.38 | 15.44% | 🔆 |
| pytorch-examples-wlang-lstm | 1 | 549.72 | 409.57 | 34.22% | 🔆 |
| torchvision-resnet50_1 | 1 | 776.83 | 790.15 | -1.69% | |
| cadene-dpn92_1 | 1 | 434.77 | 394.00 | 10.35% | 🔆 |
| cadene-resnext101_1 | 1 | 366.66 | 362.65 | 1.10% | |
| onnx-taau-downsample | 1 | 348.78 | 349.17 | -0.11% | |
| dlrm-criteoterabyte | 1 | 33.43 | 33.45 | -0.07% | |
| dlrm-criteoterabyte_fp16 | 1 | 56.65 | 56.69 | -0.08% | |
| agentmodel | 1 | 7,538.98 | 7,460.35 | 1.05% | |
| unet_fp16 | 2 | 57.44 | 57.31 | 0.22% | |
| resnet50v1_fp16 | 1 | 968.60 | 869.63 | 11.38% | 🔆 |
| resnet50v1_int8 | 1 | 833.17 | 823.61 | 1.16% | |
| bert_base_cased_fp16 | 64 | 1,012.09 | 1,012.93 | -0.08% | |
| bert_large_uncased_fp16 | 32 | 316.58 | 316.62 | -0.01% | |
| bert_large_fp16 | 1 | nan | nan | nan% | |
| distilgpt2_fp16 | 16 | 1,993.11 | 1,994.48 | -0.07% | |
| yolov5s | 1 | 506.34 | 501.26 | 1.01% | |
| tinyllama | 1 | 44.97 | 44.99 | -0.05% | |
| vicuna-fastchat | 1 | 175.15 | 178.18 | -1.70% | |
| whisper-tiny-encoder | 1 | 404.87 | 404.60 | 0.07% | |
| whisper-tiny-decoder | 1 | 427.98 | 424.85 | 0.74% | |

This build is not recommended to merge 🔴

@migraphx-bot (Collaborator) commented on May 3, 2024


❌ bert-mrpc-onnx: ERROR - check error output
Traceback (most recent call last):
  File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 340, in <module>
    main()
  File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 205, in main
    model = migraphx.parse_onnx(model_name, default_dim_value=batch)
RuntimeError: /src/AMDMIGraphX/src/onnx/onnx_parser.cpp:264: parse_from: PARSE_FROM: Failed reading onnx file: /new-saved-models/huggingface-transformers/bert_mrpc1.onnx


✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance

✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance

✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance

✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance

✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance

❌ cadene-resnext101_1: ERROR - check error output
2024-05-09 14:39:25.652978106 [W:onnxruntime:, model.cc:183 Model] ONNX Runtime only guarantees support for models stamped with opset version 7 or above for opset domain 'ai.onnx'. Please upgrade your model to opset 7 or higher. For now, this opset 6 model may run depending upon legacy support of some older opset version operators.
2024-05-09 14:39:25.658940439 [W:onnxruntime:, transpose_optimizer.cc:28 ApplyImpl] Transpose optimizer failed: Unsupported ONNX opset: 6
Traceback (most recent call last):
  File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 340, in <module>
    main()
  File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 267, in main
    sess = ort.InferenceSession(model_name,
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 463, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for BatchNormalization(6) node with name ''


✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance

✅ agentmodel: PASSED: MIGraphX meets tolerance

❌ unet: ERROR - check error output
Traceback (most recent call last):
  File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 340, in <module>
    main()
  File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 207, in main
    model = migraphx.parse_onnx(model_name,
RuntimeError: /src/AMDMIGraphX/src/onnx/onnx_parser.cpp:264: parse_from: PARSE_FROM: Failed reading onnx file: /new-saved-models/unet/model.onnx


✅ resnet50v1: PASSED: MIGraphX meets tolerance

🔴 bert_base_cased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output


❌ bert_large: ERROR - check error output
Traceback (most recent call last):
  File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 340, in <module>
    main()
  File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 205, in main
    model = migraphx.parse_onnx(model_name, default_dim_value=batch)
RuntimeError: /src/AMDMIGraphX/src/onnx/onnx_parser.cpp:264: parse_from: PARSE_FROM: Failed reading onnx file: /new-saved-models/bert/model.onnx


✅ yolov5s: PASSED: MIGraphX meets tolerance

✅ tinyllama: PASSED: MIGraphX meets tolerance

✅ vicuna-fastchat: PASSED: MIGraphX meets tolerance

✅ whisper-tiny-encoder: PASSED: MIGraphX meets tolerance

✅ whisper-tiny-decoder: PASSED: MIGraphX meets tolerance

✅ distilgpt2_fp16: PASSED: MIGraphX meets tolerance

codecov bot commented on May 9, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.78%. Comparing base (bc6c794) to head (f327b0a).

Additional details and impacted files
@@           Coverage Diff            @@
##           develop    #3035   +/-   ##
========================================
  Coverage    91.78%   91.78%           
========================================
  Files          485      485           
  Lines        18863    18865    +2     
========================================
+ Hits         17314    17316    +2     
  Misses        1549     1549           

☔ View full report in Codecov by Sentry.

@CharlieL7 marked this pull request as ready for review on May 9, 2024 17:02
@CharlieL7 requested a review from causten as a code owner on May 9, 2024 17:02
@CharlieL7 requested reviews from bpickrel and umangyadav on May 9, 2024 17:02
@CharlieL7 added the simple label (small or simple changes) on May 9, 2024
@causten merged commit 48b49ac into develop on May 10, 2024
39 of 44 checks passed
@causten deleted the squeeze_dyn_dim_allow_compatible branch on May 10, 2024 13:20
Labels: simple (small or simple changes)
4 participants