Fuse contiguous and layout with pointwise #2889
Merged
Conversation
umangyadav requested review from shivadbhavsar, pfultz2 and kahmed10 and removed the request for causten and shivadbhavsar on March 15, 2024 14:24
umangyadav force-pushed the fuse_contiguous_pointwise branch from a739662 to 550bf0a on March 15, 2024 15:39
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff            @@
##           develop    #2889   +/-   ##
========================================
  Coverage    91.84%   91.84%
========================================
  Files          478      478
  Lines        18179    18179
========================================
  Hits         16696    16696
  Misses        1483     1483
========================================

☔ View full report in Codecov by Sentry.
This build is not recommended to merge 🔴

❌ bert-mrpc-onnx: ERROR - check error output
Traceback (most recent call last): File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 340, in main() File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 205, in main model = migraphx.parse_onnx(model_name, default_dim_value=batch) RuntimeError: /src/AMDMIGraphX/src/onnx/onnx_parser.cpp:264: parse_from: PARSE_FROM: Failed reading onnx file: /new-saved-models/huggingface-transformers/bert_mrpc1.onnx

❌ cadene-resnext101_1: ERROR - check error output
2024-03-18 15:31:05.762634355 [W:onnxruntime:, model.cc:183 Model] ONNX Runtime only guarantees support for models stamped with opset version 7 or above for opset domain 'ai.onnx'. Please upgrade your model to opset 7 or higher. For now, this opset 6 model may run depending upon legacy support of some older opset version operators.
2024-03-18 15:31:05.769372695 [W:onnxruntime:, transpose_optimizer.cc:28 ApplyImpl] Transpose optimizer failed: Unsupported ONNX opset: 6
Traceback (most recent call last): File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 340, in main() File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 267, in main sess = ort.InferenceSession(model_name, File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in init self._create_inference_session(providers, provider_options, disabled_optimizers) File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 463, in _create_inference_session sess.initialize_session(providers, provider_options, disabled_optimizers) onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for BatchNormalization(6) node with name ''

❌ unet: ERROR - check error output
Traceback (most recent call last): File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 340, in main() File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 207, in main model = migraphx.parse_onnx(model_name, RuntimeError: /src/AMDMIGraphX/src/onnx/onnx_parser.cpp:264: parse_from: PARSE_FROM: Failed reading onnx file: /new-saved-models/unet/model.onnx

🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

❌ bert_large: ERROR - check error output
Traceback (most recent call last): File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 340, in main() File "/src/AMDMIGraphX/tools/accuracy/accuracy_checker.py", line 205, in main model = migraphx.parse_onnx(model_name, default_dim_value=batch) RuntimeError: /src/AMDMIGraphX/src/onnx/onnx_parser.cpp:264: parse_from: PARSE_FROM: Failed reading onnx file: /new-saved-models/bert/model.onnx
pfultz2 approved these changes Mar 18, 2024
causten approved these changes Mar 18, 2024
rocBLAS cannot handle batch dimensions that are not collapsible, so it requires contiguous inputs. The eliminate_contiguous pass will not remove such contiguous operations. If such an input comes from a pointwise op, the contiguous can be fused with the pointwise.