Add FLUX e2e example #3619
base: develop
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##           develop    #3619   +/-   ##
========================================
  Coverage    92.18%   92.18%
========================================
  Files          514      514
  Lines        21780    21780
========================================
  Hits         20078    20078
  Misses        1702     1702

☔ View full report in Codecov by Sentry.
Can we add a reference image to this PR, using the same prompt as in the README?
@@ -0,0 +1,27 @@
## Setup

Make sure the Python interpreter can find migraphx. Default location:
Can we add some instructions on setting up a python virtual environment?
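A minimal sketch of what such a check could look like, assuming the Python bindings end up under an install prefix like `/opt/rocm/lib` (that path is an assumption; adjust it to the actual build or install location). A virtual environment can be created first with `python3 -m venv venv && source venv/bin/activate`.

```python
# Sketch only: verify the current interpreter can import the migraphx Python
# bindings. The fallback path below is an assumption about the default install
# prefix; adjust it to wherever MIGraphX was actually built or installed.
import sys

try:
    import migraphx
except ImportError:
    sys.path.insert(0, "/opt/rocm/lib")  # assumed default location
    import migraphx

print("migraphx loaded from:", migraphx.__file__)
```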
Check results before merge 🔆

🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
Although this prevents simplifying as much, it does help preserve the permutation of the broadcasted axes. For example, if a tensor of `{2, 16, 10240}` goes into a reduction along the last axis, it will output `{2, 16, 1}`, which may be broadcast back to `{2, 16, 10240}`; but there could be more shape transformations after the reduce and before a pointwise operator:

```
@1 = multibroadcast[out_lens={2, 16, 10240},out_dyn_dims={}](@0) -> int64_type, {2, 16, 10240}, {16, 1, 0}
@2 = reshape[dims={2, 160, 32, 32}](@1) -> int64_type, {2, 160, 32, 32}, {163840, 1024, 32, 1}
@3 = transpose[permutation={0, 2, 3, 1}](@2) -> int64_type, {2, 32, 32, 160}, {163840, 32, 1, 1024}
```

On develop this would be simplified to:

```
@1 = unsqueeze[axes={1, 2, 5},steps={}](@0) -> int64_type, {2, 1, 1, 16, 1, 1}, {16, 16, 16, 1, 1, 1}
@2 = multibroadcast[out_lens={2, 1, 1, 16, 1, 10},out_dyn_dims={}](@1) -> int64_type, {2, 1, 1, 16, 1, 10}, {16, 16, 16, 1, 1, 0}
@3 = reshape[dims={2, 1, 1, 160}](@2) -> int64_type, {2, 1, 1, 160}, {160, 160, 160, 1}
@4 = multibroadcast[out_lens={2, 32, 32, 160},out_dyn_dims={}](@3) -> int64_type, {2, 32, 32, 160}, {160, 0, 0, 1}
```

Ideally we would want to apply these transformations without the broadcast before the reduction, but it is simplified like the above because the shape_transform_descriptor doesn't track the permutation of the broadcasted axes. With this PR, it will simplify to:

```
@1 = unsqueeze[axes={3, 4},steps={}](@0) -> int64_type, {2, 16, 1, 1, 1}, {16, 1, 1, 1, 1}
@2 = transpose[permutation={0, 3, 4, 1, 2}](@1) -> int64_type, {2, 1, 1, 16, 1}, {16, 1, 1, 1, 1}
@3 = multibroadcast[out_lens={2, 1, 1, 16, 10},out_dyn_dims={}](@2) -> int64_type, {2, 1, 1, 16, 10}, {16, 1, 1, 1, 0}
@4 = reshape[dims={2, 1, 1, 160}](@3) -> int64_type, {2, 1, 1, 160}, {160, 160, 160, 1}
@5 = multibroadcast[out_lens={2, 32, 32, 160},out_dyn_dims={}](@4) -> int64_type, {2, 32, 32, 160}, {160, 0, 0, 1}
```

This has a transpose because the shape_transform_descriptor understands that the output will be in NHWC, which means we can make the input to the reduction NHWC layout as well. This PR doesn't enable such rewriting; it only modifies the shape_transform_descriptor to track such layouts.

There are also some updates to the tests:

- Validate that a simplified transformation produces the same result
- Check that the simplification cannot be simplified further
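For anyone following along, here is a small NumPy sketch (hypothetical, not the MIGraphX API) that checks the original transformation chain and the new simplified form produce the same values for the `{2, 16, 10240}` example above:

```python
import numpy as np

# Original chain: reduce over the last axis, broadcast back to full size,
# reshape to NCHW, then transpose to NHWC.
x = np.arange(2 * 16 * 10240, dtype=np.int64).reshape(2, 16, 10240)
r = x.sum(axis=-1, keepdims=True)             # {2, 16, 1}
b = np.broadcast_to(r, (2, 16, 10240))        # multibroadcast to {2, 16, 10240}
nchw = b.reshape(2, 160, 32, 32)              # reshape to {2, 160, 32, 32}
nhwc = nchw.transpose(0, 2, 3, 1)             # transpose to {2, 32, 32, 160}

# Simplified chain from this PR, applied directly to the reduce output:
# unsqueeze, transpose, broadcast, reshape, broadcast.
u = r.reshape(2, 16, 1, 1, 1)                 # unsqueeze axes {3, 4}
t = u.transpose(0, 3, 4, 1, 2)                # {2, 1, 1, 16, 1}
b2 = np.broadcast_to(t, (2, 1, 1, 16, 10))    # {2, 1, 1, 16, 10}
s = b2.reshape(2, 1, 1, 160)                  # {2, 1, 1, 160}
nhwc2 = np.broadcast_to(s, (2, 32, 32, 160))  # {2, 32, 32, 160}

assert np.array_equal(nhwc, nhwc2)
```

The passing assert illustrates the point about tests: the simplified sequence never materializes the full `{2, 16, 10240}` broadcast, yet it yields the same NHWC result as the original chain.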