don't canonicalize to SqueezeV11, UnsqueezeV11 #2515
The patch looks good. I see SqueezeV11 is still used in other places such as in the lowering of RNN, in onnx-to-zhigh, and onnx-to-krnl. I wonder if you have time to remove it totally (e.g. in onnx-to-zhigh) in this patch also.
auto resultType = UnrankedTensorType::get(data.getType().cast<ShapedType>().getElementType());
build($_builder, $_state, resultType, data, axes);
}]>,
OpBuilder<(ins "ValueRange":$operands, "ArrayRef<NamedAttribute>":$attributes), [{
This builder looks redundant: the latest Unsqueeze does not have attributes: https://onnx.ai/onnx/operators/onnx__Unsqueeze.html
gen_onnx_mlir.py generated this automatically when I added Unsqueeze to custom_builder_unranked_ops_list (which was needed in order to create Unsqueeze in tablegen patterns, where Unsqueeze is created without a result type).
We could change gen_onnx_mlir.py to not create this extra builder (which seems unnecessary for all the ops in custom_builder_unranked_ops_list), but that's somewhat unrelated to this PR, so let's do that cleanup in a separate PR if we want it.
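As a hedged illustration of why the unranked-result builder is needed (the source op `ONNXSomeOp` and pattern name below are hypothetical, not from this PR), a declarative rewrite can only create Unsqueeze without spelling out a result type if such a builder exists:

```tablegen
// Hypothetical DRR pattern sketch: the custom builder lets the rewrite
// create ONNXUnsqueezeOp with no explicit result type; the builder
// defaults the result to an unranked tensor of the input's element type,
// and shape inference refines it later.
def SomeRewritePattern : Pat<
  (ONNXSomeOp $data, $axes),
  (ONNXUnsqueezeOp $data, $axes)>;
```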
OK, I see. It's an issue in gen_onnx_mlir.py. Thanks for the clarification!
Signed-off-by: Soren Lassen <[email protected]>
Force-pushed from ac835c1 to 1fa30d2
I don't have the time, sorry. I tried onnx-to-zhigh, but the lit test changes are too much; my implementation is here if you want to take it over: sorenlassen@bdd701f. onnx-to-krnl is too involved for me (`emitIntermediateIR` etc.).
LGTM.
Jenkins Linux amd64 Build #12737 [push] dont canonicalize to Squ... started at 08:59
Jenkins Linux s390x Build #12760 [push] dont canonicalize to Squ... started at 09:59
Jenkins Linux ppc64le Build #11753 [push] dont canonicalize to Squ... started at 10:08
Jenkins Linux amd64 Build #12737 [push] dont canonicalize to Squ... passed after 1 hr 32 min
Jenkins Linux s390x Build #12760 [push] dont canonicalize to Squ... passed after 1 hr 33 min
Jenkins Linux ppc64le Build #11753 [push] dont canonicalize to Squ... passed after 1 hr 46 min
With this change, no pattern introduces ops that are decomposed in DecomposeONNXToONNXPass, so canonicalization patterns don't undo the work of that pass.
This avoids unnecessary chains of patterns in the hybrid pass with decomposition (PR #2496). While chains of patterns should work, without them it is simpler to understand the interplay between the sets of patterns.
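As a hedged sketch of the interplay (the tensor types and shapes here are illustrative, not taken from this PR): decomposition rewrites the opset-11 form, where axes is an attribute, into the current form, where axes is an operand. A canonicalization that emitted the V11 form again would pull against that pass in the hybrid pipeline:

```mlir
// Before decomposition: axes as an attribute (opset-11 form).
%0 = "onnx.SqueezeV11"(%arg0) {axes = [1]}
    : (tensor<2x1x3xf32>) -> tensor<2x3xf32>

// After DecomposeONNXToONNXPass: axes as an operand (opset-13 form).
// If a canonicalization pattern then produced onnx.SqueezeV11 again,
// the hybrid pass would cycle between the two forms.
%axes = onnx.Constant dense<[1]> : tensor<1xi64>
%1 = "onnx.Squeeze"(%arg0, %axes)
    : (tensor<2x1x3xf32>, tensor<1xi64>) -> tensor<2x3xf32>
```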