[tracking] ONNX Op Support #215
Comments
The list of high-priority ops: https://gist.github.com/renxida/7510f2a0e5b1e7f0b62025f70854c553 (moved to a Gist by @renxida to avoid interfering with Ctrl+F)
Working on the MatMul changes right now.
Working on the Gather, LeakyRelu, and Pad ops.
Can you please create issues for these ops, so that no one else takes them?
@godot73 @frafranz @renxida @frederik-h @rsuderman Please create an issue for the op you're working on right now, and do the same for any further ops you take.
@saienduri working on these two right now:
@vivekkhandelwal1 According to the triage of IREE ONNX test failures, these ops are missing OnnxToTorch lowering.
The listed ops seem to be ordered by priority; I'm not sure how we would prioritize these missing ops. Could you add them to the list?
I have an ONNX model that uses the Unique operator, which is not yet supported. Can I take the issue and implement it?
@Peefy yes please!
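For anyone picking this up: the ONNX Unique op (with sorted=1) closely mirrors numpy's np.unique with all optional outputs enabled. A quick sketch of the expected four outputs (the example values here are illustrative, not from the thread):

```python
import numpy as np

# ONNX Unique produces four outputs: Y (the unique values), indices
# (first occurrence of each unique value in X), inverse_indices
# (mapping from X back into Y), and counts. With sorted=1 this
# matches np.unique's behavior.
x = np.array([2, 1, 1, 3, 4, 3])
y, indices, inverse, counts = np.unique(
    x, return_index=True, return_inverse=True, return_counts=True
)
print(y)        # unique values, sorted ascending
print(indices)  # index of the first occurrence of each value in x
print(inverse)  # x can be reconstructed as y[inverse]
print(counts)   # number of occurrences of each unique value
```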
Classical ML and Training ops (not planned to be supported):
I'll take group normalization.
I am working on the ReduceSumSquare, ReduceLogSum, and ReduceLogSumExp ops.
Working on GlobalMaxPool.
Working on Onnx.Multinomial (not sure how to set up tracking).
I noticed that some ONNX operators are functions, which means that we can probably systematically expand them before conversion, instead of having to write bespoke conversions for all of them. I made an issue about this: llvm/torch-mlir#3384. Assuming this does actually get implemented, it might be wise for people considering implementing new conversions to avoid operators that are functions, if it is desirable to avoid redundant effort. You can tell whether an operator is a function by going to https://onnx.ai/onnx/operators/ and checking whether it says "function: True".
Working on Onnx.Scatter
Working on #717
I'll take
Update on the operators-that-are-functions thing: as of llvm/torch-mlir@51902ec, support for this is in the importer. This means that if there's a new op to support and the ONNX documentation says "Function", it may already be handled by the importer's function expansion. See also: llvm/torch-mlir#3464.
Bernoulli might be implemented wrong: llvm/torch-mlir#3527
Hello, could someone clarify what the rationale is for picking a certain opset for a given op? For example, I see Softmax is ticked off as done here, but actually only opset version 13 is supported. Are the supported opset versions documented somewhere I might've missed?
It depends on the model requirements. When the op was added, probably only opset 13 was needed; when a later model needs opset 19, we prioritize supporting 19, and otherwise we leave it as-is and prioritize ops that are not implemented yet. If the work is not model-driven, then when a new op is implemented we try to support the state-of-the-art ONNX opset version. But there is a tradeoff: if supporting the new version takes too much time, we always pick the low-hanging fruit first.
Tracker #797 for the Onnx ops failing during Torch->Linalg lowering.
Tracking op support for OnnxToTorch lowering. The ops are ordered by priority (highest -> lowest) in each section.
IMPORTANT: Mark ops you are working on by hovering over them in the list, then clicking the bullseye symbol to the right. This creates an issue for the op and marks it on the list, avoiding duplicate effort.
Contact:
For people in turbine camp, feel free to pick an op from any of the alphabetical subgroups.
Instructions on adding an ONNX or Torch op:
https://github.com/llvm/torch-mlir/blob/main/docs/add_ops.md
Please add e2e operator-level test(s) to the e2eshark test suite for your newly added ops, using the instructions at:
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/README.md
If you have questions, please check and ask in the LLVM/torch-mlir Discord channel.
For TorchToLinAlg support tracking, see #347
[Tracker] Onnx FE Support #564
@kumardeepakamd's guidance for prioritizing the ONNX work (higher-to-lower priority order):
Unsupported Ops (not planned to be supported) - count: 5
Needs revisiting
Completed Ops (count: 188):
onnx.tan to torch dialect #235
onnx.transpose to torch #238
onnx.selu to torch dialect #236