[tracking] Model and Op Support #119
Comments
@stellaraccident in meeting chat: If you are actively working on one, please click the "target" hover to the right of any task to create an issue. Then assign yourself. Note the issue in PRs. Discuss prioritization on the tracking issue and details of the op on the op issue.
The list of ops in the issue description is the same as the one @saienduri gathered (in increasing order of appearance, i.e. later ops more important) from parity-bench (I think using https://github.com/nod-ai/SHARK-Turbine/blob/main/tests/generated/running_tests.md) and shared with me and Avinash yesterday. Someone on the call today also mentioned pixel_shuffle as an important op to support.
@stellaraccident I'm not sure I've done what you had in mind - I created a new issue by clicking "New issue" (top right of this page, for me at least). I don't seem to have permission to assign myself to it though |
You should now have an invite to the organization and I think I added you to a team such that you have write access to the repo. |
@stellaraccident I plan to implement the torch.aten.replication_pad2d op, and I created the following issue for that. But I cannot link the op to the created issue. Maybe I missed something.
Opened an issue to track the op torch.aten.diag_embed. @stellaraccident Can you help link it from the list above?
@stellaraccident, I had taken up torch.aten.acos (#293). But @schnkmwt pointed out that https://github.com/frederik-h has already taken that up as llvm/torch-mlir#2604, so maybe link the op to that issue. I can take up a different one, reflection_pad1d; please link #293 to that.
@kumardeepakamd I think this op is already being implemented: llvm/torch-mlir#2604 |
For ALL new contributors, let's use this to track your newly implemented ops: [tracking] TorchToLinalg and ONNX Op Support #215
Close this issue temporarily so we can focus on #215.
Tracking model burndown for the Gen-AI models and variants that we aim to serve via Turbine.
- Dynamic shaped llama2
- SHARK Model Porting

Priority op requests:
- 1D Convolution op #210 (see the sketch after this list)
- Strange behavior while lowering nn.BatchNorm2d #110 @AmosLewis
- OPS to linalg in llama_test when Bump iree to 20231130.724 #212
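For context, here is a minimal PyTorch sketch (not part of the original issue; the PriorityOps module, channel counts, and tensor shapes are illustrative) of a toy model that exercises the 1-D convolution and nn.BatchNorm2d patterns from the priority list above, which could then be run through the usual torch-mlir lowering to check op coverage:

```python
import torch
import torch.nn as nn

# Illustrative toy module (hypothetical, not from the issue): a 1-D convolution
# followed by a 2-D batch norm, covering the two priority op requests above.
class PriorityOps(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1d = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3)
        self.bn2d = nn.BatchNorm2d(num_features=8)

    def forward(self, x_1d, x_2d):
        return self.conv1d(x_1d), self.bn2d(x_2d)

model = PriorityOps().eval()  # eval mode so batch norm uses its running stats
y_1d, y_2d = model(torch.randn(1, 4, 16), torch.randn(1, 8, 5, 5))
print(y_1d.shape, y_2d.shape)  # torch.Size([1, 8, 14]) torch.Size([1, 8, 5, 5])
```

Lowering a module like this through torch-mlir (or Turbine's export path) is one way to confirm whether the corresponding aten ops actually reach linalg.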
General torch-mlir op support
ONNX op support:
- [tracking] ONNX Op Support #215 (see the export sketch below)
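As a rough illustration of how missing ONNX lowerings tend to surface (again hypothetical: the module, file name, and opset version are arbitrary choices, not from the issue), a small model can be exported with torch.onnx.export and then fed to the ONNX import path tracked in #215 to see which ops still fail to lower:

```python
import torch
import torch.nn as nn

# Hypothetical export example: nn.ReplicationPad2d becomes an ONNX Pad node,
# the kind of op the ONNX importer work tracked in #215 needs to handle.
model = nn.Sequential(nn.ReplicationPad2d(1), nn.ReLU()).eval()
example_input = torch.randn(1, 3, 8, 8)
torch.onnx.export(model, (example_input,), "replication_pad.onnx", opset_version=17)
```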