[Tracker] Onnx FE Support #564
Owners, kindly add clear steps to reproduce failures and make it possible for contributors to claim a unique issue and work on a fix, so more folks can join in for this quality push. Great start! Let's do it.
The IREE-EP related efforts are being tracked here: https://github.com/nod-ai/npu-benchmark/issues/2
This issue is for the purpose of tracking all the ONNX Frontend Requirements.
Instructions for finding the models/setup:
Important Links
ONNX model
Passing Summary
CPU
TOTAL TESTS = 2338
GPU
TOTAL TESTS = 2338
Fail Summary
CPU and GPU
TOTAL TESTS = 2338
Latest Status (Inference Pass / Compile Pass / Total)
The ONNX Lowering Lit Tests
Find the op name in the tracker, extract the lit test corresponding to that op into a separate file, and run:
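A minimal sketch of what that run looks like, assuming a local torch-mlir build under `build/` and the extracted lit test saved as `onnx_op.mlir` (both paths are illustrative placeholders):

```shell
# Run the ONNX-to-Torch lowering pass on the extracted lit test
# and inspect the output IR for the op in question.
# build/bin path and file name are assumptions for illustration.
build/bin/torch-mlir-opt --convert-torch-onnx-to-torch onnx_op.mlir
```

If the lowering fails, the diagnostic printed here is usually the first thing to attach to the corresponding tracker entry.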
Torch Op E2E Tests of torch-mlir
Take the E2E test name from the tracker and run:
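A sketch of running a single torch-mlir E2E test in onnx mode, assuming the test runner is invoked from the torch-mlir checkout and that `ElementwiseAddModule_basic` stands in for the test name taken from the tracker:

```shell
# From the torch-mlir source tree (with its Python env active),
# run one E2E test in onnx mode; the filter value is a placeholder.
python -m e2e_testing.main --config=onnx -v --filter ElementwiseAddModule_basic
```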
ONNX Op Shark_TestSuite/iree_tests
Compile time Tests - #563
Runtime Tests - #583
To run the tests: build the venv by following the instructions here, then run:
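As a sketch, assuming the pytest-based runner described in the iree_tests README and a venv already built per the setup step (the test directory path is an assumption):

```shell
# From the SHARK-TestSuite checkout, with the venv activated,
# run the generated ONNX node-level op tests in parallel.
pytest iree_tests/onnx/node/generated -n auto
```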
Models Shark_TestSuite/e2eshark
The E2EShark Model Tests are tracked through #566
First, follow the setup instructions at https://github.com/nod-ai/SHARK-TestSuite/tree/main/e2eshark. There is no need to do the Turbine setup part, since we are looking at onnx mode. Then run this command (an HF_TOKEN is needed for the llama and gemma models):
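A hedged sketch of such a run, based on the e2eshark README; the flag names, build paths, and test selection below are assumptions for illustration, not the exact tracked invocation:

```shell
# From SHARK-TestSuite/e2eshark, run a model test in onnx mode.
# Build paths and the --tests value are illustrative placeholders;
# HF_TOKEN is only required for gated models such as llama and gemma.
HF_TOKEN=<your-token> python run.py --mode onnx \
  --torchmlirbuild /path/to/torch-mlir/build \
  --ireebuild /path/to/iree-build \
  --tests onnx/models/resnet50 --report
```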