[Delivery] Win ARM64 wheels + QNN #19162
Comments
Thanks for filing these issues.
Will onnxruntime-extensions be included?
onnxruntime-extensions is a separate project/package. I see you've already filed an issue at microsoft/onnxruntime-extensions#624
See https://onnxruntime.ai/docs/execution-providers/QNN-ExecutionProvider.html#pre-built-packages for ort-qnn-nightly Python package installation instructions.
With recent ONNX versions (e.g. ONNX 1.15), pip install onnx runs cleanly on Win/ARM devices.
@jywu not here 😢 I can't for the life of me get it to work.
This is with Python 3.11.9. Which Python version were you using?
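For bug reports like this, the details that determine which wheel pip will select (Python version, CPU architecture, pointer width) can be captured with a short stdlib-only snippet; nothing here is specific to onnx or onnxruntime:

```python
import platform
import struct
import sys

def env_report() -> dict:
    """Collect the interpreter/platform details relevant to wheel resolution."""
    return {
        "python": platform.python_version(),       # e.g. "3.11.9"
        "machine": platform.machine(),             # e.g. "ARM64" on Windows-on-ARM
        "pointer_bits": struct.calcsize("P") * 8,  # 64 for a 64-bit interpreter
        "implementation": sys.implementation.name, # usually "cpython"
    }

if __name__ == "__main__":
    for key, value in env_report().items():
        print(f"{key}: {value}")
```

Pasting this output into an issue removes the back-and-forth about which interpreter was actually used.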
You're right, this was previously working. Something must have broken with the onnx build on arm64. We will need to follow up and report it to the onnx project.
@jywu I'm not a Python dev. Do you think this could be a build issue? For instance, is it possible to try building it with Ninja instead of MSBuild and see if it works?
Yes, it's a build issue. The root problem is that the onnx project doesn't publish Python packages for the win/arm64 platform to PyPI.
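The failure mode described here follows from how pip resolves wheels: it only downloads a prebuilt wheel whose platform tag matches the running interpreter, and otherwise falls back to building the sdist, which is where a missing ARM64 toolchain or a broken source build surfaces. A simplified sketch of that matching convention (the tag table below is illustrative, not exhaustive; real manylinux tags also carry a policy version):

```python
def expected_wheel_tag(system: str, machine: str) -> str:
    """Map an (OS, CPU) pair to the wheel platform tag pip looks for.

    Simplified illustration of the packaging convention, not pip's
    real resolver.
    """
    tags = {
        ("windows", "amd64"): "win_amd64",
        ("windows", "arm64"): "win_arm64",
        ("linux", "x86_64"): "manylinux_x86_64",
        ("linux", "aarch64"): "manylinux_aarch64",
    }
    return tags.get((system.lower(), machine.lower()),
                    "sdist (source build required)")

# A project that publishes no win_arm64 wheel forces pip on
# Windows-on-ARM to build from the sdist, which is where the break occurs.
print(expected_wheel_tag("Windows", "ARM64"))  # win_arm64
```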
Hi @khmyznikov Qualcomm NPU (QNN-EP) with ONNX Runtime should be feasible. However, the lack of pre-built ARM64 libraries and binaries can indeed create significant obstacles. Ensuring that your environment paths are correctly set and that you have the necessary dependencies installed is crucial. For those looking to integrate QNN-EP, make sure to follow the installation instructions provided in the ONNX Runtime documentation. This setup has been effective for other models available from the Qualcomm AI Hub. Thank you.
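As a concrete sketch of the setup the comment above describes: per the ONNX Runtime QNN-EP documentation, the provider is selected by name with a backend_path option pointing at the QNN backend library (QnnHtp.dll for the Hexagon NPU). The model path is a placeholder, and this assumes the onnxruntime-qnn package is installed:

```python
def qnn_providers(backend: str = "QnnHtp.dll") -> list:
    """Provider list for InferenceSession: QNN first, CPU as fallback."""
    return [
        ("QNNExecutionProvider", {"backend_path": backend}),
        "CPUExecutionProvider",
    ]

def make_qnn_session(model_path: str):
    """Create an ORT session targeting the NPU; model_path is a placeholder."""
    import onnxruntime as ort  # requires the onnxruntime-qnn package
    return ort.InferenceSession(model_path, providers=qnn_providers())
```

Keeping CPUExecutionProvider last lets the session still run if QNN fails to initialize, which makes the "is my environment set up?" question easier to answer incrementally.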
Describe the feature request
Creating general issue to track ONNX DX on Windows ARM platform, particularly QNN.
Main problems:
Lack of pre-built ARM libs/binaries/wheels.
The necessity of building even our own tools from source creates huge obstacles and difficulties for DX. Launching basic SDK demos and examples like this requires a lot of time and mental resources: days, instead of the minutes or hours it should take.
The necessity of mixing/switching between x64 and ARM environments to run basic demos.
This is partly a result of the previous problem. Qualcomm SDK and onnxruntime demos for QNN are not ARM-friendly; some demos are impossible to run on an ARM machine even under x64 emulation (the emulator lacks AVX instruction support).
Feels like a development kit rather than a development device.
Due to the above issues, the Python/ML DX makes it feel like you are still using an experimental device, and it requires an additional x64 machine for development. This is not the case for, e.g., NodeJS/web development, where everything is already cross-platform or ARM-native.
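One practical consequence of the emulation problem above is that it is easy to end up running an x64 Python under emulation without noticing. A hedged heuristic for detecting that situation from environment variables (the exact variables Windows sets under x64-on-ARM64 emulation vary by Windows version, so treat this as a sketch, not a reliable check):

```python
import os

def looks_emulated(env: dict) -> bool:
    """Heuristic: an x64 interpreter on Windows-on-ARM typically reports
    AMD64 as its own architecture while the OS-native one is ARM64.
    (Assumption: variable names/values depend on the Windows version.)"""
    own = env.get("PROCESSOR_ARCHITECTURE", "").upper()
    native = env.get("PROCESSOR_ARCHITEW6432", "").upper()
    return own == "AMD64" and native == "ARM64"

# A process seeing this combination is likely running emulated.
print(looks_emulated({"PROCESSOR_ARCHITECTURE": "AMD64",
                      "PROCESSOR_ARCHITEW6432": "ARM64"}))  # True

# Check the current process (result depends on the machine).
print(looks_emulated(dict(os.environ)))
```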
Opportunities to improve current state:
This can be a backup plan to cover gaps that we can't cover ourselves.
Describe scenario use case
The ability to use Windows ARM64 as main machine for ML development, not only as inference target.