Does llama.cpp support NNAPI acceleration? #7216
thisisfangsheng started this conversation in General
As the title says: my hardware already supports NNAPI, so I want to know whether llama.cpp could also benefit from it. Many thanks.

Replies: 1 comment

- Not yet. See #2687 for tracking.