Clarification Needed on TensorFlow Lite Version and GPU Acceleration Configuration #83
Comments
Guten Tag, Hans here! 🍻 Thank you for your kind words and for sponsoring the repo! Your questions are quite relevant.
If you have further questions or specific logs related to issues while using this library, feel free to provide ze details, and we can investigate more!
Hey @bglgwyng - thank you for the kind words. Well, to be honest, I don't have an answer to your questions. I built this library a while ago fairly quickly, and I didn't investigate GPU delegates any further. It worked, but I didn't end up using this library in any of my RN apps. I'd appreciate it if you could find answers to those questions by just trying things out; maybe we can bump the version to the latest release, and maybe we can simplify the GPU delegate setup. PRs are of course welcome! Thanks!
IIRC I tried to integrate TFLite from source, to avoid having to depend on a Pod or Gradle input. Building from source was a bit too complicated because I couldn't get it to build. That approach would also have worked with simulators, and it's a shared C++ codebase. I haven't pursued this further, but it would be cool if you could get it working.
@bglgwyng Hi! OpenCL is not strictly required to enable GPU acceleration. Maybe you can try building your app without it? It's just that some TFLite model operations and some configurations (like serialization) need OpenCL; the default graphics API cannot provide those, so they will fail to initialize.
@TkTioNG Thanks for the brief reply! What about the version? Do you think it is ok to bump it up? |
@bglgwyng did you find any solution? I also cannot load any model with the GPU delegate flag enabled on iOS for models trained after September 2024 (approximate date).
I don't know much about ML file formats, but I think you need to describe your model file more specifically than 'trained after August 2024'.
@bglgwyng It’s a TFLite file, trained in Vertex. The model I trained and exported prior to August works with the iOS delegate flag. The model trained after that no longer works with this library when the delegate flag is enabled. I have tried other training runs too, with the same result. So either Google changed how they export TFLite, or a bug was introduced in this package, or both? I don’t know how to debug this.
Thank you for developing this excellent library. I've been using the native TensorFlow Lite library directly with Swift/Kotlin and am now migrating my code to use react-native-fast-tflite. Having experience configuring TensorFlow Lite, I have a couple of questions regarding the Android configuration:
Version Selection: Why does this library use an older version of TensorFlow Lite (2.12.0) instead of the latest version (2.16.1)? Is there a specific reason for pinning to this older version?
GPU Acceleration Configuration: I noticed that 'uses-native-library' declarations are required to enable GPU acceleration. While I understand their necessity (as GPU loading fails without them), I'm curious about why they're needed here when they weren't required in my previous direct TensorFlow Lite implementations.
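(For reference, by these declarations I mean AndroidManifest.xml entries of roughly the following shape. The exact set of library names here is the one commonly documented for the TFLite GPU delegate on Android 12+ and is only a sketch; it may not match exactly what this library's README lists.)

```xml
<application>
  <!-- Let the app load the vendor OpenCL libraries on Android 12+ (API 31+). -->
  <!-- android:required="false" keeps the app installable on devices without them. -->
  <uses-native-library android:name="libOpenCL.so" android:required="false" />
  <uses-native-library android:name="libOpenCL-pixel.so" android:required="false" />
</application>
```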
For context, when I previously installed TensorFlow Lite directly, the following Gradle dependencies were sufficient for GPU acceleration:
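(The coordinates and versions below are illustrative rather than the exact lines from my project; they are the standard TensorFlow Lite artifacts published on Maven Central.)

```groovy
dependencies {
    // Core TensorFlow Lite runtime
    implementation "org.tensorflow:tensorflow-lite:2.16.1"
    // GPU delegate; newer releases also split out a separate -gpu-api artifact
    implementation "org.tensorflow:tensorflow-lite-gpu:2.16.1"
    implementation "org.tensorflow:tensorflow-lite-gpu-api:2.16.1"
}
```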
Is the need for these additional configurations related to the older TensorFlow Lite version (2.12.0) used in this library? Or am I missing something?