
Clarification Needed on TensorFlow Lite Version and GPU Acceleration Configuration #83

Open
bglgwyng opened this issue Aug 20, 2024 · 12 comments


@bglgwyng

Thank you for developing this excellent library. I've been using the native TensorFlow Lite library directly with Swift/Kotlin and am now migrating my code to use react-native-fast-tflite. Having experience configuring TensorFlow Lite, I have a couple of questions regarding the Android configuration:

  1. Version Selection: Why does this library use an older version of TensorFlow Lite (2.12.0) instead of the latest version (2.16.1)? Is there a specific reason for pinning to this older version?

  2. GPU Acceleration Configuration: I noticed that 'uses-native-library' declarations are required to enable GPU acceleration. While I understand their necessity (as GPU loading fails without them), I'm curious about why they're needed here when they weren't required in my previous direct TensorFlow Lite implementations.

For context, when I previously installed TensorFlow Lite directly, the following Gradle dependencies were sufficient for GPU acceleration:

implementation 'org.tensorflow:tensorflow-lite:2.15.0'
implementation 'org.tensorflow:tensorflow-lite-gpu:2.15.0'
implementation 'org.tensorflow:tensorflow-lite-gpu-api:2.15.0'

Is the need for these additional configurations related to the older TensorFlow Lite version (2.12.0) used in this library? Or am I missing something?
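For readers hitting the same issue: on Android 12+ (API 31), an app must declare any native GPU libraries it wants to `dlopen` at runtime in its `AndroidManifest.xml`, which is what the `uses-native-library` declarations mentioned above do. A minimal sketch (the exact set of vendor library names varies by device; `libOpenCL.so` is the common one, and the others shown here are examples):

```xml
<application>
  <!-- Needed on Android 12+ so the OS allows loading OpenCL at runtime.
       android:required="false" keeps the app installable on devices
       that ship without these vendor libraries. -->
  <uses-native-library android:name="libOpenCL.so" android:required="false" />
  <uses-native-library android:name="libGLES_mali.so" android:required="false" />
</application>
```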


Guten Tag, Hans here! 🍻

Thank you for your kind words and for sponsoring the repo! Your questions are quite relevant.

  1. Regarding ze TensorFlow Lite version, mrousavy may be using an older version for compatibility reasons with react-native-fast-tflite. This helps ensure stability and avoids potential breaking changes. It's always good to check ze changelogs for updates.

  2. As for ze GPU acceleration configuration, differences can arise between library implementations and direct TensorFlow Lite usage. It's possible that our library requires additional declarations for proper initialization.

If you have further questions or specific logs related to issues while using this library, feel free to provide ze details, and we can investigate more!

Note: If you think I made a mistake, please ping @mrousavy to take a look.

@mrousavy
Owner

Hey @bglgwyng - thank you for the kind words.

Well, to be honest, I don't have an answer to your questions. I built this library a while ago fairly quickly, and I didn't investigate GPU delegates any further. It worked, but I didn't use this library in any of my RN apps.
I know a lot of people use react-native-fast-tflite (and even a lot of my clients use it), but I haven't made any significant changes here since its release.

I'd appreciate it if you find answers to those questions by just trying stuff out; maybe we can bump the version to latest, and maybe we can simplify the GPU delegate. PRs are of course welcome!

Thanks!

@mrousavy
Owner

IIRC I tried to integrate TFLite from source, to avoid having to depend on a Pod or Gradle input. Building from source was a bit too complicated because I couldn't get it to build. That would've also worked with simulators, and it's a shared C++ codebase. I haven't pursued this further, but it would be cool if you could get that working.

@bglgwyng
Author

@TkTioNG Hello! I found that you are the author of this PR.
I asked some questions related to the PR here; could you please answer them?

@TkTioNG
Contributor

TkTioNG commented Aug 21, 2024

@bglgwyng Hi!

OpenCL is not strictly required to enable GPU acceleration. Maybe you can try to compile your app without it?
I believe that without OpenCL, the device will try to use the default graphics API instead (I can't guarantee that, though XD).

It's just that some TFLite model operations and some configurations (like serialization) need OpenCL; the default graphics API can't provide those, and they will fail to initialize.
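For comparison, in a direct Android integration the usual way to handle the "GPU may not be supported on this device" case is TFLite's `CompatibilityList`, which lets you fall back to CPU when the GPU delegate can't be used. A sketch against the `tensorflow-lite` / `tensorflow-lite-gpu` artifacts from the question above (the model file is a placeholder, and this requires the Android TFLite runtime, so it won't run off-device):

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate
import java.io.File

fun createInterpreter(modelFile: File): Interpreter {
    val compatList = CompatibilityList()
    val options = Interpreter.Options().apply {
        if (compatList.isDelegateSupportedOnThisDevice) {
            // GPU delegate with the options TFLite recommends for this device
            addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
        } else {
            // No usable GPU backend (e.g. OpenCL missing): multithreaded CPU fallback
            setNumThreads(4)
        }
    }
    return Interpreter(modelFile, options)
}
```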

@bglgwyng
Author

@TkTioNG Thanks for the quick reply! What about the version? Do you think it's OK to bump it up?

@mrousavy
Owner

#85

@lucksp

lucksp commented Dec 11, 2024

@bglgwyng did you find any solution? On iOS, I also cannot load any model with the GPU enable flag for models trained after Sept 2024 (approx. date).

@bglgwyng
Author

@lucksp #85 did you try the latest version? I'm using an old model, so my problem may not be the same as yours.

@lucksp

lucksp commented Jan 5, 2025

> @lucksp #85 did you try the latest version? I'm using an old model, so my problem may not be the same as yours.

I have tried many things. On iOS, models trained after August 2024 fail to load if I enable the CoreML delegate. So I'm a bit stuck, to say the least.

@bglgwyng
Author

bglgwyng commented Jan 6, 2025

I don't know much about ML file formats, but I think you need to give more specifics about your model file than "trained after August 2024".

@lucksp

lucksp commented Jan 6, 2025

> I don't know much about ML file formats, but I think you need to give more specifics about your model file than "trained after August 2024".

@bglgwyng It's a TFLite file, trained in Vertex. The model I trained and exported prior to August works with the iOS delegate flag. The model trained after does not work with this library anymore when the delegate flag is enabled. I have tried other trainings too, with the same result. So either Google changed how they export TFLite, or a bug was introduced to this package, or both? I don't know how to debug it.
I'd be happy to provide a test TFLite model file.

4 participants