How to run inference with a dynamic shape model? #16
Hey, this is currently not supported in Fast TFLite. It shouldn't be too tricky to add, though.
Do you plan to add this, or do you have a quick tip for implementing it myself? I'm not that familiar with C++ and couldn't find where the input tensors are fed into the interpreter or where the tensors are allocated. Thanks so much already!
The link from the docs should be a hint on how to get started. I personally don't have any plans to implement this right now unless someone pays me to :)
I managed to fix this using this patch:

```diff
diff --git a/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp b/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp
index fbdc44f..81372c7 100644
--- a/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp
+++ b/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp
@@ -170,13 +171,6 @@ TensorflowPlugin::TensorflowPlugin(TfLiteInterpreter* interpreter, Buffer model,
                                    std::shared_ptr<react::CallInvoker> callInvoker)
     : _interpreter(interpreter), _delegate(delegate), _model(model), _callInvoker(callInvoker) {
   // Allocate memory for the model's input/output `TFLTensor`s.
-  TfLiteStatus status = TfLiteInterpreterAllocateTensors(_interpreter);
-  if (status != kTfLiteOk) {
-    [[unlikely]];
-    throw std::runtime_error("Failed to allocate memory for input/output tensors! Status: " +
-                             tfLiteStatusToString(status));
-  }
-
   log("Successfully created Tensorflow Plugin!");
 }
@@ -213,9 +207,17 @@ void TensorflowPlugin::copyInputBuffers(jsi::Runtime& runtime, jsi::Object input
   }
   for (size_t i = 0; i < count; i++) {
-    TfLiteTensor* tensor = TfLiteInterpreterGetInputTensor(_interpreter, i);
     auto value = array.getValueAtIndex(runtime, i);
     auto inputBuffer = getTypedArray(runtime, value.asObject(runtime));
+    int inputDimensions[] = {static_cast<int>(inputBuffer.length(runtime))};
+    TfLiteInterpreterResizeInputTensor(_interpreter, i, inputDimensions, 1);
+    TfLiteStatus status = TfLiteInterpreterAllocateTensors(_interpreter);
+    if (status != kTfLiteOk) {
+      [[unlikely]];
+      throw std::runtime_error("Failed to allocate memory for input/output tensors! Status: " +
+                               tfLiteStatusToString(status));
+    }
+    TfLiteTensor* tensor = TfLiteInterpreterGetInputTensor(_interpreter, i);
     TensorHelpers::updateTensorFromJSBuffer(runtime, tensor, inputBuffer);
   }
 }
@@ -230,6 +232,7 @@ jsi::Value TensorflowPlugin::copyOutputBuffers(jsi::Runtime& runtime) {
     TensorHelpers::updateJSBufferFromTensor(runtime, *outputBuffer, outputTensor);
     result.setValueAtIndex(runtime, i, *outputBuffer);
   }
+
   return result;
 }
```

However, this solution has a drawback: only one-dimensional input data is supported for each input tensor, as I wasn't able to determine the dimensions of the inputBuffer.
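One possible way around the one-dimensional limitation would be to infer a full shape from the model's declared input dimensions plus the flat buffer length. The sketch below is only an illustration, not part of the patch: `inferInputShape` is a hypothetical helper, and it assumes the runtime reports a single dynamic dimension per input as -1 (some converted models report dynamic axes as 1 instead, which would need different handling).

```cpp
#include <tensorflow/lite/c/c_api.h>
#include <stdexcept>
#include <vector>

// Hypothetical helper: derive a full input shape from the tensor's declared
// dimensions and the length of the flat JS buffer. Assumes at most one
// dynamic dimension, reported as -1.
std::vector<int> inferInputShape(TfLiteInterpreter* interpreter, int32_t inputIndex,
                                 size_t bufferLength) {
  const TfLiteTensor* tensor = TfLiteInterpreterGetInputTensor(interpreter, inputIndex);
  int32_t numDims = TfLiteTensorNumDims(tensor);

  std::vector<int> shape(numDims);
  size_t knownProduct = 1;
  int dynamicAxis = -1;
  for (int32_t d = 0; d < numDims; d++) {
    int32_t dim = TfLiteTensorDim(tensor, d);
    if (dim < 0) {
      dynamicAxis = d; // dynamic dimension, filled in below
    } else {
      shape[d] = dim;
      knownProduct *= static_cast<size_t>(dim);
    }
  }

  if (dynamicAxis < 0) {
    return shape; // fully static shape, nothing to infer
  }
  if (knownProduct == 0 || bufferLength % knownProduct != 0) {
    throw std::runtime_error("Input buffer length does not match the model's input shape!");
  }
  shape[dynamicAxis] = static_cast<int>(bufferLength / knownProduct);
  return shape;
}
```

Inside copyInputBuffers this could replace the hard-coded one-dimensional resize, roughly: `auto shape = inferInputShape(_interpreter, i, inputBuffer.length(runtime));` followed by `TfLiteInterpreterResizeInputTensor(_interpreter, i, shape.data(), static_cast<int32_t>(shape.size()));`.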
Interesting. Yeah, maybe we can expose this differently by making the tensors returned from loadTensorFlowModel(..) resizable:

```js
const model = loadTensorFlowModel(..)
model.inputs[0].size = 4 // resize it
```

Or something like:

```js
const model = loadTensorFlowModel(..)
model.resizeInputTensors([4])
```
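If the `resizeInputTensors([...])` variant were chosen, the native side could wire it up as a JSI host function. This is only a rough, standalone sketch of the JSI mechanics, not a drop-in change to TensorflowPlugin: the helper name `addResizeInputTensors`, the `jsModel` object, and the single-input assumption are all hypothetical.

```cpp
// These headers would already be pulled in by the plugin's C++ sources.
#include <jsi/jsi.h>
#include <tensorflow/lite/c/c_api.h>
#include <stdexcept>
#include <vector>

using namespace facebook;

// Hypothetical helper: attach a resizeInputTensors(shape) method to a JS
// object backed by the given TFLite interpreter. Only input tensor 0 is
// resized here to keep the sketch short.
void addResizeInputTensors(jsi::Runtime& runtime, jsi::Object& jsModel,
                           TfLiteInterpreter* interpreter) {
  auto resize = jsi::Function::createFromHostFunction(
      runtime, jsi::PropNameID::forAscii(runtime, "resizeInputTensors"), 1,
      [interpreter](jsi::Runtime& rt, const jsi::Value&, const jsi::Value* args,
                    size_t count) -> jsi::Value {
        if (count < 1 || !args[0].isObject()) {
          throw std::runtime_error("resizeInputTensors expects a shape array, e.g. [1, 224, 224, 3]");
        }
        // Read the JS array of dimensions into a plain int vector.
        auto dimsArray = args[0].asObject(rt).asArray(rt);
        std::vector<int> dims(dimsArray.size(rt));
        for (size_t d = 0; d < dims.size(); d++) {
          dims[d] = static_cast<int>(dimsArray.getValueAtIndex(rt, d).asNumber());
        }

        // Resize the first input tensor, then re-allocate so the new shape
        // propagates through the graph before the next inference run.
        TfLiteInterpreterResizeInputTensor(interpreter, 0, dims.data(),
                                           static_cast<int32_t>(dims.size()));
        if (TfLiteInterpreterAllocateTensors(interpreter) != kTfLiteOk) {
          throw std::runtime_error("Failed to re-allocate tensors after resizing!");
        }
        return jsi::Value::undefined();
      });

  jsModel.setProperty(runtime, "resizeInputTensors", std::move(resize));
}
```

From JS this would then match the second proposal above, e.g. `model.resizeInputTensors([1, 224, 224, 3])` before writing the input buffers.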
Any updates on this?
Hi,
I have a model that supports dynamic shapes. TFLite fixes the input shape when converting the model, but resizing is supported from code at runtime.
Docs from TensorFlow Lite:
https://www.tensorflow.org/lite/guide/inference#run_inference_with_dynamic_shape_model
How can this be done with react-native-fast-tflite?
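For reference, the dynamic-shape flow from the linked guide looks roughly like this with the TFLite C API that react-native-fast-tflite builds on. This is a standalone sketch, not plugin code; the model path, shapes, and float input type are placeholders.

```cpp
#include <tensorflow/lite/c/c_api.h>
#include <vector>

int main() {
  // Load the model and create an interpreter (placeholder file name).
  TfLiteModel* model = TfLiteModelCreateFromFile("model.tflite");
  TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);

  // 1. Resize the input tensor to the shape of this particular input...
  const int newShape[] = {1, 128}; // e.g. a sequence of 128 tokens
  TfLiteInterpreterResizeInputTensor(interpreter, 0, newShape, 2);

  // 2. ...then (re-)allocate tensors so the new shape propagates through the graph.
  TfLiteInterpreterAllocateTensors(interpreter);

  // 3. Copy input data, run inference, read the output.
  std::vector<float> input(1 * 128, 0.0f);
  TfLiteTensor* inputTensor = TfLiteInterpreterGetInputTensor(interpreter, 0);
  TfLiteTensorCopyFromBuffer(inputTensor, input.data(), input.size() * sizeof(float));

  TfLiteInterpreterInvoke(interpreter);

  const TfLiteTensor* outputTensor = TfLiteInterpreterGetOutputTensor(interpreter, 0);
  std::vector<float> output(TfLiteTensorByteSize(outputTensor) / sizeof(float));
  TfLiteTensorCopyToBuffer(outputTensor, output.data(), TfLiteTensorByteSize(outputTensor));

  // Clean up.
  TfLiteInterpreterDelete(interpreter);
  TfLiteInterpreterOptionsDelete(options);
  TfLiteModelDelete(model);
  return 0;
}
```

The patch earlier in this thread follows the same resize-then-allocate pattern, just driven from the JS-provided input buffers.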