
How to run inference with dynamic shape model ? #16

Open
phamquyhai opened this issue Jan 8, 2024 · 6 comments

Comments

@phamquyhai

Hi bro,
I have a model that supports dynamic input shapes. TFLite does not support this when converting, but it does support it from code.

Docs from TFLite:
https://www.tensorflow.org/lite/guide/inference#run_inference_with_dynamic_shape_model

If you want to run a model with dynamic input shape, resize the input shape before running inference. Otherwise, the None shape in Tensorflow models will be replaced by a placeholder of 1 in TFLite models.

The following examples show how to resize the input shape before running inference in different languages. All the examples assume that the input shape is defined as [1/None, 10] and needs to be resized to [3, 10].

// Resize input tensors before allocate tensors
interpreter->ResizeInputTensor(/*tensor_index=*/0, std::vector<int>{3,10});
interpreter->AllocateTensors();
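
For reference, here is the same flow end-to-end with the TFLite C++ API. This is just a minimal sketch of what the docs describe, not code from this repo; the model path and the [3, 10] shape are placeholders.

#include <algorithm>
#include <memory>
#include <vector>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model and build an interpreter.
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite"); // placeholder path
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Resize the dynamic input from [1/None, 10] to [3, 10] BEFORE allocating tensors.
  interpreter->ResizeInputTensor(/*tensor_index=*/0, std::vector<int>{3, 10});
  interpreter->AllocateTensors();

  // Fill the (now 3 x 10) input, run inference, read the output.
  float* input = interpreter->typed_input_tensor<float>(0);
  std::fill(input, input + 3 * 10, 0.0f);
  interpreter->Invoke();
  float* output = interpreter->typed_output_tensor<float>(0);
  (void)output;
  return 0;
}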

How can I do this with react-native-fast-tflite?

@mrousavy
Owner

mrousavy commented Jan 9, 2024

Hey - this is currently not supported in Fast TFLite. Shouldn't be too tricky to add, though.

@mbpictures

Do you plan to add this, or do you have a quick tip for implementing it myself? I'm not that familiar with C++ and couldn't find where the input tensors are fed into the interpreter or where the tensors are allocated. Thanks so much already!

@mrousavy
Owner

The link from the docs should be a hint on how to get started. I personally don't have any plans to implement this right now unless someone pays me to :)

@mbpictures

mbpictures commented Jan 25, 2024

I managed to fix this using this patch:

diff --git a/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp b/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp
index fbdc44f..81372c7 100644
--- a/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp
+++ b/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp
@@ -170,13 +171,6 @@ TensorflowPlugin::TensorflowPlugin(TfLiteInterpreter* interpreter, Buffer model,
                                    std::shared_ptr<react::CallInvoker> callInvoker)
     : _interpreter(interpreter), _delegate(delegate), _model(model), _callInvoker(callInvoker) {
   // Allocate memory for the model's input/output `TFLTensor`s.
-  TfLiteStatus status = TfLiteInterpreterAllocateTensors(_interpreter);
-  if (status != kTfLiteOk) {
-    [[unlikely]];
-    throw std::runtime_error("Failed to allocate memory for input/output tensors! Status: " +
-                             tfLiteStatusToString(status));
-  }
-
   log("Successfully created Tensorflow Plugin!");
 }
@@ -213,9 +207,17 @@ void TensorflowPlugin::copyInputBuffers(jsi::Runtime& runtime, jsi::Object input
   }
 
   for (size_t i = 0; i < count; i++) {
-    TfLiteTensor* tensor = TfLiteInterpreterGetInputTensor(_interpreter, i);
     auto value = array.getValueAtIndex(runtime, i);
     auto inputBuffer = getTypedArray(runtime, value.asObject(runtime));
+    int inputDimensions[] = {static_cast<int>(inputBuffer.length(runtime))};
+    TfLiteInterpreterResizeInputTensor(_interpreter, i, inputDimensions, 1);
+    TfLiteStatus status = TfLiteInterpreterAllocateTensors(_interpreter);
+    if (status != kTfLiteOk) {
+      [[unlikely]];
+      throw std::runtime_error("Failed to allocate memory for input/output tensors! Status: " +
+                               tfLiteStatusToString(status));
+    }
+    TfLiteTensor* tensor = TfLiteInterpreterGetInputTensor(_interpreter, i);
     TensorHelpers::updateTensorFromJSBuffer(runtime, tensor, inputBuffer);
   }
 }
@@ -230,6 +232,7 @@ jsi::Value TensorflowPlugin::copyOutputBuffers(jsi::Runtime& runtime) {
     TensorHelpers::updateJSBufferFromTensor(runtime, *outputBuffer, outputTensor);
     result.setValueAtIndex(runtime, i, *outputBuffer);
   }
+
   return result;
 }

However, this solution has a drawback: only one-dimensional input data is supported for each input tensor, as I wasn't able to determine the dimensions of the inputBuffer.
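
One possible way around that drawback would be to keep the tensor's declared rank and derive only the dynamic first dimension from the flat JS buffer length. A rough sketch, not part of the patch above; the helper name resizeKeepingRank is made up, and it assumes only the first (batch/None) dimension is dynamic:

#include <cstddef>
#include <stdexcept>
#include <vector>
#include <tensorflow/lite/c/c_api.h>

// Resize input `inputIndex` while keeping its declared rank. Assumes only the
// first dimension is dynamic and the flat JS buffer length is a multiple of
// the remaining fixed dimensions.
void resizeKeepingRank(TfLiteInterpreter* interpreter, int32_t inputIndex, size_t bufferLength) {
  TfLiteTensor* tensor = TfLiteInterpreterGetInputTensor(interpreter, inputIndex);
  int rank = TfLiteTensorNumDims(tensor);
  if (rank < 1) {
    return; // scalar input, nothing to resize
  }
  std::vector<int> dims(rank);
  size_t fixedElements = 1;
  for (int d = 1; d < rank; d++) {
    dims[d] = TfLiteTensorDim(tensor, d); // keep the fixed dimensions as declared
    fixedElements *= static_cast<size_t>(dims[d]);
  }
  if (fixedElements == 0 || bufferLength % fixedElements != 0) {
    throw std::runtime_error("Input buffer length does not match the tensor's fixed dimensions!");
  }
  dims[0] = static_cast<int>(bufferLength / fixedElements); // derive the dynamic dimension
  TfLiteInterpreterResizeInputTensor(interpreter, inputIndex, dims.data(), rank);
}

In copyInputBuffers this could replace the single-dimension resize, with TfLiteInterpreterAllocateTensors still called afterwards.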

@mrousavy
Owner

Interesting - yeah, maybe we can expose this differently by making the tensors in inputs read-write as well? Not sure if that'd be a good API design, as it's probably not what users expect:

const model = loadTensorFlowModel(..)
model.inputs[0].size = 4 // resize it

Or something like:

const model = loadTensorFlowModel(..)
model.resizeInputTensors([4])
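
For what it's worth, on the native side a resizeInputTensors(shapes) call like that could probably delegate to something small along these lines. This is a hypothetical sketch using the TFLite C API, not an existing method of this plugin:

#include <stdexcept>
#include <vector>
#include <tensorflow/lite/c/c_api.h>

// Hypothetical helper a `model.resizeInputTensors(...)` binding could delegate to:
// resize every input to the shape passed from JS, then re-allocate once.
void resizeInputTensors(TfLiteInterpreter* interpreter,
                        const std::vector<std::vector<int>>& shapes) {
  for (size_t i = 0; i < shapes.size(); i++) {
    TfLiteInterpreterResizeInputTensor(interpreter, static_cast<int32_t>(i),
                                       shapes[i].data(),
                                       static_cast<int32_t>(shapes[i].size()));
  }
  // Allocate after all resizes, matching the ordering from the TFLite docs.
  if (TfLiteInterpreterAllocateTensors(interpreter) != kTfLiteOk) {
    throw std::runtime_error("Failed to allocate tensors after resizing inputs!");
  }
}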

@RayanMoarkech

Any updates on this?
