Image postprocessor #720
base: master
Conversation
Resolved (outdated) review threads on:
tensorflow_lite_support/cc/task/processor/image_postprocessor.h
tensorflow_lite_support/cc/task/processor/image_postprocessor.cc
const tflite::TensorMetadata* output_metadata =
    engine_->metadata_extractor()->GetOutputTensorMetadata(
        tensor_indices_.at(0));
const tflite::TensorMetadata* input_metadata =
    engine_->metadata_extractor()->GetInputTensorMetadata(
        input_indices.at(0));

// Use input metadata for normalization as fallback.
const tflite::TensorMetadata* processing_metadata =
    GetNormalizationOptionsIfAny(*output_metadata).value().has_value()
        ? output_metadata
        : input_metadata;

absl::optional<vision::NormalizationOptions> normalization_options;
ASSIGN_OR_RETURN(normalization_options,
                 GetNormalizationOptionsIfAny(*processing_metadata));
Simplify this with something like:

ASSIGN_OR_RETURN(auto normalization_options,
                 GetNormalizationOptionsIfAny(*GetTensorMetadata()));
if (!normalization_options.has_value() && input_index > -1) {
  ASSIGN_OR_RETURN(normalization_options,
                   GetNormalizationOptionsIfAny(*input_metadata));
}
Nice one! 👍
Resolved (outdated) review thread on tensorflow_lite_support/cc/task/processor/image_postprocessor.cc
} else if (is_input && metadata_extractor.GetInputTensorCount() != 1) {
  return CreateStatusWithPayload(
      StatusCode::kInvalidArgument,
      "Models are assumed to have a single input TensorMetadata.",
      TfLiteSupportStatus::kInvalidNumInputTensorsError);
} else if (!is_input && metadata_extractor.GetOutputTensorCount() != 1) {
  return CreateStatusWithPayload(
      StatusCode::kInvalidArgument,
      "Models are assumed to have a single output TensorMetadata.",
      TfLiteSupportStatus::kInvalidNumOutputTensorsError);
This will break if we use ImagePreprocessor and ImagePostprocessor on a model with multiple inputs. We should remove these checks.
metadata_extractor.GetOutputTensorCount() returns the output tensor count of the tflite model, which can be greater than one. For example, a model may take two input image tensors. You can bind an ImagePreprocessor to each of the input image tensors respectively, but not both input tensors to one ImagePreprocessor (that's what we've agreed for ImagePre(/Post)processor). However, the code will fail here, because metadata_extractor.GetOutputTensorCount() returns 2.
This is a bug in our existing code. It wasn't discovered before because all models we've supported so far have only one image input tensor, but we need to correct the logic.
And if you pass in TensorMetadata, it will be guaranteed that the number of tensors is 1 for the ImagePostprocessor.
  ASSIGN_OR_RETURN(const TensorMetadata* metadata,
-                  GetInputTensorMetadataIfAny(metadata_extractor));
+                  GetTensorMetadataIfAny(metadata_extractor, is_input));
We should get TensorMetadata in ImagePreprocessor and ImagePostprocessor respectively, and pass TensorMetadata instead of metadata_extractor into BuildImageTensorSpecs. This helps avoid the if/else statements in GetTensorMetadataIfAny that check input versus output.
@lu-wang-g Okay, but we need to use GetTensorMetadataIfAny for these checks:

if (metadata_extractor.GetModelMetadata() == nullptr ||
    metadata_extractor.GetModelMetadata()->subgraph_metadata() == nullptr)

and they require metadata_extractor. So probably we should keep it?
Resolved (outdated) review threads on:
tensorflow_lite_support/cc/task/vision/utils/image_tensor_specs.cc
tensorflow_lite_support/cc/task/processor/image_postprocessor.cc
Resolved (outdated) review thread on tensorflow_lite_support/cc/task/processor/image_postprocessor.cc
Made substantial changes. To summarize: a) I'm storing … TODO: Also need to work on some docs, any suggestions?
Continuation of #679. @lu-wang-g