Image postprocessor #720

Open
jonpsy wants to merge 19 commits into master
Conversation

jonpsy (Contributor) commented Dec 3, 2021

Continuation of #679. @lu-wang-g

Comment on lines 129 to 144
const tflite::TensorMetadata* output_metadata =
    engine_->metadata_extractor()->GetOutputTensorMetadata(
        tensor_indices_.at(0));
const tflite::TensorMetadata* input_metadata =
    engine_->metadata_extractor()->GetInputTensorMetadata(
        input_indices.at(0));

// Use input metadata for normalization as fallback.
const tflite::TensorMetadata* processing_metadata =
    GetNormalizationOptionsIfAny(*output_metadata).value().has_value()
        ? output_metadata
        : input_metadata;

absl::optional<vision::NormalizationOptions> normalization_options;
ASSIGN_OR_RETURN(normalization_options,
                 GetNormalizationOptionsIfAny(*processing_metadata));

Member
Simplify this by something like:

ASSIGN_OR_RETURN(auto normalization_options,
                 GetNormalizationOptionsIfAny(*GetTensorMetadata()));

if (!normalization_options.has_value() && input_index > -1) {
  ASSIGN_OR_RETURN(normalization_options,
                   GetNormalizationOptionsIfAny(*input_metadata));
}

jonpsy (Contributor Author)
Nice one! 👍

Comment on lines 48 to 57
} else if (is_input && metadata_extractor.GetInputTensorCount() != 1) {
  return CreateStatusWithPayload(
      StatusCode::kInvalidArgument,
      "Models are assumed to have a single input TensorMetadata.",
      TfLiteSupportStatus::kInvalidNumInputTensorsError);
} else if (!is_input && metadata_extractor.GetOutputTensorCount() != 1) {
  return CreateStatusWithPayload(
      StatusCode::kInvalidArgument,
      "Models are assumed to have a single output TensorMetadata.",
      TfLiteSupportStatus::kInvalidNumOutputTensorsError);

Member
This will break if we use ImagePreprocessor and ImagePostprocessor on a model with multiple inputs. We should remove these checks.

Member
metadata_extractor.GetOutputTensorCount() returns the output tensor count of the TFLite model, which can be greater than one. For example, a model may take two input image tensors. You can bind an ImagePreprocessor to each of the input image tensors, respectively, but not bind both input tensors to one ImagePreprocessor (that's what we agreed on for ImagePre(/Post)processor). However, the code will fail here, because metadata_extractor.GetOutputTensorCount() returns 2.

This is a bug in our existing code. It wasn't discovered before because all models we have supported so far have only one input image tensor. But we need to correct the logic.
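
A minimal sketch of the corrected check (illustrative only, not code from this PR), assuming the processor knows the single tensor index it is bound to; tensor_index is a stand-in name, and absl::StrFormat (absl/strings/str_format.h) is used for the message:

// Sketch only: validate the tensor this processor is bound to, rather than
// requiring the whole model to have exactly one input/output tensor.
const int tensor_count = is_input ? metadata_extractor.GetInputTensorCount()
                                  : metadata_extractor.GetOutputTensorCount();
if (tensor_index < 0 || tensor_index >= tensor_count) {
  return CreateStatusWithPayload(
      StatusCode::kInvalidArgument,
      absl::StrFormat("Tensor index %d is out of range: the model has %d %s "
                      "tensors with metadata.",
                      tensor_index, tensor_count,
                      is_input ? "input" : "output"),
      is_input ? TfLiteSupportStatus::kInvalidNumInputTensorsError
               : TfLiteSupportStatus::kInvalidNumOutputTensorsError);
}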

Member
And if you pass in TensorMetadata, it will be guaranteed that the number of tensors is 1 for the ImagePostprocessor.

  ASSIGN_OR_RETURN(const TensorMetadata* metadata,
-                  GetInputTensorMetadataIfAny(metadata_extractor));
+                  GetTensorMetadataIfAny(metadata_extractor, is_input));

Member
Should get the TensorMetadata in ImagePreprocessor and ImagePostprocessor respectively, and pass the TensorMetadata instead of the metadata_extractor into BuildImageTensorSpecs. This helps avoid the if/else statements in GetTensorMetadataIfAny that check whether the tensor is an input or an output.
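
A rough sketch of the direction proposed here (illustrative only; the exact BuildImageTensorSpecs overload is assumed, not taken from the PR): each processor resolves its own TensorMetadata and hands only that to the spec builder, so nothing downstream needs an is_input flag.

// Sketch only: the proposed builder takes the already-resolved TensorMetadata.
tflite::support::StatusOr<ImageTensorSpecs> BuildImageTensorSpecs(
    const tflite::TensorMetadata& tensor_metadata);

// In ImagePostprocessor::Init(), the output tensor's metadata is looked up
// directly, so no input/output branching is needed in a shared helper.
const tflite::TensorMetadata* output_metadata =
    engine_->metadata_extractor()->GetOutputTensorMetadata(
        tensor_indices_.at(0));
ASSIGN_OR_RETURN(ImageTensorSpecs output_specs,
                 BuildImageTensorSpecs(*output_metadata));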

jonpsy (Contributor Author), Dec 22, 2021
@lu-wang-g Okay but we need to use GetTensorMetadataIfAny for these checks:

  if (metadata_extractor.GetModelMetadata() == nullptr ||
      metadata_extractor.GetModelMetadata()->subgraph_metadata() == nullptr) 

and those checks require the metadata_extractor. So we should probably keep it?
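
One way the two points could be reconciled (a sketch only, not code from this PR): keep the model-level null checks where the metadata_extractor is available, and only afterwards resolve the per-tensor metadata that gets passed on; tensor_index is a stand-in name.

// Sketch only: model-level checks stay where metadata_extractor is in scope.
if (metadata_extractor.GetModelMetadata() == nullptr ||
    metadata_extractor.GetModelMetadata()->subgraph_metadata() == nullptr) {
  // No usable metadata at all; the caller falls back to defaults.
  return nullptr;
}
// Only the resolved per-tensor metadata is then handed to BuildImageTensorSpecs.
const tflite::TensorMetadata* metadata =
    is_input ? metadata_extractor.GetInputTensorMetadata(tensor_index)
             : metadata_extractor.GetOutputTensorMetadata(tensor_index);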

jonpsy (Contributor Author) commented Dec 19, 2021

Made substantial changes. To summarize:

a) I'm storing output_tensor_ in the Postprocessor class, so that Postprocess() can access the corresponding output tensor and also find its tensor_type.
b) If the output metadata doesn't have normalization parameters, fall back to the input metadata in Init(). This check is done via FindFirstNormProcessingUnit (see the sketch at the end of this comment).

TODO:
Currently, we pass the first input tensor into BuildImageTensorSpecs; instead, we should get the input_index, similar to what was done here. That is out of the scope of this PR; we can create a follow-up.

Also need to work on some docs; any suggestions?
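
A condensed sketch of the fallback described in (b) (illustrative only; FindFirstNormProcessingUnit is the helper named above, but its signature and return type here are assumed):

// Sketch only: prefer normalization params from the output metadata and fall
// back to the input metadata if the output tensor has none.
const tflite::ProcessUnit* norm_unit =
    FindFirstNormProcessingUnit(*output_metadata);
const tflite::TensorMetadata* processing_metadata =
    norm_unit != nullptr ? output_metadata : input_metadata;
ASSIGN_OR_RETURN(auto normalization_options,
                 GetNormalizationOptionsIfAny(*processing_metadata));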
