INT8 quantization for "surya" model #1977
Conversation
aleksandr-mokrov commented on 2024-04-30T10:01:15Z
Line #5: predictions = batch_text_detection([image], int8_ov_model_wrapper, processor)
Could you add a selector to choose which model to use, INT8 or FP16?

as-suvorov commented on 2024-04-30T15:24:28Z
Added.
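The selector requested above could be sketched as follows. This is a minimal illustration only: the stand-in lambdas and the `select_model` helper name are assumptions, standing in for the real FP16 and INT8 OpenVINO compiled-model wrappers used in the notebook.

```python
def select_model(precision, fp16_model, int8_model):
    """Return the model matching the requested precision string.

    In the notebook this would choose between the FP16 and INT8
    OpenVINO model wrappers; here plain callables stand in for them.
    """
    models = {"FP16": fp16_model, "INT8": int8_model}
    if precision not in models:
        raise ValueError(f"Unknown precision: {precision}")
    return models[precision]


# Stand-in callables in place of real compiled models:
fp16_model = lambda image: ("fp16", image)
int8_model = lambda image: ("int8", image)

# Pick the INT8 variant and run it on a placeholder input.
model = select_model("INT8", fp16_model, int8_model)
predictions = model("page.png")
```

In the notebook this pattern can be exposed as a dropdown widget whose value is passed as `precision`, so the rest of the detection pipeline stays unchanged.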
eaidova commented on 2024-05-01T14:00:44Z
Line #24: logits = self.ov_model(kwargs)[logits_out]
There is no need for logits_out = self.ov_model.output(0); you can simplify this to:
logits = self.ov_model(kwargs)[0]
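The simplification suggested above works because an OpenVINO inference result can be indexed by output position directly, not only by an output handle. A minimal sketch of the wrapper pattern, with a dummy result class standing in for the real OpenVINO objects (the class and method names below are illustrative assumptions, not the actual notebook code):

```python
class DummyResult:
    """Stand-in for an OpenVINO inference result, which supports
    indexing its outputs by integer position."""

    def __init__(self, outputs):
        self._outputs = list(outputs)

    def __getitem__(self, key):
        return self._outputs[key]


class OVModelWrapper:
    """Illustrative wrapper around a compiled model callable."""

    def __init__(self, ov_model):
        self.ov_model = ov_model

    def forward(self, **kwargs):
        # Before: logits_out = self.ov_model.output(0)
        #         logits = self.ov_model(kwargs)[logits_out]
        # After (suggested): index the result by position directly.
        return self.ov_model(kwargs)[0]


# A lambda stands in for the compiled model; it returns one output.
wrapper = OVModelWrapper(lambda inputs: DummyResult([[1.0, 2.0]]))
logits = wrapper.forward(pixel_values=None)
```

Both forms read the first model output; dropping the intermediate output handle just removes one line and one attribute lookup per call.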
@as-suvorov please fix code style |
Ticket: 132425