When I use the "vit_h" model to run inference on images, once the image size is too large, such as 1500×2000, I get an error saying it exceeds INT_MAX. However, when I reduce the image size, the error no longer occurs. How can I solve this issue?
Looking forward to the author's response, thank you!
Now, the error you report can potentially be helped by turning off this optimization and updating the code to use mask_to_rle_pytorch instead of mask_to_rle_pytorch_2. If this resolves the issue, I can add a flag to disable it at runtime. That said, even if it does work, I'm surprised you got this far, since the model itself seems to be limited to 1024x1024 inputs.
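For context on why the per-mask fallback can avoid the overflow: the batched encoder flattens many masks into one large tensor, whose element count can exceed INT_MAX on big images, whereas encoding one mask at a time keeps every intermediate array small. The sketch below is not the repo's actual implementation; it is a hedged NumPy illustration of the column-major run-length encoding that mask_to_rle_pytorch performs per mask (the function name and COCO-style output format match segment-anything's convention, but the body here is a simplified assumption).

```python
import numpy as np

def mask_to_rle(mask: np.ndarray) -> dict:
    """Encode one binary (H, W) mask as an uncompressed COCO-style RLE.

    Runs are counted in column-major (Fortran) order, and the counts
    list always begins with the run of zeros, so a mask that starts
    with a foreground pixel gets a leading 0. Encoding a single mask
    at a time keeps intermediate arrays at H*W elements, rather than
    num_masks * H * W as in a fully batched encoder.
    """
    h, w = mask.shape
    flat = mask.T.reshape(-1)                       # column-major flatten
    change = np.flatnonzero(flat[1:] != flat[:-1]) + 1  # run boundaries
    boundaries = np.concatenate(([0], change, [flat.size]))
    counts = np.diff(boundaries).tolist()
    if flat[0] == 1:                                # RLE starts with a zero-run
        counts = [0] + counts
    return {"size": [h, w], "counts": counts}
```

For example, a 2×2 mask `[[0, 1], [1, 1]]` flattens column-major to `[0, 1, 1, 1]` and encodes as `{"size": [2, 2], "counts": [1, 3]}`: one zero followed by three ones.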