More customization for interpolation and (especially) extrapolation would be nice to have. Interpolation here refers to how the values of projected pixels that fall into the transformed content bounds are calculated, and extrapolation refers to those that fall outside the projected content bounds. For example, rotating an image will often result in regions that need to be filled in (extrapolated).
Documentation of the current behavior:
Extrapolation:

- `Image`s: set to 0 (`warp(..., zero(T))`, see image.jl:80)
- `Mask[Multi]`s: flat extrapolation, i.e. the value at the boundary (see mask.jl:146). This avoids creating invalid class values.

Interpolation:

- `Image`s: linear interpolation
- `Mask`s: nearest-neighbor interpolation. This avoids creating invalid class values.
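These defaults can be sketched directly with Interpolations.jl. This is a sketch of the semantics only, not the library's actual code path (which goes through `warp`):

```julia
using Interpolations

img  = rand(Float32, 4, 4)
mask = [1 1; 2 2]  # class labels

# Images today: linear interpolation, zero fill outside the bounds
img_etp = extrapolate(interpolate(img, BSpline(Linear())), zero(Float32))
img_etp(10.0, 10.0)  # 0.0f0, out of bounds

# Masks today: nearest-neighbor interpolation with flat (boundary-value)
# extrapolation, so no invalid class values can appear
mask_etp = extrapolate(interpolate(mask, BSpline(Constant())), Flat())
mask_etp(10.0, 10.0)  # 2, the nearest in-bounds value
```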
To make this configurable, one could add interpolation methods to all `ProjectiveTransform`s. There are two problems with this: (1) as seen above, different items should support different behaviors and one would have to be able to pass a method for every item type, and (2) composing transforms with different interpolation methods would be a hassle.
Instead, I think interpolation and extrapolation settings should be added to the `Item` types. This would allow specifying a custom boundary condition, e.g. `Image((100, 100), interpolate=BSpline(Linear()), extrapolate=Reflect())`.
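A minimal sketch of what such an item type could look like. This is hypothetical: `ImageItem` is a made-up name to avoid clashing with the library's real `Image` type, which is defined differently.

```julia
using Interpolations

# Hypothetical item type that carries its own interpolation and
# extrapolation settings instead of the transform carrying them.
struct ImageItem{N,I,E}
    size::NTuple{N,Int}
    interpolate::I
    extrapolate::E
end

ImageItem(size::NTuple{N,Int};
          interpolate = BSpline(Linear()),
          extrapolate = Reflect()) where {N} =
    ImageItem(size, interpolate, extrapolate)

# A projective transform would read these settings off each item,
# so composing transforms stays trivial.
item = ImageItem((100, 100); interpolate = BSpline(Cubic(Line(OnGrid()))))
```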
The default behavior should also change: as in fast.ai, images should use `Reflect` extrapolation and cubic interpolation by default. `Mask`s should also use `Reflect` extrapolation to match the images.
Changing the defaults would be considered a breaking change.
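Assuming Interpolations.jl primitives, the proposed fast.ai-style image defaults would correspond to something like:

```julia
using Interpolations

img = rand(Float32, 8, 8)

# Proposed image defaults: cubic interpolation, reflection at the borders,
# so regions rotated into view mirror the image instead of being zero-filled.
etp = extrapolate(interpolate(img, BSpline(Cubic(Line(OnGrid())))), Reflect())

# Out-of-bounds lookups mirror the in-bounds content about the boundary:
# etp(0.0, j) matches etp(2.0, j), reflected about the first row.
etp(0.0, 4.0)
```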