Overflow checked arithmetics #41
Agreed that this can be an annoyance, but the alternative is annoying, too. Operations with FixedPointNumbers used to return … Let's consider alternatives. With images, one approach would be …
The problem is, usually you don't even know you should take care of overflow arithmetic when dealing with images (recall that … gives you implicit widening). Unfortunately, overflow pops up quickly as soon as you try to subtract one pixel value from another (both UFixed) and forget to convert the operands to a float. I think the only reason the FixedPointNumbers arithmetic ever gets called is as a mistake in the code, where the function one actually wanted was the floating-point arithmetic.
Actually I'd prefer the same behavior as the rational numbers (throw OverflowError, not implicitly widen):
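For reference, a small illustration of the `Rational` behavior being referred to. This is my own stand-in example, not necessarily the one originally posted, and the exact error text depends on the Julia version:

```julia
x = typemax(Int8) // Int8(1)     # 127//1 as a Rational{Int8}

x + Int8(1)//Int8(1)             # throws an OverflowError rather than silently wrapping
```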
A simpler example:

```julia
julia> x = UFixed8(0.5)
UFixed8(0.502)

julia> x*1
0.5019607843137255

julia> x*one(UFixed8)
UFixed8(0.502)
```

It is a little disturbing that multiplying by `1` and by `one(UFixed8)` give different kinds of result. One of the points I'd raise, though, is that this has very little to do with FixedPointNumbers per se. In other words, you're really asking Julia to switch to checked arithmetic for all operations, or go back to the old way of promoting all integer arithmetic to the machine width (…). For the record, from Matlab you get this:
…

...and lest you think that's better, consider:

…

which is a disaster. Compare Julia:

```julia
julia> x = UInt8[1,250]
2-element Array{UInt8,1}:
 0x01
 0xfa

julia> (x+x)-x
2-element Array{UInt8,1}:
 0x01
 0xfa
```

which is at least good mathematics.
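For comparison, the checked behavior requested in this issue (throw rather than wrap or saturate) already exists for ordinary integers via `Base.checked_add` and friends; a quick sketch with the same data:

```julia
x = UInt8[1, 250]

x + x                        # wraps: [0x02, 0xf4]
Base.checked_add.(x, x)      # throws an OverflowError on the 250 + 250 element
```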
@phlavenk, all this is to say it's not an easy problem, and that all choices have negatives. It would be lovely to find a nice syntax for "safe" arithmetic operations (i.e., ones that promote). Some experiments (julia-0.5):

```julia
julia> a = 0xff
0xff

julia> b = 0xf0
0xf0

julia> a + b
0xef

julia> safeplus(x::UInt8, y::UInt8) = UInt16(x)+UInt16(y)
safeplus (generic function with 1 method)

julia> safe(::typeof(+)) = safeplus
safe (generic function with 1 method)

julia> a safe(+) b
ERROR: syntax: extra token "safe" after end of expression
 in eval(::Module, ::Any) at ./boot.jl:237

julia> safe(+)(a, b)
0x01ef
```

Works, but it's not particularly nice. Any ideas?
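One possible direction, purely as a sketch (a hypothetical macro, not part of any package): rewrite every call to `+` in an expression into a call to `safe(+)`, so the caller keeps ordinary infix syntax. All names here are assumed to live at the top level of the calling module:

```julia
safeplus(x::UInt8, y::UInt8) = UInt16(x) + UInt16(y)
safe(::typeof(+)) = safeplus

# Recursively rewrite `x + y` into `(safe(+))(x, y)`:
rewrite_safe(ex) = ex
function rewrite_safe(ex::Expr)
    if ex.head == :call && ex.args[1] == :+
        return Expr(:call, Expr(:call, :safe, :+), map(rewrite_safe, ex.args[2:end])...)
    end
    return Expr(ex.head, map(rewrite_safe, ex.args)...)
end

macro safe(ex)
    esc(rewrite_safe(ex))
end

# Usage:
#     a, b = 0xff, 0xf0
#     @safe a + b      # 0x01ef instead of the wrapping 0xef
```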
The reason in Base that it is safe to e.g. have an operation on two … For … Here we are sitting in between: we are dealing with "real" numbers, but with a more efficient representation than floating point for our given problem, but there is no … Unlike the solution that …
I have more than a little sympathy for this viewpoint, but also some experience with its negatives. In old versions of Julia, … One thing you can't do is "minimal" widening, i.e., have `Int8 + Int8` return an `Int16`, …
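To make the "minimal widening" idea concrete (my illustration, filling in the elided text above): widening by only one step means the result type changes with every chained operation, e.g.

```julia
# Hypothetical one-step-widening addition:
widening_add(x::Int8,  y::Int8)  = Int16(x) + Int16(y)
widening_add(x::Int16, y::Int16) = Int32(x) + Int32(y)

a, b, c = Int8(1), Int8(2), Int8(3)
s1 = widening_add(a, b)              # Int16
s2 = widening_add(s1, Int16(c))      # Int32
# The result type depends on how many operations were chained, so a reduction
# over a Vector{Int8} with such a rule could not have a predictable element type.
```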
Good points. It's definitely a tricky one. I note that some of these things, like initializing accumulators, really need fixing outside of this package, but those are also very tricky problems.
Is there a way to at least use a macro that would widen all variables of the `N0fx` types in an expression to a wider fixed-point type like `F32f32`? I must admit the learning curve for metaprogramming is too steep for me and I treat it as a kind of black-magic art, therefore I'm asking here for help. For a trivial example like computing an average (sure, I know how to write it correctly for trivial cases like this):

```julia
function avr(im_arr, blackim_arr)
    avr_im = similar(im_arr[1])
    tmp = @widen similar(im_arr[1]) # here tmp will be an array of e.g. Fixed{Int64,8}
    for (i, im) in enumerate(im_arr)
        tmp .+= @widen im .- blackim_arr[i] # some "fancy math" on pixels where underflow/overflow matters
    end
    avr_im = round(eltype(im_arr[1]), tmp / n)
end
```

The `@widen` macro would transform all the `Normed{UIntxx, xx}` into a type with the best arithmetic performance and enough bits for the sign and the value; I assume `Int64`. For now, unfortunately, to ensure correctness, I convert all input images to Float64 and do all operations with them. The downsides are the speed and the 8-times higher memory consumption, which is a problem when processing large 8/16-bit images. I like the idea behind Images.jl a lot, where image algorithms work on normalized black/white values regardless of representation. I'd be glad to use dot fusion with implicit widening and store the results again as an 8/16-bit image; this is usually sufficient precision. The only problem I'm facing is the correctness of the implementation: simple with Float64, tricky with the overflow/underflow gotchas of 8-bit integer arithmetic, and making a mistake there is pretty easy.
ImageCore includes functions like …. My general strategy for writing code like what you have above is to …
For example:

```julia
function avr(im_arr, blackim_arr)
    Tout = typeof(first(im_arr[1]) - first(blackim_arr[1]))
    avr!(similar(im_arr[1], Tout), im_arr, blackim_arr)
end

function avr!(out, im_arr, blackim_arr)
    z = zero(accumtype(im_arr..., blackim_arr...))
    for I in eachindex(out)
        tmp = z
        for i = 1:length(im_arr)
            tmp += oftype(z, im_arr[i][I]) - oftype(z, blackim_arr[i][I])
        end
        out[I] = tmp # this will throw an InexactError if there's overflow that eltype(out) can't handle
    end
    out
end
```

This has several advantages over the version you wrote above, including not creating temporary arrays at each stage. The tricky part is `accumtype`:
```julia
"""
    accumtype(arrays...) -> T

Compute a type safe for accumulating arithmetic on elements of the given `arrays`.
For "small integer" element types, type `T` will be wider so as to avoid overflow.
"""
accumtype(imgs::AbstractArray...) = _accumtype(Union{}, imgs...)

using FixedPointNumbers: floattype

_accumtype{Tout,T<:FixedPoint}(::Type{Tout}, img::AbstractArray{T}, imgs...) =
    _accumtype(promote_type(Tout, floattype(T)), imgs...)
_accumtype{Tout,T<:Base.WidenReduceResult}(::Type{Tout}, img::AbstractArray{T}, imgs...) =
    _accumtype(promote_type(Tout, widen(T)), imgs...)
_accumtype{Tout,T}(::Type{Tout}, img::AbstractArray{T}, imgs...) =
    _accumtype(promote_type(Tout, T), imgs...)
# This is called when we've exhausted all the inputs
_accumtype{Tout}(::Type{Tout}) = Tout
```

This calls itself recursively so that it's type-stable. I wonder if we should put …
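For instance, a hypothetical usage check (Julia 0.6-era syntax, matching the snippet above):

```julia
using FixedPointNumbers

a = N0f8.(rand(4, 4))
b = N0f8.(rand(4, 4))

T = accumtype(a, b)    # a type wide enough to accumulate N0f8 values safely (via floattype)
z = zero(T)            # the accumulator initializer used in avr! above
```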
The pattern you suggest is highly type-unstable because of the unknown type of `z` during compilation. A slight modification of your code to

```julia
using BenchmarkTools

n = 2000; a = [UFixed8.(rand(n,n)) for i=1:20]; b = [UFixed8.(rand(n,n)) for i=1:20];

function avr2!(out, im_arr, blackim_arr, z=zero(accumtype(im_arr..., blackim_arr...)))
    for I in eachindex(out)
        tmp = z
        for i = 1:length(im_arr)
            tmp += oftype(z, im_arr[i][I]) - oftype(z, blackim_arr[i][I])
        end
        out[I] = tmp # this will throw an InexactError if there's overflow that eltype(out) can't handle
    end
    out
end

out = zeros(Float64, size(a[1])...)
@benchmark avr2!(out, a, b)
```

gives 750 ms per run, instead of 17.7 s with the `accumtype` determination inside the body. The fastest implementation is the following:
```julia
w{T<:FixedPoint}(x::T) = Fixed{Int64, 8}(x.i, 0)

function avr2!(out, im_arr, blackim_arr)
    for i = 1:length(im_arr)
        for I in eachindex(out)
            out[I] += w(im_arr[i][I]) - w(blackim_arr[i][I])
        end
    end
    out
end

out = zeros(Fixed{Int64, 8}, size(a[1])...)
@benchmark avr2!(out, a, b)
```

which gives 250 ms per run. It widens to `Fixed{Int64, 8}`, so it still uses integer arithmetic and traverses memory contiguously. The most convenient option for me would be the possibility to rewrite the inner-loop line

```julia
out[I] += w(im_arr[i][I]) - w(blackim_arr[i][I])
```

as

```julia
@widen out[I] += im_arr[i][I] - blackim_arr[i][I]
```

where the macro would do the transformation for all fixed-point numbers (expecting that a `Normed` -> `Fixed` conversion will be allowed; it is not possible now). This way we can reach speed, correctness and readability at the same time, all of which are so important for image processing.
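Just to make the request concrete, here is a rough sketch of what such a macro could look like. This is hypothetical, not part of FixedPointNumbers, and it reuses the (approximate) raw-value `w` helper from the benchmark above, again in Julia 0.6-era syntax:

```julia
using FixedPointNumbers

# Same raw-value widening helper as in the benchmark above (approximate, since
# Normed and Fixed use slightly different scales):
w{T<:FixedPoint}(x::T) = Fixed{Int64, 8}(x.i, 0)

# Walk an expression and wrap every indexing expression `a[i]` in a call to `w`:
widen_expr(ex) = ex
function widen_expr(ex::Expr)
    ex.head == :ref && return :(w($ex))          # a[i]  ->  w(a[i])
    return Expr(ex.head, map(widen_expr, ex.args)...)
end

macro widen(ex)
    esc(widen_expr(ex))
end

# Applied to the right-hand side, so the assignment target stays untouched:
#     out[I] += @widen im_arr[i][I] - blackim_arr[i][I]
```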
If you were using tuples rather than arrays for …. Also, using a ….

Nice! Yes, let's do some version of this. If you feel comfortable with it, please submit a PR, otherwise I will get to it eventually.
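Regarding the tuple suggestion just above, a brief illustration of why it helps (my gloss): with a `Vector` of images the element count is runtime data, whereas with a tuple the count and element types are part of the argument's type, so a splatted call like `accumtype(im_arr...)` can be resolved at compile time.

```julia
using FixedPointNumbers

a_vec = [N0f8.(rand(4, 4)) for _ in 1:2]        # Vector{Matrix{N0f8}}: length not in the type
a_tup = (N0f8.(rand(4, 4)), N0f8.(rand(4, 4)))  # NTuple{2,Matrix{N0f8}}: count and element types in the type
```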
To tackle this, I'm considering writing a package called CheckedArithmetic. I'd appreciate feedback about whether the strategies I'm planning to implement make sense, and whether they do as much as we reasonably can to solve the problem. TBH I am not sure how useful … Here's the proposed README:

CheckedArithmetic

This package aims to make it easier to detect overflow in numeric computations.

…
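Roughly, the kind of usage the package (as later registered, see below) provides looks like this; this is my summary rather than the proposed README text, so treat the details as assumptions and check the package's own README for the authoritative API:

```julia
using CheckedArithmetic

# Rewrite the arithmetic in an expression to overflow-checked operations,
# so problems throw instead of silently wrapping:
@checked UInt8(200) + UInt8(100)      # OverflowError

# Ask for a type that's recommended for accumulation:
T = accumulatortype(UInt8)
s = zero(T)
```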
IMO, …
As a possible alternative to …
Can you give an example of how it would be used? For example, if we use …
Thanks for the reminder about ReferenceTests! Definitely a useful resource.
What I want to say is in your past comments. For example, the sum of the pixel values of a 4096 x 4096 … I understand that I am asking for too much. However, if we just want to calculate with …
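To put rough numbers on that example (my arithmetic, filling in details that were elided above):

```julia
npix   = 4096 * 4096          # 16_777_216 == 2^24 pixels
rawmax = 255 * npix           # 4_278_190_080: largest possible sum of the raw N0f8 values
rawmax > typemax(Int32)       # true, so a 32-bit signed accumulator can overflow
# A Float32 accumulator is also delicate: integers above 2^24 are not all exactly
# representable in Float32, so a sum over this many pixels accumulates rounding error.
```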
@timholy, I have another question about …. I think …
No, I was planning to make it a dependency of FixedPointNumbers. There is no way to have a common interface without having a common package. We could split the …
I think having a size-dependence would introduce an undesirable type-instability. Moreover,

```julia
julia> function accum(s, A::AbstractArray)
           @inbounds @simd for a in A
               s += a
           end
           return s
       end
accum (generic function with 1 method)

julia> A = rand(UInt8, 100);

julia> @btime accum(0, $A)
  10.704 ns (0 allocations: 0 bytes)
12174

julia> @btime accum(UInt(0), $A)
  9.551 ns (0 allocations: 0 bytes)
0x0000000000002f8e

julia> @btime accum(UInt16(0), $A)
  12.597 ns (0 allocations: 0 bytes)
0x2f8e
```

so I think we can just use "native" types for all accumulator operations. When adding FixedPoint types it seems to make sense to use a FixedPoint type, hence …
Is there any reason not to do the following?

```julia
julia> function accum(s, A::AbstractArray)
           @inbounds @simd for a in A
               s += a
           end
           return Float64(s) # for example
       end
```

My concern is that the …
Now that CheckedArithmetic is registered, I should just push my branch, which will help clarify this discussion. |
See #146 |
Implementation status of checked arithmetic: …
Done. |
As discussed in https://github.com/JuliaLang/julia/issues/15690 with @timholy, I've recently been bitten by the inconsistency regarding overflow in FixedPoint arithmetic when dealing with Images.jl.

FixedPointNumbers pretend to be `<:Real` numbers, but their behavior regarding arithmetic is more like `Integer`. I just cannot think of any use case where one would get an advantage from the "modulo" arithmetic when dealing with numbers guaranteed to fall within the [0, 1] range. I'd be glad if my code stopped with an OverflowError exception indicating a problem in my algorithm, which would then be easily fixable by manual widening of the operands. With the current silent overflow, the algorithm just continues, ultimately giving wrong results.

Before writing a PR to introduce arithmetic that throws on overflow or underflow, I'd like to know your opinion on this change. I was also thinking of using a global "flag" to dynamically opt in to the overflow-checking behavior, but I'm worried about the branching costs. Is it possible to e.g. use traits for this behavior change?
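A minimal illustration of the silent wraparound described here, and of the checked alternative being requested (my example; `N0f8` is the newer name for `UFixed8`):

```julia
using FixedPointNumbers

x = N0f8(0.7)             # stored as a raw UInt8: round(0.7 * 255) == 178
x + x                     # raw 178 + 178 wraps modulo 256 to 100, i.e. about 0.39: silently wrong

# What this issue asks for instead, shown here on the raw values:
Base.checked_add(reinterpret(x), reinterpret(x))   # throws an OverflowError
```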