This is a philosophical question that the library needs to address at some point.
Examples of how to write "1000" using different precision in floating-point math:

| Expression | Significance | Precision |
|------------|--------------|-----------|
| 1000       | unknown      | unknown   |
| 1E3        | 1            | -3        |
| 1000.      | 4            | 0         |
| 1.000E3    | 4            | 0         |
| 1000.0     | 5            | 1         |
I use the word "Significance" to mean "number of significant digits". I'm not a PhD mathematician, so feel free to correct me. Note that Exponent = -Precision when the precision is negative.
These all have the same scalar value but different precision. Unfortunately, no precision information is stored in a Float64 per the IEEE 754 spec. This library makes a best guess by looking for trailing zeroes. Is that correct? It's more of an opinion question than something with an exact answer. This library treats the value as 1E3, or "1000 to the nearest 1000".
To specify an exact exponent, you can use NewFromFloatWithExponent instead. But this is of limited use when dealing with unknown incoming values.
But, an "Integer" in computer-science-y terms is an EXACT number. So, 1000, as an int is exactly 1000, and all three zeroes are significant. Or, written in mathy terms, it is 1.000E3. IMO, the library is handling ints in the most correct way.
If the number is being passed as a string, there is a PR to fix the interpretation: #296