This is a package for finding imprecisions in chains of floating point arithmetic.
This package provides a new type, `PrecisionCarrier{T}`, which holds both a value of a standard floating point type and an arbitrary-precision `BigFloat`. Basic arithmetic, trigonometric, and comparison functions are overloaded for this type and always operate on both the standard and the arbitrary-precision value. After some computations, the two values may (or may not) diverge, and the extent of the accumulated precision loss can be evaluated.
This is not a package to directly increase the precision of your calculations. It is only intended to find the issues. To improve precision, you can use higher precision types, use arbitrary precision packages, or rearrange terms to be more numerically stable.
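For instance (a minimal sketch in plain Julia, independent of this package), rearranging terms can avoid catastrophic cancellation entirely; the two expressions below are algebraically equivalent but behave very differently for small arguments:

```julia
x = 1.0e-8

# naive form: 1 - cos(x) cancels catastrophically for small x
naive = (1 - cos(x)) / x^2       # evaluates to 0.0 in Float64

# rearranged with the identity 1 - cos(x) == 2 * sin(x / 2)^2
stable = 2 * sin(x / 2)^2 / x^2  # evaluates to ≈ 0.5, the correct value
```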
A floating point number can be wrapped in the custom type simply by calling `precify` on it. This also works for tuples and arrays, and for custom types if an implementation is provided for them. The resulting `PrecisionCarrier` object can then be used like any `AbstractFloat` type in most cases. Finally, `significant_digits` queries the number of significant digits remaining in the variable.
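For example (a minimal sketch; the exact return structure and the elementwise broadcast of `significant_digits` are assumptions, not documented API):

```julia
using PrecisionCarriers

v = precify([0.1, 0.2, 0.3])   # wrap an array (presumably elementwise)
t = precify((1.0, 2.0))        # also works for tuples
significant_digits.(v)         # query each element via broadcasting (assumed)
```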
```julia
using PrecisionCarriers

# example function from Prof. Kahan: https://people.eecs.berkeley.edu/~wkahan/WrongR.pdf
function unstable(x, N)
    y = abs(x)
    for i in 1:N
        y = sqrt(y)
    end
    w = y
    for i in 1:N
        w = w^2
    end
    return w
end

p = precify(1.5)
significant_digits(p) # -> 15.65...

p = unstable(p, 20)
significant_digits(p) # -> 10.39...

# reset the precision carrier
reset_eps!(p)
```

This method, while helpful in many cases, is not universal and should be used with care:
- Some iterative methods (Newton's method, for a simple example) do not rely on high precision in every step, since they converge regardless of the precision of intermediate results. For these, the reported precision loss can look dramatic even though it is not actually relevant.
- The reported number of epsilons is not a measurement error and should not be used for error bars or similar. It is a rough indicator of numerical noise; for certain inputs, imprecisions can statistically cancel each other out, but this does not indicate better stability.
- Even arbitrary precision has its limits. For the `unstable` function given above, at about `N=256` even the `BigFloat` becomes unstable, and the package will incorrectly report perfect precision (because both the normal float and the big float are equally wrong). However, this should only happen in extreme cases that you are likely aware of. A similar problem can occur when a subtraction should result in exactly 0: in some cases the basic float type correctly reports `0.0`, but the `BigFloat` calculates some tiny number (like `1e-80`). This leads to the `PrecisionCarrier` reporting `ε=Inf`, because the relative error between 0 and a non-zero value is always infinite (see the sketch after this list).
- `BigFloat` is not usable on GPUs.
- The use of arbitrary precision adds considerable performance overhead.
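As a sketch of the exact-zero case mentioned above (assuming mixed arithmetic with plain `Float64` literals promotes as usual; the exact reported values are not guaranteed):

```julia
using PrecisionCarriers

p = (precify(1.0) + 1.0e-20) - 1.0
# In Float64, 1.0 + 1.0e-20 rounds back to 1.0, so the carried float is exactly 0.0,
# while the BigFloat side keeps roughly 1.0e-20. The relative error between zero and
# a non-zero value is infinite, so the carrier would report ε=Inf here.
```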
MIT © Anton Reinhard
This work was partly funded by the Center for Advanced Systems Understanding (CASUS) that is financed by Germany’s Federal Ministry of Research, Technology and Space (BMFTR) and by the Saxon Ministry for Science, Culture and Tourism (SMWK) with tax funds on the basis of the budget approved by the Saxon State Parliament.