Description
Hi,
Currently, the comparison operators defined for `AbstractIrrational` vs. `AbstractFloat` are causing problems on GPUs.

The precision of an `AbstractIrrational` is currently matched by invoking `Float(x, RoundUp/Down)` by default (lines 93 to 104 in 6e2e6d0).
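Roughly, the referenced comparison methods follow this pattern (a simplified paraphrase for illustration; the exact source at 6e2e6d0 may differ):

```julia
# Simplified paraphrase of the referenced comparisons (Float64 case only; the
# Float32/Float16 methods follow the same pattern). Since an irrational never
# equals a machine float exactly, converting it with directed rounding is
# enough to decide the strict inequality:
Base.:<(x::AbstractIrrational, y::Float64) = Float64(x, RoundUp) <= y
Base.:<(x::Float64, y::AbstractIrrational) = x <= Float64(y, RoundDown)
```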
Internally, this conversion calls `setprecision(BigFloat, p)` (lines 68 to 72 in 6e2e6d0), and that in turn depends on libmpfr, which is not supported on the GPU.
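The fallback conversion is essentially of the following shape (a sketch only: the helper name `irrational_to_float` is hypothetical, and the fixed 256-bit precision stands in for `p`):

```julia
# Sketch of the generic rounded conversion for an AbstractIrrational: evaluate
# the value as a BigFloat at elevated precision, then round to the target float
# type. Both setprecision and BigFloat go through libmpfr, which is the part
# that cannot run inside a GPU kernel.
function irrational_to_float(::Type{T}, x::AbstractIrrational, r::RoundingMode) where T<:AbstractFloat
    setprecision(BigFloat, 256) do   # libmpfr: set working precision
        T(BigFloat(x), r)            # libmpfr: evaluate, then round to T
    end
end
```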
This implementation has been causing problems downstream.

These issues shouldn't happen when a given `AbstractIrrational`'s conversion is defined statically by specializing `Float(BigFloat)`.
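For example, a concrete irrational can avoid the dynamic path entirely by providing such static specializations (a hypothetical `TwoPi` type, used purely for illustration):

```julia
# Hypothetical irrational constant whose rounded Float64 conversions are plain
# precomputed constants, so the comparison operators never need BigFloat or
# libmpfr at runtime.
struct TwoPi <: AbstractIrrational end
const twopi = TwoPi()

Base.BigFloat(::TwoPi) = 2 * BigFloat(π)      # arbitrary-precision value (CPU only)
Base.Float64(::TwoPi) = 6.283185307179586     # nearest Float64 to 2π
Base.Float64(::TwoPi, ::RoundingMode{:Up})   = 6.283185307179587
Base.Float64(::TwoPi, ::RoundingMode{:Down}) = 6.283185307179586

twopi < 6.3  # resolves via the static Float64(::TwoPi, ::RoundingMode{:Up}) method
```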
To fix this, we need to change the behavior of the comparison operators to first check whether such a `Float(BigFloat)` specialization exists, and only fall back to the dynamic precision adjustment when it does not.
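A minimal sketch of what that check could look like (the helpers `has_static_float64` and `irr_lt` are hypothetical names, not a concrete proposal for Base):

```julia
# Hypothetical check: is the rounded Float64 conversion specialized for this
# concrete irrational type, rather than only covered by the generic
# AbstractIrrational fallback?
function has_static_float64(x::AbstractIrrational)
    m = which(Float64, Tuple{typeof(x), typeof(RoundUp)})
    # A static specialization constrains the irrational argument to the
    # concrete type; the generic fallback only constrains it to AbstractIrrational.
    return m.sig <: Tuple{Type, typeof(x), RoundingMode}
end

# Hypothetical comparison that only falls back to the dynamic precision
# adjustment (and hence libmpfr) when no static specialization exists.
function irr_lt(x::AbstractIrrational, y::Float64)
    if has_static_float64(x)
        return Float64(x, RoundUp) <= y          # plain method call, GPU-friendly
    else
        return setprecision(BigFloat, 256) do    # BigFloat path, CPU only
            Float64(BigFloat(x), RoundUp) <= y
        end
    end
end
```

In practice the check would probably have to be resolved at compile time (via dispatch or a constant-foldable helper) rather than through runtime reflection, since `which` itself is not usable inside a GPU kernel.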