
Comparison operators with AbstractIrrational are GPU incompatible #51058

Open

Description

@Red-Portal

Hi,

Currently, the comparison operators defined between AbstractIrrational and AbstractFloat cause problems on GPUs.
By default, the precision of an AbstractIrrational is matched by invoking Float64(x, RoundUp)/Float64(x, RoundDown) (or the Float32 equivalents):

julia/base/irrationals.jl

Lines 93 to 104 in 6e2e6d0

<(x::AbstractIrrational, y::Float64) = Float64(x,RoundUp) <= y
<(x::Float64, y::AbstractIrrational) = x <= Float64(y,RoundDown)
<(x::AbstractIrrational, y::Float32) = Float32(x,RoundUp) <= y
<(x::Float32, y::AbstractIrrational) = x <= Float32(y,RoundDown)
<(x::AbstractIrrational, y::Float16) = Float32(x,RoundUp) <= y
<(x::Float16, y::AbstractIrrational) = x <= Float32(y,RoundDown)
<(x::AbstractIrrational, y::BigFloat) = setprecision(precision(y)+32) do
    big(x) < y
end
<(x::BigFloat, y::AbstractIrrational) = setprecision(precision(x)+32) do
    x < big(y)
end

For Float32/Float64, this conversion internally calls setprecision(BigFloat, 256):

@assume_effects :total function (t::Type{T})(x::AbstractIrrational, r::RoundingMode) where T<:Union{Float32,Float64}
    setprecision(BigFloat, 256) do
        T(BigFloat(x)::BigFloat, r)
    end
end

Both setprecision and the BigFloat conversion depend on libmpfr, which is not supported on the GPU, so this implementation has been causing problems downstream (a reproduction sketch follows below).
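A minimal sketch of how the failure can surface, assuming CUDA.jl is installed. MyIrrational is a hypothetical type made up for illustration, and whether the kernel actually fails to compile depends on whether the :total conversion constant-folds for the type in question:

# Hypothetical user-defined irrational that only provides a BigFloat conversion,
# so Float32(x, RoundUp) goes through the generic BigFloat/MPFR fallback above.
using CUDA

struct MyIrrational <: AbstractIrrational end
Base.BigFloat(::MyIrrational) = 2 * big(π)

xs = CUDA.rand(Float32, 16)

# The broadcast compiles <(::MyIrrational, ::Float32) into the GPU kernel; if the
# conversion does not constant-fold, the kernel needs setprecision/BigFloat and
# hence libmpfr, which GPU compilation cannot provide.
MyIrrational() .< xs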

These issues shouldn't happen when a given AbstractIrrational's float conversion is defined statically, i.e., by specializing the Float conversion so that it does not go through BigFloat.
To fix this, we need to change the behavior of the comparison operators to check whether such a specialization exists before trying to do dynamic precision adjustment. A sketch of what a static specialization could look like is shown below.
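For illustration only (not the proposed change to Base): a rough sketch of a static, MPFR-free specialization for the hypothetical MyIrrational defined above. The rounded endpoints are computed once on the CPU at load time (where MPFR is available), so GPU kernels only ever see plain Float32/Float64 constants:

# Workaround sketch for the hypothetical MyIrrational: precompute the correctly
# rounded endpoints on the CPU and expose them through static specializations,
# so GPU code never reaches setprecision/BigFloat.
const MYIRR_F64_UP   = Float64(BigFloat(MyIrrational()), RoundUp)
const MYIRR_F64_DOWN = Float64(BigFloat(MyIrrational()), RoundDown)
const MYIRR_F32_UP   = Float32(BigFloat(MyIrrational()), RoundUp)
const MYIRR_F32_DOWN = Float32(BigFloat(MyIrrational()), RoundDown)

Base.Float64(::MyIrrational, ::RoundingMode{:Up})   = MYIRR_F64_UP
Base.Float64(::MyIrrational, ::RoundingMode{:Down}) = MYIRR_F64_DOWN
Base.Float32(::MyIrrational, ::RoundingMode{:Up})   = MYIRR_F32_UP
Base.Float32(::MyIrrational, ::RoundingMode{:Down}) = MYIRR_F32_DOWN

These methods are more specific than the generic AbstractIrrational fallback, so the existing comparison operators pick them up without further changes; the open question in this issue is how the operators themselves could detect whether such a specialization exists.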
