Int128 support #974

Open

@maleadt

Description

CUDA GPUs do not natively support Int128 operations. LLVM can lower code that works with Int128 (https://reviews.llvm.org/rGb9fc48da832654a2b486adaa790ceaa6dba94455), but it requires compiler intrinsics for many operations:

julia> using CUDA

julia> x = widen.(CuArray(rand(Int64, 10)))
10-element CuArray{Int128, 1}:
  ...

julia> x .÷ x
ERROR: LLVM error: Undefined external symbol "__divti3"

With https://reviews.llvm.org/D34708, it should be possible to resolve those intrinsics in the current module, so we can just add them to our runtime library.
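
Since D34708 lets the compiler resolve such symbols against functions defined in the module itself, the missing intrinsics could be written in plain Julia and compiled into the runtime library. Below is a minimal sketch of what a __divti3-style implementation might look like; the function name and the hookup into the runtime are illustrative assumptions, not CUDA.jl's actual API. Like compiler-rt, it reduces signed 128-bit division to unsigned division plus a sign fixup:

# Sketch only: a __divti3-style signed Int128 division (divti3 is an assumed helper
# name, not CUDA.jl's actual runtime API). On the GPU the unsigned division used
# below would itself need a libcall-free implementation (a __udivti3 equivalent),
# which is elided here.
function divti3(a::Int128, b::Int128)::Int128
    s_a = a >> 127                               # -1 if a is negative, 0 otherwise
    s_b = b >> 127
    ua = reinterpret(UInt128, (a ⊻ s_a) - s_a)   # |a| via branch-free two's-complement abs
    ub = reinterpret(UInt128, (b ⊻ s_b) - s_b)   # |b|
    s = s_a ⊻ s_b                                # sign of the quotient
    q = reinterpret(Int128, div(ua, ub))         # unsigned division stands in for __udivti3
    return (q ⊻ s) - s                           # re-apply the sign
end

On the CPU this agrees with div(a, b), e.g. divti3(Int128(-10), Int128(3)) == -3; the remaining work would be registering it so the __divti3 symbol resolves to it during compilation.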

Metadata

Labels: cuda kernels (Stuff about writing CUDA kernels), enhancement (New feature or request)
