Conversation
One of the consequences here is that NeuralAttentionlib.jl also relies on this; I also see that Bennu.jl uses it:
# WrappedArray from Adapt for Base wrappers.
backend(::Type{WA}) where WA<:WrappedArray = backend(unwrap_type(WA))
@vchuravy Does KA.jl already support recursing into wrapped arrays for get_backend queries?
Uhm I don't think so, but we can add that.
I'm not entirely sure we want to; it would pull the whole Union mess of Adapt's WrappedArray into KA.jl. On the other hand, there's not much of an alternative right now if we want the ability to launch kernels on wrapped arrays...
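For illustration, a minimal sketch of what such a fallback could look like in KA.jl, assuming Adapt's WrappedArray union (referenced in the snippet above). Note this delegates through Base.parent on instances rather than unwrap_type on types, and it is not the library's actual implementation:

```julia
using Adapt: WrappedArray
import KernelAbstractions

# Illustrative fallback (not KA.jl's committed API): answer get_backend
# for any Base wrapper (SubArray, Adjoint, ReshapedArray, ...) by asking
# its parent. Since parent(A) may itself be wrapped, this recurses until
# it reaches the underlying storage array.
KernelAbstractions.get_backend(A::WrappedArray) =
    KernelAbstractions.get_backend(parent(A))
```

With something like this in place, get_backend(view(A, 1:2)) would return the same backend as get_backend(A).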
KernelAbstractions.isgpu(b::JLBackend) = false

function convert_to_cpu(obj::Kernel{JLBackend, W, N, F}) where {W, N, F}
    return Kernel{typeof(KernelAbstractions.CPU(; static = obj.backend.static)), W, N, F}(KernelAbstractions.CPU(; static = obj.backend.static), obj.f)
end
Can you explain? I didn't get what this was for, and it seems unused?
Unless I did something wrong, it's used for kernel configuration:
function (obj::Kernel{JLBackend})(args...; ndrange=nothing, workgroupsize=nothing)
    device_args = jlconvert.(args)
    new_obj = convert_to_cpu(obj)
    new_obj(device_args...; ndrange, workgroupsize)
end
It essentially transforms any kernel that has a JLBackend into one with a CPU(static) backend (static defaults to false) for KA execution, so we can run the GPU kernels on CPU arrays. It's needed because JLBackend is a subtype of KernelAbstractions.GPU but must actually call the CPU kernels:
struct JLBackend <: KernelAbstractions.GPU
    static::Bool
    JLBackend(; static::Bool = false) = new(static)
end
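To make the round trip concrete, here is a hedged usage sketch (the add_one! kernel is hypothetical, and it assumes JLArrays exports JLBackend as in this PR): calling a kernel instantiated on JLBackend hits the call overload above, which jlconverts the arguments and re-wraps the kernel via convert_to_cpu so it executes as a CPU kernel.

```julia
using KernelAbstractions, JLArrays

# Hypothetical example kernel, written with the standard KA macros.
@kernel function add_one!(a)
    i = @index(Global)
    a[i] += 1
end

a = JLArray(zeros(Float32, 16))
k = add_one!(JLBackend())    # Kernel{JLBackend, ...}
k(a; ndrange = length(a))    # goes through convert_to_cpu, runs as a CPU kernel
```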
Let's do this!

Woo! Great work!
Continuation of #525. Separate PR because I reverted the launch_configuration removal that's on @leios' branch.