
Conversation

@ChrisRackauckas-Claude
Contributor

Summary

  • Adds the missing `maxthreads` method for `JLBackend` in the JLArrays extension
  • Fixes `EnsembleGPUArray` compatibility with JLArrays

Problem

The JLArraysExt extension was missing the `maxthreads` method for `JLBackend`. When `EnsembleGPUArray` is used with JLArrays, the code calls `workgroupsize(backend, n)`, which in turn calls `maxthreads(backend)`. Without this method defined for `JLBackend`, users got:

MethodError: no method matching maxthreads(::JLBackend)
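For context, the dispatch chain looks roughly like the sketch below; this is illustrative pseudocode of the behavior described above, not verbatim DiffEqGPU source.

```julia
# Illustrative sketch of the call chain described above (not verbatim
# DiffEqGPU source): the ensemble launch caps the workgroup size by the
# backend's thread limit.
workgroupsize(backend, n) = min(maxthreads(backend), n)

# Each backend extension supplies that limit, e.g. the CUDA extension
# defines DiffEqGPU.maxthreads(::CUDABackend) = 256. With no such method
# for JLBackend, workgroupsize(JLBackend(), n) hit the MethodError above.
```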

Solution

Added `DiffEqGPU.maxthreads(::JLBackend) = 256` to the JLArrays extension, matching the other backend extensions (CUDA, AMDGPU, Metal, oneAPI, OpenCL), all of which define this method as 256.
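A minimal sketch of where the definition lands; the file path and module scaffold below are assumed for illustration, and only the `maxthreads` line is the actual addition.

```julia
# ext/JLArraysExt.jl (path and scaffold assumed; only the maxthreads
# definition below is what this PR adds)
module JLArraysExt

using JLArrays: JLBackend
import DiffEqGPU

# Same thread limit as the CUDA, AMDGPU, Metal, oneAPI, and OpenCL extensions.
DiffEqGPU.maxthreads(::JLBackend) = 256

end
```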

Testing

  • Verified that `EnsembleGPUArray` now works correctly with JLArrays (a minimal reproducer is sketched after this list)
  • Verified that `EnsembleGPUKernel` continues to work with JLArrays
  • Ran the JLArrays test group: all tests pass except one pre-existing flaky numerical tolerance test (0.00084 vs. the 0.0008 threshold)
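A minimal end-to-end check, sketched along the lines of the standard DiffEqGPU Lorenz ensemble example; the specific problem, trajectory count, and save points here are illustrative and are not the actual test suite.

```julia
using DiffEqGPU, JLArrays, OrdinaryDiffEq

# Standard Lorenz system used throughout the DiffEqGPU docs (illustrative;
# any in-place ODE would exercise the same code path).
function lorenz!(du, u, p, t)
    du[1] = p[1] * (u[2] - u[1])
    du[2] = u[1] * (p[2] - u[3]) - u[2]
    du[3] = u[1] * u[2] - p[3] * u[3]
end

u0 = Float32[1.0, 0.0, 0.0]
p = (10.0f0, 28.0f0, 8 / 3.0f0)
prob = ODEProblem(lorenz!, u0, (0.0f0, 10.0f0), p)
prob_func = (prob, i, repeat) -> remake(prob, p = rand(Float32, 3) .* p)
monteprob = EnsembleProblem(prob, prob_func = prob_func, safetycopy = false)

# Before this PR, the solve below failed inside workgroupsize(backend, n)
# with: MethodError: no method matching maxthreads(::JLBackend)
sol = solve(monteprob, Tsit5(), EnsembleGPUArray(JLArrays.JLBackend()),
            trajectories = 1_000, saveat = 1.0f0)
```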

Test Plan

  • Run tests with `GROUP=JLArrays`
  • Verify CI passes

cc @ChrisRackauckas

🤖 Generated with Claude Code

The JLArraysExt extension was missing the `maxthreads` method for
`JLBackend`, which caused `EnsembleGPUArray` to fail when used with
JLArrays because `workgroupsize` calls `maxthreads(backend)`.

This adds the `maxthreads` method consistent with other backend
extensions (CUDA, AMDGPU, Metal, oneAPI, OpenCL) which all define
`maxthreads(::Backend) = 256`.

This fixes JLArray interface compliance for `EnsembleGPUArray`.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@ChrisRackauckas ChrisRackauckas merged commit ddecabd into SciML:master Jan 9, 2026
4 of 25 checks passed
