I hope it's okay to cross-post this question here (originally asked on the Numba Discourse: https://numba.discourse.group/t/numba-mlir-gpu-offload/1981).
I watched the presentation announced in https://numba.discourse.group/t/numba-meeting-june-6-2023-presentation-on-numba-mlir/1959/2 and installed Numba-MLIR with:
conda install numba-mlir -c dppy/label/dev -c intel -c conda-forge -c numba
A simple example:
from numba_mlir import njit
import numpy as np
@njit(parallel=True)
def foo(a, b):
    return a + b

result = foo(np.array([1, 2, 3]), np.array([4, 5, 6]))
print(result)
runs fine. There is no documentation yet, and I don't know how to control where this script runs: on the CPU or on the GPU. What is the way to offload it to an Intel integrated GPU?