How to offload to Intel integrated GPU? #131

Open
@pauljurczak

Description

I hope it's okay to cross-post this question here; it was originally asked on the Numba discourse (https://numba.discourse.group/t/numba-mlir-gpu-offload/1981).

I watched the presentation announced in https://numba.discourse.group/t/numba-meeting-june-6-2023-presentation-on-numba-mlir/1959/2 and installed Numba-MLIR with:

conda install numba-mlir -c dppy/label/dev -c intel -c conda-forge -c numba

A simple example:

from numba_mlir import njit
import numpy as np

@njit(parallel=True)
def foo(a, b):
    return a + b

result = foo(np.array([1,2,3]), np.array([4,5,6]))
print(result)

runs fine. Since there is no documentation yet, I don't know how to control where this script executes: on the CPU or on the GPU. What is the way to offload it to an Intel integrated GPU?
