Vectors working on GPU #2035
-
What is this proposal about, in more detail? What do you want c3c / c3 to do?
-
I think it would be amazing to extend the functions that we have on vectors to work using the GPU, using the same API and type. For example, given this code:
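A minimal sketch of the kind of code I mean, using C3's `float[<N>]` vector type (the sizes here are arbitrary, just to show the element-wise operations):

```c3
module vec_simd_demo;
import std::io;

fn void main()
{
    // Fixed-size vectors: arithmetic on them is element-wise
    // and lowered to SIMD instructions by the compiler.
    float[<8>] a = { 1, 2, 3, 4, 5, 6, 7, 8 };
    float[<8>] b = { 8, 7, 6, 5, 4, 3, 2, 1 };

    float[<8>] c = a * b + a; // element-wise math, no explicit loop

    io::printfn("%s", c[0]);  // first element: 1 * 8 + 1 = 9
}
```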
Right now this would use SIMD by default, which is pretty cool. But given a sufficiently large set of values, it would be even better to use the GPU for the calculation. One area that could use this a lot is machine learning, but I can see uses in simulations and even games. I don't know how feasible it is, but something like this:
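Purely hypothetical sketch: the `@gpu` attribute below does not exist in C3, it only stands in for whatever mechanism (attribute, builtin, or stdlib call) would ask the compiler to offload the vector math:

```c3
module vec_gpu_demo;

// Hypothetical: `@gpu` is not a real C3 attribute, it just illustrates
// the proposal. The body is the same element-wise vector code as the
// SIMD version above; only where it runs would change.
fn float[<1024>] scale_add(float[<1024>] a, float[<1024>] b) @gpu
{
    return a * b + a;
}
```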
This would dispatch the calculation to the GPU, with minimal changes. I know LLVM has some support for GPUs, but this needs to be investigated. It's definitely not a MUST have, but it would be awesome. I had a project on smoke simulation using OpenGL, and writing a compute shader for it was not particularly easy; having operations like this dispatched to the GPU automatically would be huge.
-
The basic idea is for vectors to also support the GPU.
The current implementation already gives us SIMD, which is pretty good. Since the most important operations in any ML algorithm are over arrays of values, this feature would be a great selling point for that whole area. It seems to me like a natural fit for the existing vector functionality, and it would benefit various other applications besides ML.
LLVM seems to be expanding its support for targeting GPUs (see the MLIR GPU dialect: https://mlir.llvm.org/docs/Dialects/GPU/), and other projects like Triton build on LLVM as well. We would not need the whole feature set, however: we are not compiling arbitrary code to the GPU, only operations on the vector type would be able to use this functionality.