2 parents 43f26f1 + ea6b56b commit d7471b9
README.md
@@ -19,8 +19,6 @@ There is no need to write special CUDA, OpenMP or custom threading code.
Accelerator back-ends can be mixed within a device queue.
The decision which accelerator back-end executes which kernel can be made at runtime.

-The **alpaka** API is currently unstable (beta state).
-
The abstraction used is very similar to the CUDA grid-blocks-threads division strategy.
Algorithms that should be parallelized have to be divided into a multi-dimensional grid consisting of small uniform work items.
These functions are called kernels and are executed in parallel threads.
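The README text in the diff above describes alpaka's grid-blocks-threads model without showing code. As a rough illustration only (not part of this commit), the sketch below shows a kernel and its launch written against approximately the alpaka 1.x API; identifiers such as `AccCpuSerial`, `getIdx`, `Platform`, `getDevByIdx`, and `WorkDivMembers` are assumptions that vary between alpaka versions, so check the documentation of the version actually in use.

```cpp
#include <alpaka/alpaka.hpp>
#include <cstdio>

// A kernel is a function object whose operator() is templated on the
// accelerator type, so the same code can run on any enabled back-end.
struct HelloKernel
{
    template<typename TAcc>
    ALPAKA_FN_ACC void operator()(TAcc const& acc) const
    {
        // Global thread index within the one-dimensional grid.
        auto const i = alpaka::getIdx<alpaka::Grid, alpaka::Threads>(acc)[0];
        printf("Hello from thread %u\n", static_cast<unsigned>(i));
    }
};

int main()
{
    using Dim = alpaka::DimInt<1>;
    using Idx = std::size_t;
    // Serial CPU back-end chosen for illustration; other back-ends
    // (OpenMP, CUDA, ...) are selected by swapping this alias.
    using Acc = alpaka::AccCpuSerial<Dim, Idx>;

    // Device/platform setup calls differ across alpaka versions; this
    // follows the 1.x style and is an assumption, not the commit's content.
    auto const platform = alpaka::Platform<Acc>{};
    auto const dev = alpaka::getDevByIdx(platform, 0);
    auto queue = alpaka::Queue<Acc, alpaka::Blocking>{dev};

    // Work division: 4 blocks, 1 thread per block, 1 element per thread.
    auto const workDiv = alpaka::WorkDivMembers<Dim, Idx>{
        alpaka::Vec<Dim, Idx>{Idx{4}},
        alpaka::Vec<Dim, Idx>{Idx{1}},
        alpaka::Vec<Dim, Idx>{Idx{1}}};

    alpaka::exec<Acc>(queue, workDiv, HelloKernel{});
    alpaka::wait(queue);
    return 0;
}
```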