[Build Status](https://travis-ci.org/ComputationalRadiationPhysics/alpaka)
The **alpaka** library is a header-only C++11 abstraction library for accelerator development.
There is no need to write special CUDA, OpenMP or custom threading code.
Accelerator back-ends can be mixed within a device stream.
The decision which accelerator back-end executes which kernel can be made at runtime.
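As a rough illustration of that idea, here is a conceptual sketch only, **not** the actual **alpaka** API (`AxpyKernel`, `SerialAcc` and `globalThreadIdx` are made-up placeholder names): a kernel is written once as a generic function object and receives the executing accelerator as a parameter.

```C++
// Conceptual sketch only -- NOT the actual alpaka API. All names are placeholders.
#include <cstddef>
#include <iostream>
#include <vector>

// A kernel is written once as a function object templated on the accelerator type.
struct AxpyKernel
{
    template<typename TAcc>
    void operator()(TAcc const & acc, float a, float const * x, float * y) const
    {
        // The accelerator tells the kernel which work item it is processing.
        std::size_t const i = acc.globalThreadIdx();
        y[i] += a * x[i];
    }
};

// A trivial sequential "back-end": any type providing the same interface could run the kernel.
struct SerialAcc
{
    std::size_t idx = 0;
    std::size_t globalThreadIdx() const { return idx; }
};

int main()
{
    std::vector<float> x(8, 2.0f), y(8, 1.0f);
    SerialAcc acc;
    AxpyKernel kernel;
    for(acc.idx = 0; acc.idx < x.size(); ++acc.idx)
    {
        kernel(acc, 3.0f, x.data(), y.data()); // the kernel code is identical for every back-end
    }
    std::cout << y[0] << std::endl; // prints 7
    return 0;
}
```

The point of the sketch is only that the kernel body contains no CUDA, OpenMP or threading code; the executing back-end is injected from the outside, which is what allows the back-end decision to be made at runtime.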
The **alpaka** API is currently unstable (beta state).
The abstraction used is very similar to the CUDA grid-blocks-threads division strategy.
Algorithms that should be parallelized have to be divided into a multi-dimensional grid consisting of small uniform work items.
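The following plain C++ sketch (not alpaka code; the sequential loops only stand in for the parallel execution a real back-end would provide) shows how a one-dimensional index range is divided into a grid of blocks of threads:

```C++
// Conceptual illustration of the grid-blocks-threads decomposition (plain C++, not alpaka code).
#include <cstddef>
#include <iostream>

int main()
{
    std::size_t const problemSize = 1024; // number of uniform work items
    std::size_t const blockSize   = 128;  // threads per block
    std::size_t const gridSize    = (problemSize + blockSize - 1) / blockSize; // blocks per grid

    for(std::size_t blockIdx = 0; blockIdx < gridSize; ++blockIdx)         // a back-end may run blocks in parallel
    {
        for(std::size_t threadIdx = 0; threadIdx < blockSize; ++threadIdx) // ... and threads within a block in parallel
        {
            std::size_t const globalIdx = blockIdx * blockSize + threadIdx;
            if(globalIdx < problemSize)
            {
                // process work item globalIdx ...
            }
        }
    }
    std::cout << gridSize << " blocks x " << blockSize << " threads" << std::endl;
    return 0;
}
```

A back-end is then free to map the grid-blocks and block-threads levels onto whatever parallelism the hardware offers, as listed in the back-end table below.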
Software License
----------------
Documentation
-------------
The [general documentation](doc/markdown/Index.md) is located within the `doc/markdown` subfolder of the repository.
The [source code documentation](http://computationalradiationphysics.github.io/alpaka/) is generated with [doxygen](http://www.doxygen.org).
Accelerator Back-ends
---------------------

|Accelerator Back-end|Lib/API|Devices|Execution strategy grid-blocks|Execution strategy block-threads|
|---|---|---|---|---|
|OpenMP 2.0+ threads|OpenMP 2.0+|Host CPU (multi core)|sequential|parallel (preemptive multitasking)|
|OpenMP 4.0+ (CPU)|OpenMP 4.0+|Host CPU (multi core)|parallel (undefined)|parallel (preemptive multitasking)|
|std::thread|std::thread|Host CPU (multi core)|sequential|parallel (preemptive multitasking)|
|Boost.Fiber|boost::fibers::fiber|Host CPU (single core)|sequential|parallel (cooperative multitasking)|
|TBB 2.2+ blocks|TBB 2.2+|Host CPU (multi core)|parallel (preemptive multitasking)|sequential (only 1 thread per block)|
|CUDA 7.0+|CUDA 7.0+|NVIDIA GPUs SM 2.0+|parallel (undefined)|parallel (lock-step within warps)|
Supported Compilers
-------------------
This library uses C++11 (or newer when available).
The **alpaka** library itself requires only header-only libraries.
However, some of the accelerator back-end implementations require Boost libraries that have to be built.
When an accelerator back-end using *Boost.Fiber* is enabled, Boost 1.62+ is required.
`boost-fiber`, `boost-context` and all of their dependencies have to be built in C++11 mode, e.g. `./b2 cxxflags="-std=c++11"`.
When an accelerator back-end using *CUDA* is enabled, version *7.0* of the *CUDA SDK* is the minimum requirement.
*NOTE*: When using nvcc as the *CUDA* compiler, the *CUDA accelerator back-end* cannot be enabled together with the *Boost.Fiber accelerator back-end* due to bugs in the nvcc compiler.
*NOTE*: When using clang as a native *CUDA* compiler, the *CUDA accelerator back-end* cannot be enabled together with any *OpenMP accelerator back-end* because this combination is currently unsupported.
When an accelerator back-end using *OpenMP* is enabled, the compiler and the platform have to support the corresponding minimum *OpenMP* version.
When an accelerator back-end using *TBB* is enabled, the compiler and the platform have to support the corresponding minimum *TBB* version.