Releases: alpaka-group/alpaka

1.3.0 The Grooming Release

04 Jul 10:41

This is primarily a bugfix release with a focus on SYCL changes.
It addresses several SYCL-specific issues, including fixes to the parallel loop patterns for the Intel FPGA SYCL backend, corrections to the SYCL index order, and updates to template arguments of SYCL buffer specializations.
Additionally, the release includes changes to SYCL attributes and protections for __SYCL_TARGET macros.
Overall, this release aims to improve stability and performance, particularly for SYCL-related functionalities.

2.0.0 Concepts Ahead

25 Jun 08:11

This release brings significant new features and essential bug fixes.
This version introduces C++20 compatibility, enabling modern coding practices.
With Boost now an optional dependency, developers gain greater flexibility and simplified dependency management.
The CI environment has been enhanced, featuring updated macOS runners and oneAPI 2025.0.
Critical bug fixes include improvements to the SYCL backend.
Additionally, fixes in CUDA/HIP mode and corrections to math functions ensure reliable performance and accurate results.

1.2.0 Easy Indexing Iterators

02 Oct 07:39

This release includes several enhancements, updates, and fixes:

  • Introduced a new index iterator to streamline iteration over kernel index domains (a minimal sketch follows this list). See include/alpaka/exec for more details.
  • Support was added for mapped memory allocation and device global variables in the SYCL backend.
  • Fixed the broken [get|is]ValidWorkDiv*() functions.
  • Please note that this will be the final release to support C++17.
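
As a rough illustration, here is a minimal kernel sketch of the new index iterator. It assumes the iterator is exposed as alpaka::uniformElements (under include/alpaka/exec) and that it yields the linear element indices assigned to the calling thread; exact names and signatures may differ.

    #include <alpaka/alpaka.hpp>

    // Element-wise kernel using the new index iterator. alpaka::uniformElements(acc, n)
    // is assumed to hand each thread the linear indices it is responsible for
    // (grid-stride style), so no manual index arithmetic is needed.
    struct ScaleKernel
    {
        template<typename TAcc, typename TIdx>
        ALPAKA_FN_ACC void operator()(TAcc const& acc, float* out, float const* in, TIdx n) const
        {
            for(auto i : alpaka::uniformElements(acc, n))
            {
                out[i] = 2.0f * in[i];
            }
        }
    };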

View the full changelog here.

1.1.0: One One Zero

18 Jan 13:56

This release features small additions, changes, and fixes, including:

  • Warp function support for shuffle up, down, and xor (a short sketch follows this list).
  • Named access to vector components via .x(), .y(), and so on.
  • CMake's native HIP support is used to improve compatibility with future HIP updates.
  • CMake presets for the alpaka backends simplify integration into your favorite IDE.
  • alpaka-ls helps you see all available backends on your system.
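
The following sketch shows the new warp shuffle variants together with the named vector component access; the function names alpaka::warp::shfl_up / shfl_xor and their (value, offset/mask) argument order are assumptions based on the release notes.

    #include <alpaka/alpaka.hpp>
    #include <cstdint>

    struct WarpDemoKernel
    {
        template<typename TAcc>
        ALPAKA_FN_ACC void operator()(TAcc const& acc, std::int32_t* out) const
        {
            auto const idx = alpaka::getIdx<alpaka::Grid, alpaka::Threads>(acc);
            auto const lane = static_cast<std::int32_t>(idx.x()); // named component access

            auto const fromBelow = alpaka::warp::shfl_up(acc, lane, 1u); // value from the lane below
            auto const partner = alpaka::warp::shfl_xor(acc, lane, 1);   // butterfly exchange with the neighbouring lane

            out[idx.x()] = fromBelow + partner;
        }
    };
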
View the full changelog here.

1.0.0: The One Release

14 Nov 12:59

This release features countless small additions, changes and fixes, including:

  • A rework of the pitches, extents and offsets APIs.
  • Experimental support for std::mdspan.
  • Platforms now need to be instantiated (see the sketch after this list).
  • The SYCL backend was moved to USM pointers, SYCL 2020 and oneAPI.
  • Removal of OpenMP 5, OpenACC and Boost.Fiber backends.
  • And much more!
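
As a host-side illustration of the reworked APIs, the sketch below instantiates a platform, queries a buffer's extents and obtains an experimental std::mdspan view; the helper names alpaka::getExtents and alpaka::experimental::getMdSpan are assumed, so check the changelog for the exact spellings.

    #include <alpaka/alpaka.hpp>

    template<typename TAcc>
    void hostSetup()
    {
        using Dim = alpaka::Dim<TAcc>;
        using Idx = alpaka::Idx<TAcc>;

        // Platforms are now objects and must be instantiated before querying devices.
        auto const platform = alpaka::Platform<TAcc>{};
        auto const dev = alpaka::getDevByIdx(platform, 0);

        // Allocate a buffer and query its extents through the reworked API.
        auto const extent = alpaka::Vec<Dim, Idx>::all(Idx{1024});
        auto buf = alpaka::allocBuf<float, Idx>(dev, extent);
        auto const extents = alpaka::getExtents(buf);

        // Experimental std::mdspan view of the buffer (requires mdspan support to be enabled).
        auto view = alpaka::experimental::getMdSpan(buf);
        static_cast<void>(extents);
        static_cast<void>(view);
    }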

View the full changelog here.

0.9.0: The SYCL Complex

21 Apr 13:32

This release features multiple new major additions:

  • A new (experimental) SYCL back-end. This adds support for Intel oneAPI hardware targets (CPUs, GPUs, FPGAs) as well as AMD/Xilinx FPGAs.
  • Support for complex numbers (a brief sketch follows this list).
  • The code base has been migrated to C++17. C++14 is no longer supported.
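
A minimal device-side sketch of the complex number support, assuming the type is named alpaka::Complex<T> and mirrors the std::complex interface:

    #include <alpaka/alpaka.hpp>

    struct ComplexKernel
    {
        template<typename TAcc>
        ALPAKA_FN_ACC void operator()(TAcc const& acc, float* out) const
        {
            alpaka::Complex<float> const a{1.0f, 2.0f};
            alpaka::Complex<float> const b{3.0f, -1.0f};
            auto const c = a * b + a; // arithmetic works as with std::complex
            out[0] = c.real();
            out[1] = c.imag();
            static_cast<void>(acc);
        }
    };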

View the full changelog here.

0.8.0: Random Access Memories

20 Dec 12:20

This release features the new portable Philox-based random number generator. In addition, there are many smaller features and compatibility changes, the most notable being:

  • The kernel language now supports memory fences (see the sketch after this list).
  • alpaka now has an experimental namespace in which we will try out unstable features. The first experimental feature is an abstraction for memory access called accessor.
  • We added support for clang 12, CUDA 11.4, GCC 11, clang as HIP compiler, Xcode 12.5.1 and Xcode 13.
  • clang < 5.0, CUDA < 9.2, GCC < 7.0, nvcc as HIP compiler, Visual Studio < 2019, Ubuntu 16.04, Xcode < 11.3.1 and Xcode < 12.4 are no longer supported.
  • Linking to the CMake alpaka::alpaka target no longer annoys users with alpaka-internal warnings.
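
Below is a minimal producer-style sketch of the new kernel memory fences; the scope tag alpaka::memory_scope::Block is an assumption (grid/device scopes should exist analogously).

    #include <alpaka/alpaka.hpp>
    #include <cstdint>

    struct ProducerKernel
    {
        template<typename TAcc>
        ALPAKA_FN_ACC void operator()(TAcc const& acc, std::uint32_t* data, std::uint32_t* flag) const
        {
            *data = 42u;                                           // publish the payload first
            alpaka::mem_fence(acc, alpaka::memory_scope::Block{}); // order the two stores within the block
            *flag = 1u;                                            // then raise the flag
        }
    };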

View the full changelog here.

0.7.0: Maximum Warp

03 Aug 14:04

This release features the new alpaka intrinsic warp::shfl (and an accordingly updated cheat sheet); a short sketch follows the list below. Apart from that, we mostly focused on maintenance and convenience changes:

  • We removed support for 32-bit Windows, Visual Studio versions older than 2019, and clang+CUDA for clang versions older than 9.
  • We now support clang 11 and CUDA 11.3.
  • We now mandate CMake 3.18 (or newer) so we can make use of CMake's native CUDA support.
  • A few CMake flags have been renamed.
  • The CUDA and HIP back-ends no longer enable -ffast-math by default.
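
A short sketch of the new warp::shfl intrinsic, broadcasting lane 0's value across the warp; the (acc, value, srcLane) argument order is an assumption.

    #include <alpaka/alpaka.hpp>
    #include <cstdint>

    struct BroadcastKernel
    {
        template<typename TAcc>
        ALPAKA_FN_ACC void operator()(TAcc const& acc, std::int32_t* out) const
        {
            auto const i = alpaka::getIdx<alpaka::Grid, alpaka::Threads>(acc)[0];
            auto const mine = static_cast<std::int32_t>(i);
            auto const fromLaneZero = alpaka::warp::shfl(acc, mine, 0); // broadcast from lane 0
            out[i] = fromLaneZero;
        }
    };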

See the full changelog here.

0.6.1: Fix CPU shared memory and rework OpenMP scheduler configuration

29 Jun 12:27

This release fixes various bugs and changes the interface for configuring the OpenMP scheduler.
A critical bug in the shared memory implementation for the CPU backends has been fixed; version 0.6.0 should therefore no longer be used.
An overview of all changes can be found in the Changelog.

0.6.0: New Backends and Usability Improvements

20 Jan 13:58

This release adds two new backends: OpenMP 5 target offload and OpenACC.

We improved the usability of the alpaka API by flattening the namespace hierarchies and renaming several constructs.
A full list of API changes is available in the changelog.
New features include warp voting functions, the ability to set the schedule for the OpenMP2Blocks backend, and simplified interfaces for atomic functions (see the sketch below).
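
The sketch below combines a warp vote with the simplified atomic interface; the exact overloads (integer predicate for alpaka::warp::all, free function alpaka::atomicAdd) are assumptions.

    #include <alpaka/alpaka.hpp>
    #include <cstdint>

    struct VoteAndCountKernel
    {
        template<typename TAcc>
        ALPAKA_FN_ACC void operator()(TAcc const& acc, float const* in, std::uint32_t* counter, std::uint32_t n) const
        {
            auto const i = alpaka::getIdx<alpaka::Grid, alpaka::Threads>(acc)[0];
            bool const positive = (i < n) && (in[i] > 0.0f);

            // Warp vote: do all lanes of this warp see a positive value?
            auto const allPositive = alpaka::warp::all(acc, positive ? 1 : 0);

            if((i < n) && !allPositive)
            {
                // Simplified free-function atomic, replacing the old trait-based call.
                alpaka::atomicAdd(acc, counter, std::uint32_t{1u});
            }
        }
    };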

For CMake-based builds, we no longer automatically enable all available backends; users now have to explicitly enable the backends they want to use.

The Read the Docs documentation was extended with a cheat sheet and Doxygen support.

This release adds compatibility with the latest CUDA releases up to 11.2.
The HIP backend is now more stable and supports HIP 3.5+.
We recommend using the latest HIP version to benefit from its fast improvements.

We fixed many bugs and improved support for the Intel C++ compiler.