Releases: NVIDIA/cutlass

CUTLASS 4.2.1

24 Sep 05:23
f3fde58

CuTe DSL

  • Bug fixes and improvements
    • Fixed an issue when running DSL code with cuda-python 13.0
    • Fixed an issue when running inductor with DSL code
    • Fixed an issue with unexpected logging when running DSL code in FlashInfer
    • Fixed the issue reported in #2647
    • Fixed an issue with conditionally defining variables outside of dynamic control flow

CUTLASS C++

  • Bypass EVT for no-smem blockwise kernels on Blackwell.
  • Rename cutlass/python/cutlass directory to cutlass/python/cutlass_cppgen.

CUTLASS 4.2.0

18 Sep 03:32

CuTe DSL

  • More Python versions are now supported for both x86-64 and aarch64, including
    • Python 3.10, 3.11, 3.12, and 3.13
  • Added a new example and an updated notebook for getting started with CuTe DSL
  • API updates
  • Bug fixes and improvements
    • Fixed cute.print_tensor for coordinate tensor
    • Fixed cute.print for tuple of layouts
    • Fixed an issue where a frozen object was not properly updated after being fully assigned in dynamic control flow
    • Fixed an issue where assigning a tuple/list element in dynamic control flow could cause a compilation failure
    • Improved error message when CUDA context is not initialized
    • Improved docstring of congruent and weakly_congruent

CUTLASS C++

  • Support for Blackwell SM103 kernels for B300 GPUs.
  • Set of examples that demonstrate the usage of the 3.x API for targeting the Blackwell SM103 architecture.
  • Set of unit tests that demonstrate the usage of Blackwell SM103 blockscaled GEMM
  • Support for Blackwell SM121 kernels for DGX Spark GPUs.
    • Shares most of its code with the Blackwell SM120 kernels.
  • Add support for heuristics-based kernel filtering and autotuning using nvidia-matmul-heuristics to find the best kernels for a given scenario.
  • Further enhance Blackwell SM100 Attention kernels in example 77.
    • Add fused reduction kernel support for cutlass MLA.
    • Add softmax skip correction.
    • Support for GQA in FMHA backward kernel.
    • Fix an issue where get_unmasked_trip_count may return a negative value.
    • Fix an issue where mbarriers are initialized with a zero arrival count.
    • Fix a corner case issue where the sequence length of q is not a multiple of tile_q.
    • Remove TMA padding for forward kernel inputs.
  • Add Blackwell SM100 kernels for MoEs (focusing on Low-Latency inference performance): example 92. It uses TMA (for weights) and CPASYNC (for tokens) to load input matrices and allow only one problem dimension to vary across groups/experts, unlike general Grouped GEMMs. Note: further API simplifications and kernel improvements are upcoming. Any feedback on API is welcome.
  • Further enhance blockwise and groupwise GEMMs on Hopper and Blackwell
    • On Blackwell SM120, a blockwise gemm kernel is added: example 87.
    • On Hopper, add K major scale factor support for SM90 blockwise kernels.
    • On Hopper, relax the restriction that the k dimension of the problem size has to be the multiple of the k dimension of the tile size.
    • On Hopper, grouped version supports the case when k = 0.
  • Support for Blackwell SM100 fp4 gemv kernels.
  • Support for Blackwell SM100 legacy mixed input GEMM kernels.
  • Support for Blackwell SM100 cpasync kernel.
  • Support Blackwell SM120 mixed input blockscaled grouped GEMM.
  • Instantiate more Blackwell kernels in the profiler.
    • Blackwell SM100 and SM103 kernels support CUTLASS_LIBRARY_INSTANTIATION_LEVEL to instantiate all possible combinations.
    • To use this feature, CUTLASS_LIBRARY_KERNELS must be non-empty. The profiler combines CUTLASS_LIBRARY_KERNELS and CUTLASS_LIBRARY_INSTANTIATION_LEVEL to instantiate specific kernels.
    • For details, see the profiler documentation.
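The two settings above are combined at CMake configure time. A minimal sketch, assuming a Blackwell SM100 build; the kernel filter string and the level value are illustrative placeholders, not recommendations:

```shell
# Hypothetical configure step: CUTLASS_LIBRARY_KERNELS narrows the kernel
# set, and CUTLASS_LIBRARY_INSTANTIATION_LEVEL expands how many variants
# of those kernels get instantiated.  Both option names come from the
# release notes above; the concrete values here are assumptions.
cmake .. \
  -DCUTLASS_NVCC_ARCHS=100a \
  -DCUTLASS_LIBRARY_KERNELS="cutlass3x_sm100*gemm" \
  -DCUTLASS_LIBRARY_INSTANTIATION_LEVEL=max
```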
  • Fix some profiler issues:
    • Modify default cluster fallback values to non-zero to avoid profiler failures when these values are not set on the command line.
    • Fix some no output and timeout issues.
    • Fix Pingpong Blockwise Hopper library generation.
  • From CUDA 13.0, the Blackwell SM101 for Thor GPUs is renamed to SM110.
    • For CUDA toolkit version < 13.0, SM101 is still used for Thor GPUs.
    • For CUDA toolkit version >= 13.0, SM110 is used for Thor GPUs and SM101 is no longer valid.
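For build scripts that must work on either side of this boundary, a minimal sketch (CUTLASS_NVCC_ARCHS is CUTLASS's standard architecture option; the version-parsing one-liner is an assumption about the local nvcc output format):

```shell
# Select the Thor architecture ID based on the installed CUDA toolkit:
# SM101 for CUDA < 13.0, SM110 for CUDA >= 13.0.
CUDA_MAJOR=$(nvcc --version | sed -n 's/.*release \([0-9]*\)\..*/\1/p')
if [ "${CUDA_MAJOR}" -ge 13 ]; then
  THOR_ARCH=110a
else
  THOR_ARCH=101a
fi
cmake .. -DCUTLASS_NVCC_ARCHS="${THOR_ARCH}"
```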
  • Rename legacy Python API package from cutlass to cutlass_cppgen and add Blackwell EVT support to legacy Python interface.
    • Restructured the C++ Blackwell SM100 Collective Epilogue Builder to work with the Python interface's EpilogueDescriptors.
    • Added Blackwell SM100 EVT Emitter on the Python side and routed most emission through Hopper SM90 Emitter.
    • Added some support for running SM100 kernels via the Python interface.
  • CuTe changes:
    • Fix inaccurate GridDim calculation in the CuTe tutorial.
    • Add movmatrix support.
    • Fix smallest MMA-N allowed for Blackwell fp8 and fp16 gemm kernels.
    • Support fp16 accumulator for SM89 fp8 MMA.
    • Shorten nullspace implementation.
    • Isolate and comment on cosize hacks.
    • Important documentation correction: E<0,1> == 1@0@1.
  • Fix some kernel issues:
    • Fix Hopper SM90 group gemm kernel to only use the commit group and wait group instead of also waiting on mbarriers.
    • Fix a tiny bug when K is large for Blackwell SM103 fp4 grouped GEMM kernel.
  • Add new unit tests.
  • Various improvements and fixes from the community and CUTLASS team. Thanks to everyone who submitted PRs!
  • Optimal code generation with CUDA toolkit version 13.0U1.

CUTLASS 4.1.0

28 Jul 03:57
e51efbf

CuTe DSL

CUTLASS C++

  • Further enhance Blackwell SM100 Attention kernels in example 77.
    • Add variable sequence length support for FMHA Backward kernel.
    • Add varlen test support to Backward runner.
    • Support empty batch sequences.
  • Replace subbyte_iterator with cute::recast_ptr when constructing logical iterators/arrays.
  • CuTe changes:
    • Rewrite ArithTuple and ScaledBasis for robustness and clarity.
    • Remove buggy and kludgy get_layoutA|B|C_MN and friends from Atoms/TiledX.
    • Factor out print_latex and friends and rewrite.
    • Factor out print_svg and friends and rewrite.
  • Support Blackwell SM100 SIMT packed fp32x2 kernels.
  • Support residual add for implicit gemm kernels.
  • Various fixes for CUTLASS C++ Python interface's EVT tracer:
    • Add a verifier for SM90 to report invalid inputs.
    • When adding an edge to the graph, if the edge already exists, add an identity compute node to avoid having multiple parallel edges.
    • Register the tanh, sigmoid, exp, and gelu operations with the Python AST frontend.
    • Replace the NotImplementedError by packing all nodes into a single topological visitor node as a fallback.
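The parallel-edge workaround above can be sketched in plain Python. This is a toy DAG, not the actual EVT tracer; the class and the generated node names are illustrative:

```python
# Toy DAG that mimics the EVT tracer's trick: when an edge between two
# nodes already exists, route the new edge through a freshly created
# identity compute node so the graph never contains parallel edges.
class Graph:
    def __init__(self):
        self.edges = set()      # set of (src, dst) pairs
        self._n_identity = 0    # counter for generated identity nodes

    def add_edge(self, src, dst):
        if (src, dst) in self.edges:
            # Duplicate edge: insert an identity node between src and dst.
            self._n_identity += 1
            mid = f"identity_{self._n_identity}"
            self.edges.add((src, mid))
            self.edges.add((mid, dst))
            return mid
        self.edges.add((src, dst))
        return dst

g = Graph()
g.add_edge("x", "mul")          # first x -> mul edge is kept as-is
via = g.add_edge("x", "mul")    # second one is routed via identity_1
```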
  • Fix profiler bugs in exhaustive perf search.
    • Fix incorrect cluster shape output issue when doing exhaustive search.
    • Fix a bug in profiler grouped GEMM for setting tile scheduler swizzles, cluster shapes, and raster orders.
  • Fix some profiler issues.
    • Complete the reference for Blackwell blockwise gemm kernels.
    • Fix incorrect regex logic for L1 test.

CUTLASS 4.0.0

27 Jun 14:17
b995f93

CuTe DSL

CuTe DSL is a Python DSL centered around CuTe's abstractions
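As a rough illustration of the layout abstraction CuTe is built around (plain Python, not the DSL's actual API): a layout pairs a shape with strides and maps a logical coordinate to a linear index.

```python
# Minimal model of a CuTe-style layout: shape + stride, with
# index(coord) = sum(coord[i] * stride[i]).  Illustrative only; the
# real CuTe DSL exposes far richer, hierarchical layouts.
class Layout:
    def __init__(self, shape, stride):
        assert len(shape) == len(stride)
        self.shape, self.stride = shape, stride

    def __call__(self, *coord):
        # Map a logical coordinate to a linear memory index.
        assert all(0 <= c < s for c, s in zip(coord, self.shape))
        return sum(c * d for c, d in zip(coord, self.stride))

# A 4x8 column-major layout: rows have unit stride, columns stride 4.
col_major = Layout(shape=(4, 8), stride=(1, 4))
```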

CUTLASS C++

CUTLASS 3.9.2

04 May 04:25
ad7b2f5

  • Fixed Blockwise and Groupwise GEMM hang issue when problem size K is 128.
  • Optimal code generation with CUDA toolkit versions 12.9.

CUTLASS 3.9.1

01 May 04:29
f535c33

  • Fixed Group Gemm hang issue in CUTLASS 3.x
  • Improved Hopper Blockwise and Groupwise GEMM performance.

CUTLASS 3.9.0

25 Apr 01:53
e94e888

CUTLASS 3.8.0

21 Feb 05:32
afa1772

CUTLASS 3.8 is the first release that supports the NVIDIA Blackwell SM100 architecture.
For a background on Blackwell's new features, please consult the PTX documentation for CUDA 12.8.

Note: CUTLASS 3.x builds are known to be broken on Windows platforms for all CUDA toolkits.
The CUTLASS team is working on a fix.

CUTLASS 3.7.0

18 Jan 15:07
b78588d

  • A new Hopper blockwise scaling FP8 GEMM where the operands and block scaling tensor are staged via shared memory.
  • Distributed GEMM is an experimental pipelined Tensor Parallelism implementation that utilizes existing CUTLASS kernels and CUDA runtime features, and can hide most of the communication behind computation.
  • Improved persistent grid launch for Hopper kernels with large cluster sizes (>= 4) using the new make_kernel_hardware_info API, as shown in example 48.
  • Enabled high precision accumulation for Hopper FP8 Sparse GEMM.

CUTLASS 3.6.0

25 Dec 22:19
bf9da7b
