cuDNN FE is the modern, open-source entry point to the NVIDIA cuDNN library and to high-performance open-source kernels. It provides a header-only C++ library and a Python interface for accessing the cuDNN Graph API and the open-source kernels.
We are open-sourcing kernels based on customer needs, with the goal of educating developers and enabling them to customize as needed.
We are now shipping OSS kernels, allowing you to inspect, modify, and contribute to the core logic. Check out our latest implementations:
- GEMM + Amax: Optimized FP8 matrix multiplication with absolute maximum calculation.
- GEMM + SwiGLU: High-performance implementation of the SwiGLU activation fused with GEMM.
- Grouped GEMM + GLU: Unified grouped GEMM GLU API supporting dense and discrete MoE weight layouts.
- Grouped GEMM + dGLU: Unified grouped GEMM dGLU backward API supporting dense and discrete MoE weight layouts.
- Grouped GEMM + SwiGLU: SwiGLU activation fused with Grouped GEMM.
- Grouped GEMM + dSwiGLU: dSwiGLU activation fused with Grouped GEMM.
- Discrete Grouped GEMM + SwiGLU: Per-expert-pointer SwiGLU grouped GEMM for MoE workloads without weight packing.
- Discrete Grouped GEMM + dSwiGLU: Per-expert-pointer dSwiGLU backward grouped GEMM for MoE workloads without weight packing.
- Grouped GEMM + Quant: Legacy dense-only grouped GEMM quant API for MoE FC2/dFC1 workloads.
- Grouped GEMM + Quant (Unified): Unified grouped GEMM quant API with per-row gating for MoE FC2/dFC1 workloads.
- NSA: Native Sparse Attention, as described in the paper "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention".
- SDPA Backward (SM100, D=256): SDPA backward pass for head dimension 256 on SM100.
- cuDNN SDPA Fprop: Open-sourced Hopper and Blackwell forward-propagation kernels with stats.
- Fused RMSNorm + SiLU: Implementation of a fused kernel of RMS normalization followed by SiLU (Swish) activation.
- SDPA PyTorch Op: PyTorch custom operator for cuDNN-accelerated Scaled Dot-Product Attention with autograd and `torch.compile` support (see the illustration after this list).
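The custom SDPA operator itself ships with this repo (see `samples/python`). As a related illustration only (this is stock PyTorch's built-in cuDNN attention backend, not this repo's custom op), PyTorch 2.4+ can route `scaled_dot_product_attention` through cuDNN like so:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

# Half-precision Q/K/V on the GPU: (batch, heads, seq_len, head_dim)
q, k, v = (torch.randn(2, 8, 1024, 128, device="cuda", dtype=torch.float16)
           for _ in range(3))

# Restrict SDPA to the cuDNN attention backend inside this block.
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```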
Beyond the kernels, the library offers:
- Unified Graph API: Create reusable, persistent `cudnn_frontend::graph::Graph` objects to describe complex subgraphs (see the Python sketch below).
- Ease of Use: Simplified C++ and Python bindings (via `pybind11`) that abstract away the boilerplate of the backend API.
- Performance: Built-in autotuning and support for the latest NVIDIA GPU architectures.
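To make the Unified Graph API concrete, here is a minimal sketch of building and executing a matmul graph through the Python bindings. It is modeled on the patterns in `samples/python`; exact call names and enums can differ between versions, so treat it as illustrative rather than canonical:

```python
import cudnn
import torch

handle = cudnn.create_handle()

# A persistent, reusable graph object describing C = A @ B.
graph = cudnn.pygraph(
    handle=handle,
    io_data_type=cudnn.data_type.HALF,
    compute_data_type=cudnn.data_type.FLOAT,
)

# Symbolic tensors: (1, 64, 32) @ (1, 32, 16) -> (1, 64, 16)
A = graph.tensor(name="A", dim=[1, 64, 32], stride=[64 * 32, 32, 1])
B = graph.tensor(name="B", dim=[1, 32, 16], stride=[32 * 16, 16, 1])
C = graph.matmul(name="matmul", A=A, B=B)
C.set_output(True)

# Validate, lower, pick a plan via heuristics, and compile.
graph.build([cudnn.heur_mode.A])

a = torch.randn(1, 64, 32, dtype=torch.float16, device="cuda")
b = torch.randn(1, 32, 16, dtype=torch.float16, device="cuda")
c = torch.empty(1, 64, 16, dtype=torch.float16, device="cuda")
workspace = torch.empty(graph.get_workspace_size(), dtype=torch.uint8, device="cuda")

# The built graph can be executed repeatedly with fresh buffers.
graph.execute({A: a, B: b, C: c}, workspace)
```

Because the graph object persists after `build`, the expensive validation and plan selection happen once, and each subsequent `execute` only binds new device buffers.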
The easiest way to get started is via pip:
```bash
pip install nvidia_cudnn_frontend
```

Requirements:
- Python 3.8+
- NVIDIA driver and CUDA Toolkit
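After installing, a quick sanity check (a sketch; it assumes the wheel's Python module is importable as `cudnn` and that a compatible NVIDIA driver is present) is to print the backend version:

```python
# Verify the installation by querying the cuDNN backend version.
import cudnn

print(cudnn.backend_version())
```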
Since the C++ API is header-only, integration is seamless. Simply include the header in your compilation unit:
```cpp
#include <cudnn_frontend.h>
```

Ensure your include path points to the `include/` directory of this repository.
If you want to build the Python bindings from source or run the C++ samples:
1. Dependencies
   - `python-dev` (e.g., `apt-get install python-dev`)
   - Dependencies listed in `requirements.txt` (`pip install -r requirements.txt`)
2. Python Source Build
```bash
pip install -v git+https://github.com/NVIDIA/cudnn-frontend.git
```

The environment variables `CUDAToolkit_ROOT` and `CUDNN_PATH` can be used to override default paths.
3. C++ Samples Build
```bash
mkdir build && cd build
cmake -DCUDNN_PATH=/path/to/cudnn -DCUDAToolkit_ROOT=/path/to/cuda ../
cmake --build . -j16
./bin/samples
```

- Developer Guide: Official NVIDIA Documentation
- C++ Samples: See `samples/cpp` for comprehensive usage examples.
- Python Samples: See `samples/python` for Pythonic implementations.
We warmly welcome contributions! Whether you are fixing a bug, improving documentation, or optimizing one of our new OSS kernels, your help makes cuDNN better for everyone.
- Check the Contribution Guide for details.
- Fork the repo and create your branch.
- Submit a Pull Request.
To view the execution flow and debug issues, you can enable logging via environment variables:
```bash
# Log to stdout
export CUDNN_FRONTEND_LOG_INFO=1
export CUDNN_FRONTEND_LOG_FILE=stdout

# Log to a file
export CUDNN_FRONTEND_LOG_INFO=1
export CUDNN_FRONTEND_LOG_FILE=execution_log.txt
```

Logging Levels:
- `CUDNN_FRONTEND_LOG_INFO=0`: No logging
- `CUDNN_FRONTEND_LOG_INFO=1`: Full logging with tensor dumps
- `CUDNN_FRONTEND_LOG_INFO=10`: Basic logging (safe for CUDA graph capture)
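Because these are ordinary environment variables, they can also be set from Python before the library loads (a sketch; it assumes the settings are read when `cudnn` initializes, so they must be set before the first import):

```python
import os

# Configure frontend logging before the library initializes.
os.environ["CUDNN_FRONTEND_LOG_INFO"] = "1"
os.environ["CUDNN_FRONTEND_LOG_FILE"] = "execution_log.txt"

import cudnn  # the log settings take effect as the library loads
```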
Alternatively, you can control logging programmatically via `cudnn_frontend::isLoggingEnabled()`.
This project is licensed under the MIT License.

