CUDA kernels

This repository is an ongoing archive of CUDA kernels I've been implementing.

The kernels live in the `include/kernel/` folder, and the benchmarks in the `src/` folder. To build, run `make`, then run the resulting executables from the `build/` directory.

Kernels in this repo:

  1. The `include/gemm/` folder contains naive, blocktiled, and threadtiled implementations of GEneral Matrix Multiply (GEMM), in both FP32 and FP16 formats (SGEMM and HGEMM, respectively).
  2. The `include/attn/` folder contains kernels for the transpose operation, the softmax operation, and the flash attention forward pass, in both FP16 and FP32 formats.
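As a rough illustration of the naive GEMM variant, a minimal SGEMM kernel assigns one thread per output element. This is a sketch under assumed conventions (row-major layout, the name `sgemm_naive`), not the repository's actual code:

```cuda
// Naive SGEMM sketch: C = alpha * A @ B + beta * C, row-major.
// One thread computes one element of C; no tiling or shared memory.
// Illustrative only -- not this repository's implementation.
__global__ void sgemm_naive(int M, int N, int K, float alpha,
                            const float *A, const float *B,
                            float beta, float *C) {
    const int row = blockIdx.y * blockDim.y + threadIdx.y;
    const int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = alpha * acc + beta * C[row * N + col];
    }
}
```

The blocktiled and threadtiled variants improve on this baseline by staging tiles of A and B in shared memory and by computing several output elements per thread, respectively.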

Next steps:

  1. Add benchmarks for the attention kernels.
  2. Improve the SGEMM benchmarks, and add HGEMM benchmarks.
  3. Further optimizations: double buffering, vectorized loads, tensor cores, etc.
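Of the optimizations listed above, vectorized loads are the simplest to sketch: reading global memory through `float4` issues one 128-bit transaction per thread instead of four 32-bit ones. A generic illustration (again, not code from this repo):

```cuda
// Vectorized copy sketch: each thread moves four consecutive floats at once.
// Assumes n is a multiple of 4 and both pointers are 16-byte aligned,
// as required for float4 accesses.
__global__ void copy_vec4(const float *in, float *out, int n) {
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (4 * i < n) {
        const float4 v = reinterpret_cast<const float4 *>(in)[i];
        reinterpret_cast<float4 *>(out)[i] = v;
    }
}
```

In a GEMM kernel, the same trick applies to the global-to-shared-memory loads of the A and B tiles.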

My main reference so far has been this incredibly useful tutorial.
