[HIP] Add offload PGO tiled matmul E2E test #366

Open
yxsamliu wants to merge 1 commit into llvm:main from yxsamliu:amd/dev/yaxunl/pgo-tiled-matmul-test

Conversation

@yxsamliu
Contributor

[HIP] Add offload PGO tiled matmul E2E test

Add a tiled matrix multiply kernel that demonstrates the offload PGO
workflow on AMDGPU. The kernel uses a large per-thread sub-tile
(configurable via -DTH_M and -DTH_N) with LDS-based cooperative tile
loading, creating natural register pressure that exceeds the VGPR
budget and causes spills. Boundary tile handling creates biased
branches that PGO can optimize by guiding the register allocator to
reduce spills on the hot path.

Sub-tile sizes are tunable per architecture to induce spills on GPUs
with different register file sizes.

Two tests are registered:

  • pgo-tiled-matmul: correctness test (compile + run + verify)
  • pgo-tiled-matmul-pipeline: full PGO pipeline test
    (baseline -> instrument -> collect -> merge -> PGO build -> compare)

The pipeline test verifies that the full -fprofile-generate /
-fprofile-use workflow completes successfully and reports the
performance difference for information.

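The pipeline stages listed above (baseline -> instrument -> collect -> merge -> PGO build -> compare) could be driven by hand with commands along these lines. This is an illustrative sketch only: the hipcc invocations, source and output file names, and the default.profraw profile name are assumptions, not taken from the patch, and running it requires a ROCm toolchain with hipcc and llvm-profdata on PATH.

```shell
hipcc -O3 tiled_matmul.hip -o matmul_baseline            # baseline build
hipcc -O3 -fprofile-generate tiled_matmul.hip -o matmul_instr
./matmul_instr                                           # collect: writes default.profraw
llvm-profdata merge -o matmul.profdata default.profraw   # merge raw profiles
hipcc -O3 -fprofile-use=matmul.profdata tiled_matmul.hip -o matmul_pgo
./matmul_baseline && ./matmul_pgo                        # compare timings
```

The -fprofile-generate / -fprofile-use pair is the standard clang PGO flow; the test automates these steps and reports the timing delta for information rather than asserting a speed-up.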
@yxsamliu yxsamliu requested review from jmmartinez and jplehr March 10, 2026 14:03
size_t elems_B = (size_t)groups * K * N;
size_t elems_C = (size_t)groups * M * N;

float *h_A = (float*)malloc(elems_A * sizeof(float));

Nitpick: does it make sense to restrict ourselves to C APIs instead of using std::vector and letting it handle the memory on its own?


CLANG="@CMAKE_CXX_COMPILER@"
CLANG_DIR=$(dirname "$CLANG")
LLVM_PROFDATA="$CLANG_DIR/llvm-profdata"

I don't think it changes much, but we could use clang --print-prog-name=<prog>:

Suggested change:
- LLVM_PROFDATA="$CLANG_DIR/llvm-profdata"
+ LLVM_PROFDATA="$("$CLANG" --print-prog-name=llvm-profdata)"


@jmmartinez jmmartinez left a comment


Looks good to me, but I'll let JP have the final word.


@jplehr jplehr left a comment


LG
Just comments about the GPU arch of the bots.

Reporting the actual PGO speed-up just for informational purposes is good.

// Sub-tile dimensions are configurable via -D flags to tune register
// pressure per GPU architecture:
// gfx1100 (256 VGPRs): default TH_M=16, TH_N=16 → 256 accumulators → spills
// gfx950 (512 VGPRs): -DTH_M=20 -DTH_N=16 → 320 accumulators → spills

Given that the bot executes on gfx90a should we have a config here for that arch?

BENCH_RUNS=5

# Detect GPU architecture and set sub-tile size to induce register spills.
# gfx950 has 512 VGPRs (vs 256 on gfx1100), so needs larger sub-tiles.

Bots run on gfx90a, does this need adjustment?
