Commit 3f337fa

xuzhao9 authored and facebook-github-bot committed
Add pr test to run operators on h100 (#25)
Summary: Add PR test to run operator tests on H100 runner.

Pull Request resolved: #25
Reviewed By: FindHao
Differential Revision: D65139590
Pulled By: xuzhao9
fbshipit-source-id: 600d438ea2979f5ee50538932a0f9e7f39acb9b2
1 parent e253f19 commit 3f337fa

File tree

5 files changed: +35, -4 lines

.github/workflows/docker.yaml (+1)

@@ -21,6 +21,7 @@ jobs:
   build-push-docker:
     if: ${{ github.repository_owner == 'pytorch-labs' }}
     runs-on: 32-core-ubuntu
+    environment: docker-s3-upload
     steps:
       - name: Checkout
         uses: actions/checkout@v3

.github/workflows/pr.yaml (+32)

@@ -0,0 +1,32 @@
+name: TritonBench PR Test
+on:
+  pull_request:
+    paths:
+      - .ci/*
+      - tritonbench/*
+      - .github/workflows/pr.yaml
+
+jobs:
+  h100-pytorch-test:
+    # Don't run on forked repos
+    if: github.repository_owner == 'pytorch-labs'
+    runs-on: [gcp-h100-runner]
+    timeout-minutes: 240
+    environment: docker-s3-upload
+    env:
+      CONDA_ENV: "pytorch"
+      SETUP_SCRIPT: "/workspace/setup_instance.sh"
+    steps:
+      - name: Checkout Tritonbench
+        uses: actions/checkout@v3
+        with:
+          # no need to checkout submodules recursively
+          submodules: true
+      - name: Tune Nvidia GPU
+        run: |
+          sudo nvidia-smi -pm 1
+          sudo ldconfig
+          nvidia-smi
+      - name: Test Tritonbench operators
+        run: |
+          bash ./.ci/tritonbench/test-operators.sh

tritonbench/operators/ragged_attention/hstu.py (+1, -1)

@@ -4,7 +4,7 @@
 
 try:
     # Internal Import
-    from hammer.generative_recommenders.ops.triton.triton_ragged_hstu_attention import (
+    from hammer.oss.generative_recommenders.ops.triton.triton_ragged_hstu_attention import (
        _ragged_hstu_attn_fwd,
        _ragged_hstu_attn_fwd_persistent,
    )
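
The changed import sits inside a try block ("# Internal Import"); the matching except branch is outside this hunk. For context, a minimal sketch of the usual fallback pattern, assuming an OSS import path is tried when the hammer package is unavailable; the fallback module path below is illustrative, not taken from this commit:

# Sketch only: the except branch and its import path are assumptions, not part of this diff.
try:
    from hammer.oss.generative_recommenders.ops.triton.triton_ragged_hstu_attention import (
        _ragged_hstu_attn_fwd,
        _ragged_hstu_attn_fwd_persistent,
    )
except ModuleNotFoundError:
    # Hypothetical OSS fallback import
    from generative_recommenders.ops.triton.triton_ragged_hstu_attention import (
        _ragged_hstu_attn_fwd,
        _ragged_hstu_attn_fwd_persistent,
    )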

tritonbench/operators/sum/operator.py (-2)

@@ -4,8 +4,6 @@
 import os
 from typing import Callable, Generator, List, Optional, Tuple
 
-import matplotlib.pyplot as plt
-
 import torch
 import triton
 import triton.language as tl

tritonbench/utils/path_utils.py (+1, -1)

@@ -3,7 +3,7 @@
 
 from pathlib import Path
 
-REPO_PATH = Path(os.path.abspath(__file__)).parent.parent
+REPO_PATH = Path(os.path.abspath(__file__)).parent.parent.parent
 SUBMODULE_PATH = REPO_PATH.joinpath("submodules")
 
 
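
The extra .parent reflects where this file lives: tritonbench/utils/path_utils.py is two directories below the repository root, so three .parent steps from the absolute file path are needed to reach the root. A minimal sketch of the resolution (the /home/user/tritonbench checkout location is hypothetical):

from pathlib import Path

# Hypothetical checkout at /home/user/tritonbench; the file then resolves to
# /home/user/tritonbench/tritonbench/utils/path_utils.py.
p = Path("/home/user/tritonbench/tritonbench/utils/path_utils.py")

print(p.parent)                # .../tritonbench/tritonbench/utils
print(p.parent.parent)         # .../tritonbench/tritonbench   (old REPO_PATH, one level too deep)
print(p.parent.parent.parent)  # .../tritonbench               (new REPO_PATH, the repo root)
print(p.parent.parent.parent.joinpath("submodules"))  # where SUBMODULE_PATH now points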
