
Commit 48c404a

Add examples of calling FlashInfer from JAX via jax-tvm-ffi
- minor update
- Fix lint issues and update gemma generation
- Address Gemini comments
- Restructure to ensure subprocess is loaded after the potential kernel restart
- Fix stale docstring
- Cast rope_theta to float
- Fix CUDA version
- Fix eos_token_id
- Reorder dependencies
- Check the first sampled token before decoding more
- Fix gemma summary
- minor fix

6 files changed

Lines changed: 3552 additions & 0 deletions


examples/README.md

Lines changed: 31 additions & 0 deletions
# FlashInfer Examples

This directory contains standalone examples demonstrating how to use FlashInfer in different settings and with various frameworks.

The goal of these examples is to provide minimal, runnable code that illustrates key integration patterns, performance considerations, and advanced usage scenarios.

## Available Examples

### JAX + TVM FFI Integration (`jax_tvm_ffi/`)

This example demonstrates how to integrate FlashInfer with JAX via a custom TVM-based Foreign Function Interface (FFI).

It covers:

* Calling FlashInfer CUDA kernels from JAX
* Using TVM to bridge Python and low-level kernels
* Building a minimal end-to-end pipeline for experimentation

This example is intended for:

* Users interested in extending FlashInfer beyond PyTorch
* Researchers experimenting with JAX-based workflows
* Developers exploring custom kernel integration via TVM

See [`jax_tvm_ffi/README.md`](./jax_tvm_ffi/README.md) for detailed instructions and usage.

## Notes

* Examples are self-contained and may have additional dependencies.
* They are not part of the core library API and may evolve independently.
* Contributions of new examples are welcome.

examples/jax_tvm_ffi/README.md

Lines changed: 125 additions & 0 deletions
# FlashInfer on JAX: Notebooks and Scripts

Two tutorials that show how to use FlashInfer GPU kernels from JAX via the [jax-tvm-ffi](https://github.com/NVIDIA/jax-tvm-ffi) bridge.

| File | What it covers |
|------|---------------|
| `flashinfer_jax_tvm_ffi.ipynb` / `.py` | The three-step bridge pattern (build & load, register, call) with three kernels: `silu_and_mul`, `apply_rope`, single-request decode attention |
| `gemma3_flashinfer_jax.ipynb` / `.py` | End-to-end Gemma 3 1B Instruct inference using FlashInfer kernels for prefill and decode |

Each tutorial is available as both a Jupyter notebook (with explanations) and a standalone Python script (for quick reading and running).

## Requirements

| Requirement | Details |
|-------------|---------|
| GPU | NVIDIA SM 7.5+ (Turing or later) |
| CUDA | 12.6+ |
| Python | 3.10+ |
| Container (recommended) | [NVIDIA NGC JAX container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/jax) |
## Installation

Recommended (CUDA 13):

```bash
# Core dependencies (both tutorials)
pip install 'jax[cuda13]'
pip install flashinfer-python -U jax-tvm-ffi==0.1.2 \
    --extra-index-url https://flashinfer.ai/whl/cu130/

# Additional dependencies (Gemma 3 tutorial only)
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install safetensors huggingface_hub transformers
```

Replace `jax[cuda13]` with `jax[cuda12]` for CUDA 12.x.

Replace `cu130` with the appropriate variant for your [CUDA Toolkit version](https://developer.nvidia.com/cuda-toolkit-archive) (e.g., `cu126` for CUDA 12.6).
## Running

### Part 1: FlashInfer JAX TVM FFI bridge

As a notebook:

```bash
jupyter lab flashinfer_jax_tvm_ffi.ipynb
```

As a script:

```bash
python flashinfer_jax_tvm_ffi.py
```

The first run compiles three FlashInfer kernels (~30 s each). Subsequent runs use the cached `.so` files in `~/.cache/flashinfer/`.
### Part 2: Gemma 3 inference

Gemma 3 is a gated model. You must first:

1. Create a [Hugging Face](https://huggingface.co) account
2. Accept the Gemma 3 licence at [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it)
3. Authenticate using **one** of the methods below:

```bash
# Option A: environment variable (good for containers and CI)
export HF_TOKEN=hf_...

# Option B: persistent login (stores the token in ~/.cache/huggingface/token)
pip install huggingface_hub
huggingface-cli login
```

Then run:

```bash
# As a notebook
jupyter lab gemma3_flashinfer_jax.ipynb

# As a script
python gemma3_flashinfer_jax.py
```
If neither method is detected, the script will prompt you to paste your token interactively.
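
If you prefer to authenticate from Python (for example, inside a notebook cell), `huggingface_hub` also provides a `login()` helper. A minimal sketch; the token handling shown here is illustrative:

```python
# Optional Python-side authentication (e.g., from a notebook cell).
# huggingface_hub.login() prompts for a token if none is supplied.
import os
from huggingface_hub import login

login(token=os.environ.get("HF_TOKEN"))  # falls back to an interactive prompt when HF_TOKEN is unset
```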
The first run downloads ~2 GB of model weights and compiles six FlashInfer kernels (gelu_tanh, rope, local/global decode, local/global prefill). Both are cached after the first run.
## What you'll learn

**Part 1** teaches the three-step pattern that every FlashInfer kernel follows:

```
Step 1  BUILD & LOAD   jit_spec.build_and_load() -> tvm_ffi.Module
Step 2  REGISTER       jax_tvm_ffi.register_ffi_target(name, wrapper, arg_spec)
Step 3  CALL           jax.ffi.ffi_call(name, output_shapes)(*inputs, **scalar_attrs)
```
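
In code, the pattern looks roughly like the sketch below for `silu_and_mul`. The FlashInfer JIT-spec helper (`gen_act_and_mul_module`) and the `arg_spec` value are illustrative assumptions; the notebook shows the exact spellings used.

```python
import jax
import jax.numpy as jnp
import jax_tvm_ffi
import flashinfer.jit

# Step 1: BUILD & LOAD -- JIT-compile the CUDA kernel into a tvm_ffi.Module.
# gen_act_and_mul_module("silu") is an assumed spec helper; see the notebook for the real one.
module = flashinfer.jit.gen_act_and_mul_module("silu").build_and_load()

# Step 2: REGISTER -- expose the loaded kernel to XLA under a named FFI target.
# The arg_spec below (inputs first, then outputs) is illustrative only.
jax_tvm_ffi.register_ffi_target("flashinfer_silu_and_mul", module.silu_and_mul,
                                ["args", "rets"])

# Step 3: CALL -- invoke it from JAX like any other primitive.
x = jnp.ones((8, 2 * 4096), dtype=jnp.bfloat16)         # [num_tokens, 2 * hidden_size]
out = jax.ShapeDtypeStruct((8, 4096), x.dtype)          # [num_tokens, hidden_size]
y = jax.ffi.ffi_call("flashinfer_silu_and_mul", out)(x)
```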
Each example adds a new concept:

| Kernel | New concept |
|--------|------------|
| `silu_and_mul` | Minimal bridge: one input, one output, no argument reordering |
| `apply_rope` | Multiple outputs; argument reordering between JAX and TVM conventions |
| `single_decode` | Type-specialized JIT compilation; scratch buffers; optional-argument sentinels |

**Part 2** applies the same pattern to run Gemma 3 1B Instruct end-to-end, adding:

- `gelu_tanh_and_mul` (one-word change from `silu`)
- QK-norm (per-head RMSNorm on Q and K, new in Gemma 3; see the sketch after this list)
- Dual RoPE theta (local layers use 10k, global layers use 1M)
- Local vs global attention with sliding window
- Prefill (parallel prompt processing) and decode (autoregressive generation)
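
As an illustration of the QK-norm step, here is a hedged JAX sketch of a per-head RMSNorm applied to the query and key projections. The variable names and epsilon are illustrative rather than the tutorial's exact code (Gemma-style RMSNorm may also scale by `1 + weight`):

```python
import jax.numpy as jnp

def rms_norm(x, weight, eps=1e-6):
    """Per-head RMSNorm over the last (head_dim) axis, computed in float32 for stability."""
    x32 = x.astype(jnp.float32)
    inv_rms = 1.0 / jnp.sqrt(jnp.mean(jnp.square(x32), axis=-1, keepdims=True) + eps)
    return (x32 * inv_rms * weight).astype(x.dtype)

# q, k: [num_tokens, num_heads, head_dim]; q_norm_w, k_norm_w: [head_dim] learned scales
# q = rms_norm(q, q_norm_w)
# k = rms_norm(k, k_norm_w)
```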
## Troubleshooting
**`CUDA_HOME not found`** — Set it manually: `export CUDA_HOME=/usr/local/cuda`
**`FLASHINFER_CUDA_ARCH_LIST not set`** — The scripts auto-detect your GPU's compute capability. To override: `export FLASHINFER_CUDA_ARCH_LIST=8.6`
**Compilation errors** — Delete the cache and retry: `rm -rf ~/.cache/flashinfer/`
**HF token errors** — Verify your token works: `huggingface-cli whoami`
**GPU interconnect warnings** — Harmless NVML messages on systems without NVLink. Suppressed by `TF_CPP_MIN_LOG_LEVEL=2` (set automatically in the scripts).
