Commit 79bd988

Ankur-singh, hshrivastava-droid, and cquil11 authored
Minimaxm2.5 nvfp4 b200 (#996)
* Add MiniMax-M2.5 NVFP4 vLLM benchmark config for B200

* Use vLLM nightly image and update benchmark script for MiniMax-M2.5 NVFP4 B200

* Update MiniMax FP4 B200: use v0.19.0-cu130 image, sync perf-changelog

* Optimize search space and update perf-changelog for minimaxm2.5-fp4-b200-vllm

  Expand search space from tp2/tp4 to tp1/tp2/tp4/tp8 with expert parallel and dp-attn variants. Sync perf-changelog from origin/main and append entry.

* Update minimaxm2.5 fp4 b200 benchmark script with dp-attn support

  Add DP_ATTENTION env var and refactor parallelism into PARALLEL_ARGS to support data-parallel attention mode. Also update server flags: gpu-memory-utilization 0.90, add max-cudagraph-capture-size and max-num-batched-tokens, remove block-size and FLASHINFER env export.

* Add tp2/tp4 non-EP search-space entries for minimaxm2.5-fp4-b200-vllm

  Widen concurrency sweep with standalone tp2 and tp4 entries (conc 4-512) alongside existing EP variants for both 1k1k and 8k1k seq-len configs.

* Update perf-changelog.yaml with new line

---------

Co-authored-by: hshrivastava-droid <hshrivastava@nvidia.com>
Co-authored-by: Cameron Quilici <cjquilici@gmail.com>
1 parent c7b1fe4 commit 79bd988

File tree

3 files changed: +117 −0 lines changed


.github/configs/nvidia-master.yaml

Lines changed: 29 additions & 0 deletions
@@ -3140,6 +3140,35 @@ minimaxm2.5-fp8-b200-vllm:
       - { tp: 2, conc-start: 4, conc-end: 512 }
       - { tp: 4, conc-start: 4, conc-end: 512 }
 
+minimaxm2.5-fp4-b200-vllm:
+  image: vllm/vllm-openai:v0.19.0-cu130
+  model: nvidia/MiniMax-M2.5-NVFP4
+  model-prefix: minimaxm2.5
+  runner: b200
+  precision: fp4
+  framework: vllm
+  multinode: false
+  seq-len-configs:
+    - isl: 1024
+      osl: 1024
+      search-space:
+        - { tp: 1, conc-start: 4, conc-end: 4 }
+        - { tp: 2, conc-start: 4, conc-end: 512 }
+        - { tp: 2, ep: 2, conc-start: 128, conc-end: 256 }
+        - { tp: 2, ep: 2, dp-attn: true, conc-start: 512, conc-end: 512 }
+        - { tp: 4, conc-start: 4, conc-end: 512 }
+        - { tp: 4, ep: 4, conc-start: 32, conc-end: 128 }
+        - { tp: 8, conc-start: 4, conc-end: 4 }
+    - isl: 8192
+      osl: 1024
+      search-space:
+        - { tp: 1, conc-start: 4, conc-end: 32 }
+        - { tp: 1, conc-start: 256, conc-end: 512 }
+        - { tp: 2, conc-start: 4, conc-end: 512 }
+        - { tp: 2, ep: 2, conc-start: 128, conc-end: 512 }
+        - { tp: 4, conc-start: 4, conc-end: 512 }
+        - { tp: 8, conc-start: 4, conc-end: 4 }
+
 gptoss-fp4-h100-vllm:
   image: vllm/vllm-openai:v0.18.0
   model: openai/gpt-oss-120b
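Each search-space entry above pairs a parallelism layout with a conc-start/conc-end concurrency range. As a rough sketch of how such a range could expand into individual sweep points (the doubling step and the `expand_conc` helper name are assumptions for illustration — the actual expansion logic lives in the benchmark harness, not in this commit):

```shell
#!/usr/bin/env bash
# Hypothetical helper: expand a conc-start..conc-end pair into a doubling
# concurrency sweep (4, 8, 16, ... up to conc-end). The doubling step is an
# assumption; the real sweep logic is defined by the harness, not shown here.
expand_conc() {
  local c=$1 end=$2
  while [ "$c" -le "$end" ]; do
    printf '%s\n' "$c"   # emit one concurrency level per line
    c=$((c * 2))
  done
}

expand_conc 4 512   # 4 8 16 32 64 128 256 512, one per line
```

Under this reading, an entry like `{ tp: 1, conc-start: 4, conc-end: 4 }` is a single-point run rather than a sweep.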
Lines changed: 80 additions & 0 deletions
@@ -0,0 +1,80 @@
+#!/usr/bin/env bash
+
+source "$(dirname "$0")/../benchmark_lib.sh"
+
+check_env_vars \
+  MODEL \
+  TP \
+  EP_SIZE \
+  DP_ATTENTION \
+  CONC \
+  ISL \
+  OSL \
+  MAX_MODEL_LEN \
+  RANDOM_RANGE_RATIO \
+  RESULT_FILENAME
+
+if [[ -n "$SLURM_JOB_ID" ]]; then
+  echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
+fi
+
+nvidia-smi
+
+hf download "$MODEL"
+
+SERVER_LOG=/workspace/server.log
+PORT=${PORT:-8888}
+
+if [ "${DP_ATTENTION}" = "true" ]; then
+  PARALLEL_ARGS="--tensor-parallel-size=1 --data-parallel-size=$TP --enable-expert-parallel"
+elif [ "$EP_SIZE" -gt 1 ]; then
+  PARALLEL_ARGS="--tensor-parallel-size=$TP --enable-expert-parallel"
+else
+  PARALLEL_ARGS="--tensor-parallel-size=$TP"
+fi
+
+if [ "${EVAL_ONLY}" = "true" ]; then
+  setup_eval_context
+  MAX_MODEL_LEN="$EVAL_MAX_MODEL_LEN"
+fi
+# Start GPU monitoring (power, temperature, clocks every second)
+start_gpu_monitor
+
+set -x
+vllm serve $MODEL --port $PORT \
+  $PARALLEL_ARGS \
+  --gpu-memory-utilization 0.90 \
+  --max-model-len $MAX_MODEL_LEN \
+  --kv-cache-dtype fp8 \
+  --max-cudagraph-capture-size 2048 \
+  --max-num-batched-tokens "$((ISL * 2 ))" \
+  --stream-interval 20 --no-enable-prefix-caching \
+  --trust-remote-code > $SERVER_LOG 2>&1 &
+
+SERVER_PID=$!
+
+# Wait for server to be ready
+wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"
+
+run_benchmark_serving \
+  --model "$MODEL" \
+  --port "$PORT" \
+  --backend vllm \
+  --input-len "$ISL" \
+  --output-len "$OSL" \
+  --random-range-ratio "$RANDOM_RANGE_RATIO" \
+  --num-prompts "$((CONC * 10))" \
+  --max-concurrency "$CONC" \
+  --result-filename "$RESULT_FILENAME" \
+  --result-dir /workspace/ \
+  --trust-remote-code
+
+# After throughput, run evaluation only if RUN_EVAL is true
+if [ "${RUN_EVAL}" = "true" ]; then
+  run_eval --framework lm-eval --port "$PORT"
+  append_lm_eval_summary
+fi
+
+# Stop GPU monitoring
+stop_gpu_monitor
+set +x
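The PARALLEL_ARGS branching is the heart of the dp-attn change: with DP_ATTENTION=true, the GPU count requested via TP is remapped from --tensor-parallel-size to --data-parallel-size. A minimal standalone sketch of that mapping (the `build_parallel_args` function name is hypothetical; the emitted vLLM flag strings mirror the branches in the script above):

```shell
#!/usr/bin/env bash
# Standalone sketch of the script's PARALLEL_ARGS selection logic.
# build_parallel_args is a hypothetical name for illustration only.
build_parallel_args() {
  local tp=$1 ep_size=$2 dp_attn=$3
  if [ "$dp_attn" = "true" ]; then
    # dp-attn mode: attention runs data-parallel, so tensor parallelism
    # collapses to 1 and the requested GPU count becomes the DP size
    echo "--tensor-parallel-size=1 --data-parallel-size=$tp --enable-expert-parallel"
  elif [ "$ep_size" -gt 1 ]; then
    # expert parallelism without dp-attn keeps tensor parallelism as-is
    echo "--tensor-parallel-size=$tp --enable-expert-parallel"
  else
    echo "--tensor-parallel-size=$tp"
  fi
}

build_parallel_args 2 2 true   # --tensor-parallel-size=1 --data-parallel-size=2 --enable-expert-parallel
```

This matches the config's `{ tp: 2, ep: 2, dp-attn: true }` search-space entry: the same two GPUs, but partitioned along the data-parallel axis for attention instead of the tensor-parallel axis.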

perf-changelog.yaml

Lines changed: 8 additions & 0 deletions
@@ -1297,3 +1297,11 @@
   description:
     - "Update MiniMax-M2.5 FP8 B200 config with new search spaces"
   pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1010
+
+- config-keys:
+    - minimaxm2.5-fp4-b200-vllm
+  description:
+    - "Optimize MiniMax-M2.5 NVFP4 B200 vLLM search-space"
+    - "Expand from tp2/tp4 to tp1/tp2/tp4/tp8 with expert parallel and dp-attn variants"
+    - "Add ep2, ep4, and dp-attn configurations for higher concurrency sweeps"
+  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/996
