Commit c6ea86d

Merge pull request #31 from GATEOverflow/mlperf-inference-results-scc24

Fix conflicts

2 parents 62a1234 + 8a43462

35 files changed (+1095 -631 lines)

README.md (+24 -23)
| Model               | Scenario | Accuracy              | Throughput | Latency (in ms) |
|---------------------|----------|-----------------------|------------|-----------------|
| stable-diffusion-xl | offline  | (15.18544, 235.69504) | 0.375      | -               |
This experiment is generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).

*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.2.0-39-generic-x86_64-with-glibc2.35
* CPU version: x86_64
* Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
* MLCommons CM version: 3.0.1
## CM Run Command

See [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U cmind

cm rm cache -f

cm pull repo gateoverflow@cm4mlops --checkout=d9fa259a9a0ee541d34b4a7f2beafd95a1381c0e

cm run script \
    --tags=app,mlperf,inference,generic,_reference,_sdxl,_pytorch,_cuda,_test,_r4.1-dev_default,_float16,_offline \
    --quiet=true \
    --env.CM_MLPERF_MODEL_SDXL_DOWNLOAD_TO_HOST=yes \
    --env.CM_QUIET=yes \
    --env.CM_MLPERF_IMPLEMENTATION=reference \
    --env.CM_MLPERF_MODEL=sdxl \
    --env.CM_MLPERF_RUN_STYLE=test \
    --env.CM_MLPERF_BACKEND=pytorch \
    --env.CM_MLPERF_SUBMISSION_SYSTEM_TYPE=datacenter \
    --env.CM_MLPERF_CLEAN_ALL=True \
    --env.CM_MLPERF_DEVICE=cuda \
    --env.CM_MLPERF_USE_DOCKER=True \
    --env.CM_MLPERF_MODEL_PRECISION=float16 \
    --env.OUTPUT_BASE_DIR=/home/arjun/scc_gh_action_results \
    --env.CM_MLPERF_LOADGEN_SCENARIO=Offline \
    --env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/scc_gh_action_submissions \
    --env.CM_MLPERF_INFERENCE_VERSION=4.1-dev \
    --env.CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS=r4.1-dev_default \
    --env.CM_MLPERF_SUBMISSION_GENERATION_STYLE=short \
    --env.CM_MLPERF_SUT_NAME_RUN_CONFIG_SUFFIX4=scc24-base \
    --env.CM_DOCKER_IMAGE_NAME=scc24-reference \
    --env.CM_MLPERF_LOADGEN_ALL_MODES=yes \
    --env.CM_MLPERF_LAST_RELEASE=v4.0 \
    --env.CM_TMP_CURRENT_PATH=/home/arjun/actions-runner/_work/cm4mlops/cm4mlops \
    --env.CM_TMP_PIP_VERSION_STRING= \
    --env.CM_MODEL=sdxl \
    --env.CM_MLPERF_LOADGEN_COMPLIANCE=no \
    --env.CM_MLPERF_CLEAN_SUBMISSION_DIR=yes \
    --env.CM_RERUN=yes \
    --env.CM_MLPERF_LOADGEN_EXTRA_OPTIONS= \
    --env.CM_MLPERF_LOADGEN_MODE=performance \
    --env.CM_MLPERF_LOADGEN_SCENARIOS,=Offline \
    --env.CM_MLPERF_LOADGEN_MODES,=performance,accuracy \
    --env.CM_OUTPUT_FOLDER_NAME=test_results \
    --add_deps_recursive.get-mlperf-inference-results-dir.tags=_version.r4_1-dev \
    --add_deps_recursive.get-mlperf-inference-submission-dir.tags=_version.r4_1-dev \
    --add_deps_recursive.mlperf-inference-nvidia-scratch-space.tags=_version.r4_1-dev \
    --add_deps_recursive.submission-checker.tags=_short-run \
    --add_deps_recursive.coco2014-preprocessed.tags=_size.50,_with-sample-ids \
    --add_deps_recursive.coco2014-dataset.tags=_size.50,_with-sample-ids \
    --add_deps_recursive.nvidia-preprocess-data.extra_cache_tags=scc24-base \
    --v=False \
    --print_env=False \
    --print_deps=False \
    --dump_version_info=True \
    --env.OUTPUT_BASE_DIR=/home/arjun/scc_gh_action_results \
    --env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/scc_gh_action_submissions \
    --env.SDXL_CHECKPOINT_PATH=/home/cmuser/CM/repos/local/cache/6be1f30ecbde4c4e/stable_diffusion_fp16
```
*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts),
you should simply re-pull gateoverflow@cm4mlops without the checkout pin and clean the CM cache as follows:*

```bash
cm rm repo gateoverflow@cm4mlops
cm pull repo gateoverflow@cm4mlops
cm rm cache -f
```
## Results

Platform: a51568200dc1-reference-gpu-pytorch_v2.4.1-scc24-base_cu124

Model Precision: fp32

### Accuracy Results

`CLIP_SCORE`: `15.18544`, Required accuracy for closed division `>= 31.68632` and `<= 31.81332`

`FID_SCORE`: `235.69504`, Required accuracy for closed division `>= 23.01086` and `<= 23.95008`

### Performance Results

`Samples per second`: `0.375287`
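The closed-division accuracy check quoted above is a simple range test. A minimal sketch (a hypothetical helper, not part of the CM tooling; the measured values and bounds are copied from this report):

```python
def in_closed_division_range(value, lower, upper):
    """Return True if a measured metric lies within the required accuracy window."""
    return lower <= value <= upper

# Measured values and required closed-division ranges from the results above.
clip_ok = in_closed_division_range(15.18544, 31.68632, 31.81332)
fid_ok = in_closed_division_range(235.69504, 23.01086, 23.95008)

print(clip_ok, fid_ok)  # → False False
```

Both checks fail here, which is expected: this is a short `_test`-style run over 50 samples, not a valid closed-division submission.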
```json
{
  "starting_weights_filename": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0",
  "retraining": "no",
  "input_data_types": "fp32",
  "weight_data_types": "fp32",
  "weight_transformations": "no"
}
```
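For illustration, a model description like the one above can be parsed and sanity-checked with the standard library. This is only a sketch of reading the metadata; MLPerf's own submission checker performs the real validation:

```python
import json

# The model description shown above, embedded verbatim for this sketch.
raw = """{
  "starting_weights_filename": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0",
  "retraining": "no",
  "input_data_types": "fp32",
  "weight_data_types": "fp32",
  "weight_transformations": "no"
}"""

meta = json.loads(raw)

# Basic sanity checks: the reference weights are used as-is, with no retraining.
assert meta["retraining"] == "no"
assert meta["weight_transformations"] == "no"
print(meta["starting_weights_filename"])
```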

open/MLCommons/measurements/a51568200dc1-reference-gpu-pytorch_v2.4.1-scc24-base_cu124/stable-diffusion-xl/offline/accuracy_console.out

Whitespace-only changes.
