## Quick Start

### Create a Conda environment

Follow [this link](https://www.anaconda.com/docs/getting-started/miniconda/install#quickstart-install-instructions) to install Miniconda. Then create the environment via:

```bash
conda create -n mlperf-inf-mm-vl2l python=3.12
```
### (Optional) Update `libstdc++` in the Conda environment

```bash
conda install -c conda-forge libstdcxx-ng
```

This is only needed when your local environment doesn't have a recent enough `libstdc++`.
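A missing or outdated `libstdc++` typically surfaces as an `ImportError` complaining about an unavailable `GLIBCXX_3.4.x` symbol version. As a rough diagnostic (a sketch, not part of the benchmark itself), you can ask the dynamic linker which `libstdc++` it resolves:

```python
import ctypes.util

# Ask the platform's dynamic linker which libstdc++ it would load.
# Returns None when the library is not on the default search path.
libname = ctypes.util.find_library("stdc++")
if libname is None:
    print("libstdc++ not found on the default search path")
else:
    print("dynamic linker resolves libstdc++ to:", libname)
```

If the resolved library predates the one conda-forge provides, the `conda install` step above is worth running.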

### Install the VL2L benchmarking CLI

For users, install `mlperf-inf-mm-vl2l` with:

```bash
pip install git+https://github.com/mlcommons/inference.git#subdirectory=multimodal/vl2l
```

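The `#subdirectory=` fragment tells pip to clone the repo but build only the project under `multimodal/vl2l`. To confirm the install succeeded, you can query the installed distribution with Python's standard `importlib.metadata` (assuming the distribution name matches the CLI name `mlperf-inf-mm-vl2l`):

```python
from importlib import metadata

# Look up the installed distribution by name; metadata.version raises
# PackageNotFoundError when the distribution is absent.
try:
    print("installed version:", metadata.version("mlperf-inf-mm-vl2l"))
except metadata.PackageNotFoundError:
    print("mlperf-inf-mm-vl2l is not installed")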
For developers, install `mlperf-inf-mm-vl2l` and the development tools with:

1. Clone the MLPerf Inference repo:

   ```bash
   git clone --recurse-submodules https://github.com/mlcommons/inference.git mlperf-inference
   ```

2. Install in editable mode with the development tools:
   - Bash: `pip install -e mlperf-inference/multimodal/vl2l/[dev]`
   - Zsh: `pip install -e mlperf-inference/multimodal/vl2l/"[dev]"`

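The two shell variants differ only in quoting: Zsh aborts on an unmatched bracket glob with "no matches found", while Bash passes the pattern through to `pip` unchanged. Bash's behavior can be seen with a plain `echo`:

```shell
# With no matching file on disk, Bash leaves the glob pattern as-is,
# so pip receives the extras marker [dev] literally.
echo multimodal/vl2l/[dev]
```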
After installation, you can check the CLI flags that `mlperf-inf-mm-vl2l` can take with:

```bash