
Commit ba57d45

Updated the documentation of ianvs by replacing pcb-aoi with the cloud-edge-collaborative-inference-for-llm example
Updates the documentation of ianvs by replacing pcb-aoi with the cloud-edge-collaborative-inference-for-llm example:

- Updated the ianvs Quick Start guide, replacing the PCB-AOI related content with cloud-edge-collaborative-inference-for-LLM.
- Updated how-to-use-ianvs-command-line to use cloud-edge-collaborative-inference-for-llm as an example instead of pcb-aoi.
- Added the Cloud-Edge-Collaborative-Inference-For-LLM scenario to the Scenarios section of the docs, with details of the MMLU-5-Shot dataset.
- Added the Joint Inference: Query-Routing algorithm to the Algorithms section of the ianvs documentation.
- Updated the benchmarking.yml file in "How to build a simulation env" to use cloud-edge collaborative inference for LLM instead of pcb-aoi.
- Updated the testenv in how-to-contribute-test-environments.md to use cloud-edge-collaborative-inference-for-llm as an example instead of pcb-aoi.
- Updated how-to-contribute-algorithms to use cloud-edge-collaborative-inference-for-llm as an example as well.
- Removed the images folder from docs/proposals/scenarios/cloud-edge-collaborative-inference.
- Updated how-to-test-algorithms to include the cloud-edge-collaborative-inference-for-llm example.
- Added the leaderboard of the cloud-edge-collaborative-inference-for-llm scenario.
- Added "Testing Joint Inference Learning in Cloud-Edge Collaborative Inference for LLM Scenario with the Ianvs-MMLU-5-shot dataset".
- Updated the user_interfaces guides to use cloud-edge-collaborative-inference-for-llm as an example instead of pcb-aoi.
- Updated index.rst to restructure leaderboards as per test-reports.
- Added the cloud-edge-collaborative-inference-for-llm design image.

Signed-off-by: Aryan <nandaaryan823@gmail.com>
1 parent 91fd1a0 commit ba57d45

18 files changed

Lines changed: 1428 additions & 436 deletions

docs/guides/how-to-build-simulation-env.md

Lines changed: 6 additions & 3 deletions
```diff
@@ -4,7 +4,7 @@ This document introduces how to build a edge-cloud AI simulation environment(e.g
 
 ## Introduction to `simulation controller`
 
-the `simulation controller` is the core module of system simulation. The simulation controller has been supplemented, which build and deploy local edge-cloud simulation environment with K8s.
+The `simulation controller` is the core module of system simulation. The simulation controller has been supplemented, which build and deploy local edge-cloud simulation environment with K8s.
 
 ![](https://github.com/kubeedge/ianvs/blob/main/docs/proposals/simulation/images/simulation_controller.jpg?raw=true)
@@ -36,16 +36,19 @@ Typically, the config file `benchmarkingJob.yaml` is as follows, which represent
 benchmarkingjob:
   # job name of benchmarking; string type;
   name: "benchmarkingjob"
+
   # the url address of job workspace that will reserve the output of tests; string type;
   # default value: "./workspace"
-  workspace: "./workspace/incremental_learning_bench"
+  workspace: "./workspace-mmlu"
 
   # the url address of test environment configuration file; string type;
   # the file format supports yaml/yml;
-  testenv: "./examples/pcb-aoi/incremental_learning_bench/fault_detection/testenv/testenv.yaml"
+  testenv: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/testenv.yaml"
+
   # the configuration of test object
   test_object:
     ...
+
   # the configuration of ranking leaderboard
   rank:
     ...
```
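Pulling the changed fields together, a quick sanity check of such a `benchmarkingjob` config can be sketched in Python. `validate_benchmarking_config` is a hypothetical helper for illustration, not part of ianvs.

```python
# Minimal sketch: check the benchmarkingjob fields shown in the diff above.
# validate_benchmarking_config is a hypothetical helper, not part of ianvs.
def validate_benchmarking_config(cfg: dict) -> None:
    """Raise ValueError if a required benchmarkingjob field is missing."""
    job = cfg.get("benchmarkingjob")
    if job is None:
        raise ValueError("missing top-level 'benchmarkingjob' section")
    for field in ("name", "workspace", "testenv", "test_object", "rank"):
        if field not in job:
            raise ValueError(f"benchmarkingjob is missing required field '{field}'")

# The example config from the diff, as a plain dict.
config = {
    "benchmarkingjob": {
        "name": "benchmarkingjob",
        "workspace": "./workspace-mmlu",
        "testenv": "./examples/cloud-edge-collaborative-inference-for-llm/testenv/testenv.yaml",
        "test_object": {},
        "rank": {},
    }
}
validate_benchmarking_config(config)  # passes silently for the config above
```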

docs/guides/how-to-contribute-algorithms.md

Lines changed: 6 additions & 5 deletions
```diff
@@ -4,16 +4,17 @@ Ianvs serves as testing tools for test objects, e.g., algorithms. Ianvs does NOT
 
 For algorithm contributors, you can:
 
-1. Release a repo independent of ianvs, but the interface should still follow the SIG AI algorithm interface to launch ianvs. Here are two examples showing how to develop an algorithm for testing in Ianvs.
-Here are two examples show how to development algorithm for testing in Ianvs.
-* [incremental-learning]
+1. Release a repo independent of ianvs, but the interface should still follow the SIG AI algorithm interface to launch ianvs. Here are few examples showing how to develop an algorithm for testing in Ianvs:
+* [cloud-edge-collaborative-inference-for-llm]
 * [single-task-learning]
-2. Integrated the targeted algorithm into sedna so that ianvs can use it directly. in this case, you can connect with sedna owners for help.
+* [incremental-learning]
+2. Integrate the targeted algorithm into sedna so that ianvs can use it directly. In this case, you can connect with sedna owners for help.
 
 Also, if a new algorithm has already been integrated into Sedna, it can be used in Ianvs directly.
 
 [Sedna Lib]: https://github.com/kubeedge/sedna/tree/main/lib
 [incremental-learning]: ../proposals/algorithms/incremental-learning/basicIL-fpn.md
 [single-task-learning]: ../proposals/algorithms/single-task-learning/fpn.md
 [examples directory]: ../../../../examples
-[Sedna repository]: https://github.com/kubeedge/sedna
+[Sedna repository]: https://github.com/kubeedge/sedna
+[cloud-edge-collaborative-inference-for-llm]: https://github.com/kubeedge/ianvs/tree/main/examples/cloud-edge-collaborative-inference-for-llm
```
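To make the expected interface shape concrete, here is a toy sketch of the estimator-style object a contributed algorithm typically exposes (train/predict/evaluate). `MyAlgorithm` and its majority-label logic are illustrative assumptions; a real example wraps an actual model and registers the class so ianvs can instantiate it.

```python
# Toy sketch of an estimator-style test object. The class name and the
# majority-label "model" are illustrative, not a real ianvs algorithm.
class MyAlgorithm:
    def __init__(self, **kwargs):
        self.hyperparameters = kwargs
        self.majority = None

    def train(self, train_labels, **kwargs):
        # toy "training": remember the most frequent label
        labels = list(train_labels)
        self.majority = max(set(labels), key=labels.count)

    def predict(self, data, **kwargs):
        # predict the remembered majority label for every sample
        return [self.majority for _ in data]

    def evaluate(self, data, labels, **kwargs):
        # fraction of predictions matching the reference labels
        preds = self.predict(data)
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```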

docs/guides/how-to-contribute-test-environments.md

Lines changed: 30 additions & 25 deletions
````diff
@@ -20,36 +20,40 @@ testenv:
   # dataset configuration
   dataset:
     # the url address of train dataset index; string type;
-    train_index: "./dataset/train_data/index.txt"
+    train_data: "./dataset/mmlu-5-shot/train_data/data.json"
     # the url address of test dataset index; string type;
-    test_index: "./dataset/test_data/index.txt"
-
-  # model eval configuration of incremental learning;
-  model_eval:
-    # metric used for model evaluation
-    model_metric:
-      # metric name; string type;
-      name: "f1_score"
-      # the url address of python file
-      url: "./examples/pcb-aoi/incremental_learning_bench/fault_detection/testenv/f1_score.py"
-
-    # condition of triggering inference model to update
-    # threshold of the condition; types are float/int
-    threshold: 0.01
-    # operator of the condition; string type;
-    # values are ">=", ">", "<=", "<" and "=";
-    operator: ">="
+    test_data_info: "./dataset/mmlu-5-shot/test_data/metadata.json"
 
   # metrics configuration for test case's evaluation; list type;
   metrics:
     # metric name; string type;
-    - name: "f1_score"
+    - name: "Accuracy"
       # the url address of python file
-      url: "./examples/pcb-aoi/incremental_learning_bench/fault_detection/testenv/f1_score.py"
-    - name: "samples_transfer_ratio"
+      url: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/accuracy.py"
+
+    - name: "Edge Ratio"
+      url: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/edge_ratio.py"
+
+    - name: "Cloud Prompt Tokens"
+      url: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/cloud_prompt_tokens.py"
 
-  # incremental rounds setting for incremental learning paradigm.; int type; default value is 2;
-  incremental_rounds: 2
+    - name: "Cloud Completion Tokens"
+      url: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/cloud_completion_tokens.py"
+
+    - name: "Edge Prompt Tokens"
+      url: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/edge_prompt_tokens.py"
+
+    - name: "Edge Completion Tokens"
+      url: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/edge_completion_tokens.py"
+
+    - name: "Time to First Token"
+      url: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/time_to_first_token.py"
+
+    - name: "Throughput"
+      url: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/throughput.py"
+
+    - name: "Internal Token Latency"
+      url: "./examples/cloud-edge-collaborative-inference-for-llm/testenv/internal_token_latency.py"
 ```
 
 It can be found that for a test, we need to set up the three fields:
````
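Each metric `url` above points at a Python file. As a hedged sketch of what such a file computes, here is a plain accuracy function; the real `accuracy.py` in the example additionally registers the function so ianvs can resolve it by the name given in `testenv.yaml`, which is omitted here.

```python
# Sketch of a metric file like the "Accuracy" entry above. Registration with
# the framework is omitted; only the computation is shown.
def accuracy(y_true, y_pred) -> float:
    """Fraction of predictions matching the reference answers."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

print(accuracy(["A", "B", "C", "D"], ["A", "B", "C", "A"]))  # prints 0.75
```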
```diff
@@ -62,11 +66,12 @@ That means, if you want to test on a different dataset, different model, or diff
 
 ## Add a new test environment
 
-Please refer to the examples directory, [pcb-aoi](https://github.com/kubeedge/ianvs/tree/main/examples/pcb-aoi) is a scenario for testing.
+Please refer to the examples directory, [cloud-edge-collaborative-inference-for-llm](https://github.com/kubeedge/ianvs/tree/main/examples/cloud-edge-collaborative-inference-for-llm) is a scenario for testing.
 We can regard it as a subject for a student that needs to take an exam, the test env is like an examination paper,
 and the test job is like the student.
 
-For a subject `pcb-aoi`, a new examination paper could be added to the subdirectory, on the same level as a `benchmarking job`.
+For a subject `cloud-edge-collaborative-inference-for-llm`, a new examination paper could be added to the subdirectory, on the same level as a `benchmarking job`.
+
 The detailed steps could be the following:
 
 1. Copy `benchmarking job` and name `benchmarking job_2` or any other intuitive name.
```
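The copy step above can be sketched as a small script. The paths and file contents below are placeholders in a scratch directory, imitating (not reproducing) the example layout.

```python
# Scratch sketch of "copy the benchmarking job and point it at a new test
# env". Paths and contents are placeholders, not the real example tree.
from pathlib import Path
import shutil

root = Path("scratch/cloud-edge-collaborative-inference-for-llm")
(root / "testenv").mkdir(parents=True, exist_ok=True)
(root / "benchmarkingjob.yaml").write_text('testenv: "./testenv/testenv.yaml"\n')

# copy the existing test env as the new "examination paper"
shutil.copytree(root / "testenv", root / "testenv_2", dirs_exist_ok=True)

# copy the benchmarking job config and point it at the new test env
job2 = (root / "benchmarkingjob.yaml").read_text().replace(
    "testenv/testenv.yaml", "testenv_2/testenv.yaml"
)
(root / "benchmarkingjob_2.yaml").write_text(job2)
```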
