
Commit a3e86c3

run_benchmarks.py script and folder
Signed-off-by: adir <adir@il.ibm.com>
1 parent ecacf83 commit a3e86c3

File tree: 2 files changed, +33 −2 lines


cmd/benchmarking/run_benchmarks.py

Lines changed: 0 additions & 1 deletion

```diff
@@ -242,7 +242,6 @@ def append_dict_as_row(filename: str, data: dict):
     ('BenchmarkTransferProofGeneration', "", ""),
     ('BenchmarkIssuer', "", issuer_benchmarks_folder),
     ('BenchmarkProofVerificationIssuer', "", issuer_benchmarks_folder),
-    ('BenchmarkVerificationSenderProof', "", ""),
     ('BenchmarkTransferServiceTransfer', "", v1_benchmarks_folder),
 ]
 parallel_tests = [
```
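The hunk header above shows the `append_dict_as_row(filename: str, data: dict)` helper the script uses to build its CSV summary. A minimal sketch of what a helper with that signature typically looks like (an illustration, not the script's actual body):

```python
import csv
import os

def append_dict_as_row(filename: str, data: dict):
    """Append `data` as one CSV row, writing a header row first if the file is new."""
    file_exists = os.path.isfile(filename)
    with open(filename, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(data.keys()))
        if not file_exists:
            # First invocation: emit the column names before the first row.
            writer.writeheader()
        writer.writerow(data)
```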

docs/benchmark/core/dlognogh/dlognogh.md

Lines changed: 33 additions & 1 deletion

````diff
@@ -215,4 +215,36 @@
 --- PASS: TestParallelBenchmarkSender/Setup(bits_32,_curve_BN254,_#i_2,_#o_2)_with_10_workers (13.96s)
 PASS
 ok  	github.com/hyperledger-labs/fabric-token-sdk/token/core/zkatdlog/nogh/v1/transfer 14.566s
-```
+```
````

The section added below the existing benchmark output:
### Running selected benchmarks with run_benchmarks.py

The run_benchmarks.py script runs a selection of benchmarks and summarizes their results in a CSV file, so the results can be tracked as more optimizations are added.

To run the script, go to the ../fabric-token-sdk/cmd/benchmarking folder and run

```shell
python run_benchmarks.py
```
This creates a subfolder that collects the logs of all the benchmarks, together with a CSV file (**benchmark_results.csv**) that gains a separate row for every invocation of the script, holding the selected metrics collected for all the benchmarks.

The folder is named **benchmark_logs_<date>**, for example **benchmark_logs_2026-01-19_06-56-41**, where the date indicates when the script was run.
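The timestamp in the folder name follows a pattern that can be produced with strftime; a minimal sketch (an illustration, not necessarily the script's exact code):

```python
from datetime import datetime

# Matches the documented example, e.g. benchmark_logs_2026-01-19_06-56-41.
folder = datetime.now().strftime("benchmark_logs_%Y-%m-%d_%H-%M-%S")
```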
The script supports the following flags:

`--count`
: the number of times to run each benchmark

`--timeout`
: the maximum time allowed for a benchmark (e.g. 4s). The default is 0, implying no limit.

`--benchName`
: the single benchmark the script should run. The default is to run the whole selection of benchmarks.
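A minimal argparse sketch of how a script could declare these flags (an illustration only; the flag names come from the list above, while the defaults for --count and --benchName are assumptions):

```python
import argparse

parser = argparse.ArgumentParser(
    description="Run selected benchmarks and append results to a CSV.")
parser.add_argument("--count", type=int, default=1,
                    help="number of times to run each benchmark (assumed default: 1)")
parser.add_argument("--timeout", default="0",
                    help="maximum time per benchmark, e.g. 4s; 0 means no limit")
parser.add_argument("--benchName", default=None,
                    help="run only this benchmark; default runs the whole selection")
args = parser.parse_args()
```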
**Example runs:**

- Running all the selected benchmarks:

```shell
python run_benchmarks.py
```

- Running just one selected benchmark 5 times for no more than 4 seconds per run:

```shell
python run_benchmarks.py --benchName BenchmarkSender --timeout 4s --count 5
```
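Since each invocation appends one row to **benchmark_results.csv**, progress across optimization work can be compared by reading the file back; a small sketch (the column layout is whatever the script wrote, so no column names are assumed here):

```python
import csv

with open("benchmark_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Each row corresponds to one run_benchmarks.py invocation.
        print(row)
```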
