Commit df43f85

Bring back the old behavior

Signed-off-by: Sasha Bogicevic <sasha.bogicevic@iohk.io>

1 parent c65d9dd, commit df43f85

4 files changed (+45, −28 lines)
.github/workflows/ci-nix.yaml

Lines changed: 1 addition & 1 deletion

@@ -91,7 +91,7 @@ jobs:
         options: '-o $(pwd)/../benchmarks/ledger-bench.html'
       - package: hydra-cluster
         bench: bench-e2e
-        options: 'standalone datasets/1-node.json datasets/3-nodes.json --output-directory $(pwd)/../benchmarks --timeout 1000s'
+        options: 'single datasets/1-node.json datasets/3-nodes.json --output-directory $(pwd)/../benchmarks --timeout 1000s'
     steps:
       - name: 📥 Checkout repository
         uses: actions/checkout@v4

hydra-cluster/README.md

Lines changed: 27 additions & 21 deletions
@@ -130,33 +130,39 @@ produces a `results.csv` file in a work directory. To plot the transaction
 confirmation times you can use the `bench/plot.sh` script, passing it the
 directory containing the benchmark's results.
 
-To run and plot results of the benchmark:
+For the benchmarks, you can choose between generating either a constant-size
+UTxO set or a growing UTxO set.
 
-- Generate the dataset
+Constant UTxO set:
+Each transaction spends one input and creates exactly one new output (1-in-1-out), so the total number of
+UTxOs in the set remains roughly the same over time.
 
-```sh
-cabal run bench-e2e -- dataset --number-of-txs 10 --output-directory 1"
-```
+Growing UTxO set:
+Each transaction spends one input but creates two outputs gradually increasing the total number of UTxOs as more
+transactions are processed. For this we use the `--number-of-txs` argument.
+
+This distinction allows you to measure performance under different realistic UTxO-set growth scenarios on Cardano.
 
-- Run the generated dataset
+
+To generate, run and then plot results of the benchmark:
 
 ```sh
-cabal run bench-e2e -- standalone 1/dataset.json --output-directory out"
+cabal run bench-e2e -- datasets --number-of-txs 10 --output-directory out
 ./hydra-cluster/bench/plot.sh out
 ```
 
 Which will produce an output like:
 
 ```
-Reading dataset from: 1/dataset.json
-Running benchmark with datasets: ["1/dataset.json"]
+Writing dataset to: out/dataset.json
+Saved dataset in: out/dataset.json
 Test logs available in: out/test.log
 Starting benchmark
 Seeding network
 Fund scenario from faucet
-Fuel node key "92caede6c58affa96718ab4f47bb34639c135df3a7428aa118b13f25236c02e9"
-Fuel node key "17a705d22d4ee258400067ee7c8c3a314513f24c6271c8524e085049d1fdd449"
-Fuel node key "9951c3506f6f56e3d1871c8a2a0e88e61d32593663f9585e10d3da93b9caec87"
+Fuel node key "006ba2f18d2e08f1cb96d3a425090768e3b6dc5e7f613a882509a02af668e6d7"
+Fuel node key "33184090500d0c26994df825800d169021e6dc32ecf1633d0903c28eecd87830"
+Fuel node key "d7f2a66d3f7bc9bdf135ad28b5106ee751aa5725d767336a2aa1ee19a5532c00"
 Publishing hydra scripts
 Starting hydra cluster in out
 Initializing Head

@@ -172,24 +178,24 @@ Closing the Head
 Writing results to: out/results.csv
 Finalizing the Head
 Confirmed txs/Total expected txs: 30/30 (100.00 %)
-Average confirmation time (ms): 60.917365233
-P99: 74.32681356ms
-P95: 72.72738555ms
-P50: 62.208124ms
+Average confirmation time (ms): 59.977068200
+P99: 75.43316676ms
+P95: 70.41318959999998ms
+P50: 60.638328ms
 Invalid txs: 0
 Fanout outputs: 3
 Writing report to: out/end-to-end-benchmarks.md
 
-./hydra-cluster/bench/plot.sh out-standalone
-line 0: warning: Cannot find or open file "out-standalone/system.csv"
-Created plot: out-standalone/results.png
+./hydra-cluster/bench/plot.sh out
+line 0: warning: Cannot find or open file "out/system.csv"
+Created plot: out/results.png
 ```
 
 Note that if it's present in the environment, benchmark executable will gather basic system-level statistics about the RAM, CPU, and network bandwidth used. The `plot.sh` script then displays those alongside tx confirmation time in a single graph.
 
 The benchmark can be run in three modes:
 
-* `standalone`: Benchmark a single or multiple _datasets_.
-* `dataset`: Generates a _dataset_. This is useful to track the evolution of hydra-node's performance over some well-known datasets over time and produce a human-readable summary.
+* `single`: Generate a single _dataset_ and runs the benchmark with it.
+* `datasets`: Runs one or more pre-existing _datasets_ in sequence and collect their results in a single markdown formatted file. This is useful to track the evolution of hydra-node's performance over some well-known datasets over time and produce a human-readable summary.
 * `demo`: Generates transactions against an already running network of cardano and hydra nodes. This can serve as a workload when testing network-resilience scenarios, such as packet loss or node failures. See [this CI workflow](https://github.com/cardano-scaling/hydra/blob/master/.github/workflows/network-test.yaml) for how it is used.
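The benchmark output above reports nearest-rank percentiles (P99, P95, P50) of transaction confirmation times. As a minimal sketch of how such figures can be computed, assuming a hypothetical `percentile` helper (this is not hydra-cluster's actual implementation):

```haskell
import Data.List (sort)

-- Nearest-rank percentile over a list of confirmation times (ms).
-- `percentile` is an illustrative helper, not part of hydra-cluster.
percentile :: Double -> [Double] -> Double
percentile p xs =
  let sorted = sort xs
      rank = ceiling (p / 100 * fromIntegral (length xs)) - 1
  in sorted !! max 0 (min (length xs - 1) rank)

main :: IO ()
main = do
  print (percentile 50 [1 .. 100])  -- 50.0
  print (percentile 99 [1 .. 100])  -- 99.0
```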

hydra-cluster/bench/Bench/Options.hs

Lines changed: 9 additions & 4 deletions
@@ -40,9 +40,11 @@ data Options
       }
   | DatasetOptions
       { outputDirectory :: Maybe FilePath
+      , timeoutSeconds :: NominalDiffTime
       , datasetUTxO :: UTxOSize
       , numberOfTxs :: Int
       , clusterSize :: Word64
+      , startingNodeId :: Int
       }
   | DemoOptions
       { outputDirectory :: Maybe FilePath

@@ -57,8 +59,8 @@ benchOptionsParser :: ParserInfo Options
 benchOptionsParser =
   info
     ( hsubparser
-        ( command "standalone" standaloneOptionsInfo
-            <> command "dataset" datasetOptionsInfo
+        ( command "single" standaloneOptionsInfo
+            <> command "datasets" datasetOptionsInfo
            <> command "demo" demoOptionsInfo
        )
        <**> helper

@@ -197,17 +199,20 @@ datasetOptionsInfo =
   info
     datasetOptionsParser
     ( progDesc
-        "Generate and run one or several dataset files, concatenating the \
-        \ output to single document."
+        "Run scenarios from one or several dataset files, concatenating the \
+        \ output to single document. This is useful to produce a summary \
+        \ page describing alternative runs."
     )

 datasetOptionsParser :: Parser Options
 datasetOptionsParser =
   DatasetOptions
     <$> optional outputDirectoryParser
+    <*> timeoutParser
     <*> utxoSizeParser
     <*> numberOfTxsParser
     <*> clusterSizeParser
+    <*> startingNodeIdParser

 filepathParser :: Parser FilePath
 filepathParser =
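The effect of the subcommand rename in `Options.hs` can be sketched without optparse-applicative as a plain lookup on the first CLI argument. `Mode` and `parseMode` below are hypothetical names for illustration, not the real hydra-cluster types:

```haskell
-- Toy model of the renamed bench-e2e subcommands: "standalone" became
-- "single" and "dataset" became "datasets"; the old names no longer parse.
data Mode = Single | Datasets | Demo deriving (Eq, Show)

parseMode :: String -> Maybe Mode
parseMode "single"   = Just Single
parseMode "datasets" = Just Datasets
parseMode "demo"     = Just Demo
parseMode _          = Nothing  -- e.g. the old "standalone" / "dataset"

main :: IO ()
main = do
  print (parseMode "single")      -- Just Single
  print (parseMode "standalone")  -- Nothing
```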

hydra-cluster/bench/Main.hs

Lines changed: 8 additions & 2 deletions
@@ -41,14 +41,20 @@ main = do
       benchDemo networkId nodeSocket timeoutSeconds hydraClients
       summarizeResults outputDirectory [results]
       removeDirectoryRecursive workDir
-    DatasetOptions{outputDirectory, datasetUTxO, numberOfTxs, clusterSize} -> do
+    DatasetOptions{outputDirectory, timeoutSeconds, datasetUTxO, numberOfTxs, clusterSize, startingNodeId} -> do
       (_, faucetSk) <- keysFor Faucet
       workDir <- maybe (createTempDir "bench-e2e") checkEmpty outputDirectory
+      let action = bench startingNodeId timeoutSeconds
       dataset <- generate $ case datasetUTxO of
         Constant -> generateConstantUTxODataset faucetSk (fromIntegral clusterSize) numberOfTxs
         Growing -> generateGrowingUTxODataset faucetSk (fromIntegral clusterSize) numberOfTxs
       saveDataset (workDir </> "dataset.json") dataset
       putStrLn $ "Saved dataset in: " <> (workDir </> "dataset.json")
+      results <- do
+        -- XXX: Wait between each bench run to give the OS time to cleanup resources??
+        threadDelay 10
+        runSingle dataset workDir action
+      summarizeResults outputDirectory [results]
  where
   checkEmpty fp = do
     createDirectoryIfMissing True fp

@@ -83,7 +89,7 @@ main = do

 loadDataset :: FilePath -> IO Dataset
 loadDataset f = do
-  putStrLn $ "Reading dataset from: " <> f
+  putStrLn $ "Reading datasets from: " <> f
   eitherDecodeFileStrict' f >>= either (die . show) pure

 saveDataset :: FilePath -> Dataset -> IO ()
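The two dataset shapes this commit's README describes differ only in fan-out per transaction: a constant dataset is 1-in-1-out, a growing dataset is 1-in-2-out. A toy model of the resulting UTxO-set sizes, using illustrative helpers (not hydra-cluster's `generateConstantUTxODataset` / `generateGrowingUTxODataset`):

```haskell
-- Constant dataset: each tx spends one input and creates one output,
-- so the UTxO-set size never changes.
constantSetSize :: Int -> Int -> Int
constantSetSize initial _numberOfTxs = initial

-- Growing dataset: each tx spends one input and creates two outputs,
-- adding one UTxO to the set per transaction.
growingSetSize :: Int -> Int -> Int
growingSetSize initial numberOfTxs = initial + numberOfTxs

main :: IO ()
main = do
  print (constantSetSize 1 10)  -- 1
  print (growingSetSize 1 10)   -- 11
```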
