Commit 88183a5

Revived branch from growing utxo benchmark (#2443)
Revive of #2431. That PR changed too many things and resulted in too annoying a developer experience when using `cabal bench hydra-cluster`, so we tried again. Fixes #2441

---

* [ ] CHANGELOG updated or not needed
* [ ] Documentation updated or not needed
* [ ] Haddocks updated or not needed
* [ ] No new TODOs introduced or explained hereafter
2 parents 404ed4c + df43f85 commit 88183a5

File tree

9 files changed: +160 -70 lines changed

.github/workflows/ci-nix.yaml

Lines changed: 1 addition & 1 deletion
@@ -91,7 +91,7 @@ jobs:
           options: '-o $(pwd)/../benchmarks/ledger-bench.html'
         - package: hydra-cluster
           bench: bench-e2e
-          options: 'datasets datasets/1-node.json datasets/3-nodes.json --output-directory $(pwd)/../benchmarks --timeout 1000s'
+          options: 'single datasets/1-node.json datasets/3-nodes.json --output-directory $(pwd)/../benchmarks --timeout 1000s'
     steps:
       - name: 📥 Checkout repository
         uses: actions/checkout@v4

.github/workflows/network-test.yaml

Lines changed: 1 addition & 1 deletion
@@ -108,7 +108,7 @@ jobs:
           nix run .#legacyPackages.x86_64-linux.hydra-cluster.components.benchmarks.bench-e2e -- \
             demo \
             --output-directory=benchmarks \
-            --scaling-factor="$scaling_factor" \
+            --number-of-txs="$scaling_factor" \
             --timeout=1200s \
             --testnet-magic 42 \
             --node-socket=demo/devnet/node.socket \

hydra-cluster/README.md

Lines changed: 35 additions & 13 deletions
@@ -130,41 +130,63 @@ produces a `results.csv` file in a work directory. To plot the transaction
 confirmation times you can use the `bench/plot.sh` script, passing it the
 directory containing the benchmark's results.
 
-To run and plot results of the benchmark:
+For the benchmarks, you can choose between generating either a constant-size
+UTxO set or a growing UTxO set.
+
+Constant UTxO set:
+Each transaction spends one input and creates exactly one new output (1-in-1-out), so the total number of
+UTxOs in the set remains roughly the same over time.
+
+Growing UTxO set:
+Each transaction spends one input but creates two outputs gradually increasing the total number of UTxOs as more
+transactions are processed. For this we use the `--number-of-txs` argument.
+
+This distinction allows you to measure performance under different realistic UTxO-set growth scenarios on Cardano.
+
+To generate, run and then plot results of the benchmark:
 
 ```sh
-cabal run bench-e2e -- single --output-directory out"
-bench/plot.sh out
+cabal run bench-e2e -- datasets --number-of-txs 10 --output-directory out
+./hydra-cluster/bench/plot.sh out
 ```
 
 Which will produce an output like:
 
 ```
-Generating dataset with scaling factor: 10
 Writing dataset to: out/dataset.json
+Saved dataset in: out/dataset.json
 Test logs available in: out/test.log
 Starting benchmark
 Seeding network
 Fund scenario from faucet
-Fuel node key "16e61ed92346eb0b0bd1c6d8c0f924b4d1278996a61043a0a42afad193e5f3fb"
+Fuel node key "006ba2f18d2e08f1cb96d3a425090768e3b6dc5e7f613a882509a02af668e6d7"
+Fuel node key "33184090500d0c26994df825800d169021e6dc32ecf1633d0903c28eecd87830"
+Fuel node key "d7f2a66d3f7bc9bdf135ad28b5106ee751aa5725d767336a2aa1ee19a5532c00"
 Publishing hydra scripts
 Starting hydra cluster in out
 Initializing Head
 Committing initialUTxO from dataset
 HeadIsOpen
-Client 1 (node 0): 0/300 (0.00%)
-Client 1 (node 0): 266/300 (88.67%)
+Client 1 (node 0): 1/10 (10.00%)
+Client 2 (node 1): 1/10 (10.00%)
+Client 3 (node 2): 1/10 (10.00%)
+All transactions confirmed. Sweet!
+All transactions confirmed. Sweet!
 All transactions confirmed. Sweet!
 Closing the Head
-Finalizing the Head
 Writing results to: out/results.csv
-Confirmed txs/Total expected txs: 300/300 (100.00 %)
-Average confirmation time (ms): 18.747147496
-P99: 23.100851369999994ms
-P95: 19.81722345ms
-P50: 18.532922ms
+Finalizing the Head
+Confirmed txs/Total expected txs: 30/30 (100.00 %)
+Average confirmation time (ms): 59.977068200
+P99: 75.43316676ms
+P95: 70.41318959999998ms
+P50: 60.638328ms
 Invalid txs: 0
+Fanout outputs: 3
 Writing report to: out/end-to-end-benchmarks.md
+
+./hydra-cluster/bench/plot.sh out
 line 0: warning: Cannot find or open file "out/system.csv"
 Created plot: out/results.png
 ```
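The constant/growing distinction described in the README change above reduces to simple bookkeeping: a "Constant" dataset is 1-in-1-out per transaction, a "Growing" one is 1-in-2-out, netting one extra UTxO per transaction. A minimal Haskell sketch (illustrative only, not code from this PR; the names are made up):

```haskell
-- Illustrative sketch of the two dataset shapes described in the README diff
-- above (not code from the PR).

data UTxOSize = Constant | Growing deriving (Eq, Show)

-- | Expected size of the Head's UTxO set after @n@ transactions, starting
-- from @s@ committed outputs.
utxoSetSizeAfter :: UTxOSize -> Int -> Int -> Int
utxoSetSizeAfter Constant s _ = s     -- each tx replaces the one output it spends
utxoSetSizeAfter Growing s n = s + n  -- each tx nets one extra output

main :: IO ()
main = do
  -- consistent with the sample run above: 3 clients, constant set, "Fanout outputs: 3"
  print (utxoSetSizeAfter Constant 3 30)
  print (utxoSetSizeAfter Growing 3 30)
```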

hydra-cluster/bench/Bench/EndToEnd.hs

Lines changed: 16 additions & 7 deletions
@@ -6,6 +6,7 @@ import Hydra.Prelude
 import Test.Hydra.Prelude
 
 import Bench.Summary (Summary (..), SystemStats, makeQuantiles)
+import Cardano.Api.UTxO qualified as UTxO
 import CardanoNode (findRunningCardanoNode', withCardanoNodeDevnet)
 import Control.Concurrent.Class.MonadSTM (
   MonadSTM (readTVarIO),
@@ -19,14 +20,15 @@ import Control.Lens (to, (^..), (^?))
 import Control.Monad.Class.MonadAsync (mapConcurrently)
 import Data.Aeson (Result (Error, Success), Value, encode, fromJSON, (.=))
 import Data.Aeson.Lens (key, values, _JSON, _Number, _String)
+import Data.Aeson.Types (parseEither)
 import Data.ByteString.Lazy qualified as LBS
 import Data.List qualified as List
 import Data.Map qualified as Map
 import Data.Scientific (Scientific)
 import Data.Set ((\\))
 import Data.Set qualified as Set
 import Data.Time (UTCTime (UTCTime), utctDayTime)
-import Hydra.Cardano.Api (NetworkId, SocketPath, Tx, TxId, UTxO, getVerificationKey, lovelaceToValue, signTx)
+import Hydra.Cardano.Api (Era, NetworkId, SocketPath, Tx, TxId, UTxO, getVerificationKey, lovelaceToValue, signTx)
 import Hydra.Chain.Backend (ChainBackend)
 import Hydra.Chain.Backend qualified as Backend
 import Hydra.Cluster.Faucet (FaucetLog (..), publishHydraScriptsAs, returnFundsToFaucet', seedFromFaucet)
@@ -177,21 +179,23 @@ scenario hydraTracer backend workDir Dataset{clientDatasets, title, description}
     guard $ v ^? key "headId" == Just (toJSON headId)
     v ^? key "contestationDeadline" . _JSON
 
+  -- Write the results already in case we cannot finalize
+  let res = mapMaybe analyze . Map.toList $ processedTransactions
+      aggregates = movingAverage res
+
+  writeResultsCsv (workDir </> "results.csv") aggregates
+
   -- Expect to see ReadyToFanout within 3 seconds after deadline
   remainingTime <- diffUTCTime deadline <$> getCurrentTime
   waitFor hydraTracer (remainingTime + 3) [leader] $
     output "ReadyToFanout" ["headId" .= headId]
 
   putTextLn "Finalizing the Head"
   send leader $ input "Fanout" []
-  waitMatch 100 leader $ \v -> do
+  finalUTxOJSON <- waitMatch 100 leader $ \v -> do
     guard (v ^? key "tag" == Just "HeadIsFinalized")
     guard $ v ^? key "headId" == Just (toJSON headId)
-
-  let res = mapMaybe analyze . Map.toList $ processedTransactions
-      aggregates = movingAverage res
-
-  writeResultsCsv (workDir </> "results.csv") aggregates
+    v ^? key "utxo"
 
   let confTimes = map (\(_, _, a) -> a) res
       numberOfTxs = length confTimes
@@ -200,6 +204,10 @@ scenario hydraTracer backend workDir Dataset{clientDatasets, title, description}
       quantiles = makeQuantiles confTimes
       summaryTitle = fromMaybe "Baseline Scenario" title
      summaryDescription = fromMaybe defaultDescription description
+      numberOfFanoutOutputs =
+        case parseEither (parseJSON @(UTxO.UTxO Era)) finalUTxOJSON of
+          Left _ -> error "Failed to decode Fanout UTxO"
+          Right fanoutUTxO -> UTxO.size fanoutUTxO
 
   pure $
     Summary
@@ -211,6 +219,7 @@ scenario hydraTracer backend workDir Dataset{clientDatasets, title, description}
       , summaryTitle
      , summaryDescription
       , numberOfInvalidTxs
+      , numberOfFanoutOutputs
       }
 
 defaultDescription :: Text
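The `waitMatch` callback changed above follows a common pattern in this codebase: guard on a couple of JSON fields in the `Maybe` monad, then extract another field, so the whole match fails (and `waitMatch` keeps waiting) unless every guard passes. A base-only sketch of that pattern (illustrative; a plain assoc list stands in for the real aeson `Value`):

```haskell
-- Sketch of the guard-then-extract matching style used by the PR's
-- "HeadIsFinalized" handler, using only base (no aeson).
import Control.Monad (guard)

type Msg = [(String, String)] -- stand-in for a JSON object

-- | Returns the "utxo" field only when the message is a HeadIsFinalized
-- for the expected head; any failed guard yields Nothing.
matchFinalized :: String -> Msg -> Maybe String
matchFinalized headId msg = do
  guard (lookup "tag" msg == Just "HeadIsFinalized")
  guard (lookup "headId" msg == Just headId)
  lookup "utxo" msg

main :: IO ()
main = do
  print (matchFinalized "head-1" [("tag", "HeadIsFinalized"), ("headId", "head-1"), ("utxo", "{...}")])
  print (matchFinalized "head-1" [("tag", "SnapshotConfirmed")])
```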

hydra-cluster/bench/Bench/Options.hs

Lines changed: 32 additions & 15 deletions
@@ -29,23 +29,26 @@ import Options.Applicative (
  )
 import Options.Applicative.Builder (argument)
 
+data UTxOSize = Constant | Growing deriving (Eq, Show, Read)
+
 data Options
   = StandaloneOptions
-      { scalingFactor :: Int
-      , clusterSize :: Word64
+      { datasetFiles :: [FilePath]
       , outputDirectory :: Maybe FilePath
       , timeoutSeconds :: NominalDiffTime
       , startingNodeId :: Int
       }
   | DatasetOptions
-      { datasetFiles :: [FilePath]
-      , outputDirectory :: Maybe FilePath
+      { outputDirectory :: Maybe FilePath
       , timeoutSeconds :: NominalDiffTime
+      , datasetUTxO :: UTxOSize
+      , numberOfTxs :: Int
+      , clusterSize :: Word64
       , startingNodeId :: Int
       }
   | DemoOptions
       { outputDirectory :: Maybe FilePath
-      , scalingFactor :: Int
+      , numberOfTxs :: Int
       , timeoutSeconds :: NominalDiffTime
       , networkId :: NetworkId
       , nodeSocket :: SocketPath
@@ -78,13 +81,12 @@ standaloneOptionsInfo :: ParserInfo Options
 standaloneOptionsInfo =
   info
     standaloneOptionsParser
-    (progDesc "Runs a single scenario, generating or reusing a previous dataset from some directory.")
+    (progDesc "Runs a scenario reusing a previous dataset/s from some directory.")
 
 standaloneOptionsParser :: Parser Options
 standaloneOptionsParser =
   StandaloneOptions
-    <$> scalingFactorParser
-    <*> clusterSizeParser
+    <$> many filepathParser
     <*> optional outputDirectoryParser
     <*> timeoutParser
     <*> startingNodeIdParser
@@ -99,14 +101,14 @@ outputDirectoryParser =
         \ If not set, raw text summary will be printed to the console. (default: none)"
     )
 
-scalingFactorParser :: Parser Int
-scalingFactorParser =
+numberOfTxsParser :: Parser Int
+numberOfTxsParser =
   option
     auto
-    ( long "scaling-factor"
+    ( long "number-of-txs"
        <> value 100
        <> metavar "INT"
-       <> help "The scaling factor to apply to transactions generator (default: 100)"
+       <> help "Number of txs to generate (default: 100)"
    )
 
 timeoutParser :: Parser NominalDiffTime
@@ -146,6 +148,19 @@ startingNodeIdParser =
        \ benchmark conflicts with default ports allocation scheme (default: 0)"
    )
 
+utxoSizeParser :: Parser UTxOSize
+utxoSizeParser =
+  option
+    auto
+    ( long "utxo-size"
+       <> value Constant
+       <> metavar "UTxOSize"
+       <> help
+        "Generated UTxO size. This can be 'Constant' where UTxO set has constant size \
+        \ depending on the number of generated txs or 'Growing' where each new generated \
+        \ transaction produces one extra output which makes the UTxO in the Head grow."
+    )
+
 demoOptionsInfo :: ParserInfo Options
 demoOptionsInfo =
   info
@@ -161,7 +176,7 @@ demoOptionsParser :: Parser Options
 demoOptionsParser =
   DemoOptions
     <$> optional outputDirectoryParser
-    <*> scalingFactorParser
+    <*> numberOfTxsParser
     <*> timeoutParser
     <*> networkIdParser
     <*> nodeSocketParser
@@ -192,9 +207,11 @@ datasetOptionsInfo =
 datasetOptionsParser :: Parser Options
 datasetOptionsParser =
   DatasetOptions
-    <$> many filepathParser
-    <*> optional outputDirectoryParser
+    <$> optional outputDirectoryParser
     <*> timeoutParser
+    <*> utxoSizeParser
+    <*> numberOfTxsParser
+    <*> clusterSizeParser
     <*> startingNodeIdParser
 
 filepathParser :: Parser FilePath
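A note on why the new `UTxOSize` type derives `Read`: optparse-applicative's `auto` reader (used by `utxoSizeParser` above) parses option arguments via the `Read` instance, which is what makes `--utxo-size Growing` work without a hand-written reader. A base-only sketch of that mechanism, where `readMaybe` mirrors what `auto` does internally:

```haskell
-- Sketch (base only) of Read-based option parsing as used by
-- optparse-applicative's 'auto' reader for the UTxOSize flag.
import Text.Read (readMaybe)

data UTxOSize = Constant | Growing deriving (Eq, Show, Read)

-- | 'auto' does essentially this, turning Nothing into a CLI parse error.
parseUTxOSize :: String -> Maybe UTxOSize
parseUTxOSize = readMaybe

main :: IO ()
main = do
  print (parseUTxOSize "Growing")
  print (parseUTxOSize "huge")
```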

hydra-cluster/bench/Bench/Summary.hs

Lines changed: 4 additions & 1 deletion
@@ -29,6 +29,7 @@ data Summary = Summary
   , summaryTitle :: Text
   , summaryDescription :: Text
   , quantiles :: Vector Double
+  , numberOfFanoutOutputs :: Int
   }
   deriving stock (Generic, Eq, Show)
 
@@ -44,6 +45,7 @@ errorSummary Dataset{title, clientDatasets} (HUnitFailure sourceLocation reason) =
     , summaryDescription =
        pack $ "Benchmark failed " <> formatLocation sourceLocation <> ": " <> formatFailureReason reason
     , quantiles = mempty
+    , numberOfFanoutOutputs = 0
     }
  where
   formatLocation = maybe "" (\loc -> "at " <> prettySrcLoc loc)
@@ -54,7 +56,7 @@ makeQuantiles times =
   Statistics.quantilesVec def (fromList [0 .. 99]) 100 (fromList $ map (fromRational . (* 1000) . toRational . nominalDiffTimeToSeconds) times)
 
 textReport :: (Summary, SystemStats) -> [Text]
-textReport (Summary{totalTxs, numberOfTxs, averageConfirmationTime, quantiles, numberOfInvalidTxs}, systemStats) =
+textReport (Summary{totalTxs, numberOfTxs, averageConfirmationTime, quantiles, numberOfInvalidTxs, numberOfFanoutOutputs}, systemStats) =
   let frac :: Double
       frac = 100 * fromIntegral numberOfTxs / fromIntegral totalTxs
   in [ pack $ printf "Confirmed txs/Total expected txs: %d/%d (%.2f %%)" numberOfTxs totalTxs frac
@@ -69,6 +71,7 @@ textReport (Summary{totalTxs, numberOfTxs, averageConfirmationTime, quantiles, numberOfInvalidTxs, numberOfFanoutOutputs}, systemStats) =
            else []
        )
        ++ ["Invalid txs: " <> show numberOfInvalidTxs]
+       ++ ["Fanout outputs: " <> show numberOfFanoutOutputs]
        ++ if null systemStats then [] else "\n### Memory data \n" : [unlines systemStats]
 
 markdownReport :: UTCTime -> [(Summary, SystemStats)] -> [Text]
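For context on the report lines: `makeQuantiles` uses the statistics package to compute 100 percentiles of the confirmation times, scaled to milliseconds, which is where the P99/P95/P50 lines in the benchmark output come from. A simplified, base-only stand-in using the nearest-rank method (the real code uses `Statistics.quantilesVec`, which interpolates, so values can differ slightly):

```haskell
-- Simplified nearest-rank percentile, standing in for the interpolating
-- Statistics.quantilesVec used by makeQuantiles.
import Data.List (sort)

percentile :: Int -> [Double] -> Double
percentile p xs =
  let sorted = sort xs
      n = length sorted
      -- nearest-rank: the smallest value covering at least p% of the samples
      rank = max 1 (ceiling (fromIntegral p / 100 * fromIntegral n :: Double))
   in sorted !! (rank - 1)

main :: IO ()
main = do
  let timesMs = map (* 1000) [0.058, 0.060, 0.062, 0.075] -- seconds -> ms
  print (percentile 50 timesMs)
  print (percentile 99 timesMs)
```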
