
Commit b6c6173 ("script revert research changes"), 1 parent: dd130a8

File tree

7 files changed: +1 addition, -13 deletions

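The change is mechanical: every file in the diff loses its `displayed_sidebar: research` frontmatter key. A diff like this could be produced by a small script along the following lines (the key and paths come from the diff below; the script itself is a hypothetical sketch, not the actual script referenced in the commit message):

```python
import re
from pathlib import Path

# Hypothetical sketch: drop the `displayed_sidebar: research` key from the
# YAML frontmatter of every markdown file under docs/research/.
# Not the actual script from the commit, just an illustration of the change.
def strip_sidebar_key(text: str) -> str:
    # Only touch the frontmatter block delimited by the leading `---` pair.
    match = re.match(r"^---\n(.*?\n)---\n", text, flags=re.DOTALL)
    if not match:
        return text
    frontmatter = match.group(1)
    cleaned = "".join(
        line for line in frontmatter.splitlines(keepends=True)
        if line.strip() != "displayed_sidebar: research"
    )
    return "---\n" + cleaned + "---\n" + text[match.end():]

if __name__ == "__main__":
    for path in Path("docs/research").rglob("*.md"):
        path.write_text(strip_sidebar_key(path.read_text()))
```

Applied to each file's frontmatter this yields exactly the one-line deletions shown below; body text outside the `---` delimiters is left untouched.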

docs/research/benchmarks/postgres-adoption.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -1,7 +1,6 @@
 ---
 title: PostgreSQL
 description: Document that describes why Nim-Waku started to use Postgres and shows some benchmark and comparison results.
-displayed_sidebar: research
 ---
 
 ## Introduction
```

docs/research/benchmarks/test-results-summary.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -1,6 +1,5 @@
 ---
 title: Performance Benchmarks and Test Reports
-displayed_sidebar: research
 ---
 
 
```

docs/research/research-and-studies/capped-bandwidth.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -1,6 +1,5 @@
 ---
 title: Capped Bandwidth in Waku
-displayed_sidebar: research
 ---
 
 This post explains i) why The Waku Network requires a capped bandwidth per shard and ii) how to achieve it by rate limiting with RLN v2.
```

docs/research/research-and-studies/incentivisation.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -1,6 +1,5 @@
 ---
 title: Incentivisation
-displayed_sidebar: research
 ---
 
 Waku is a family of decentralised communication protocols. The Waku Network (TWN) consists of independent nodes running Waku protocols. TWN needs incentivisation (shortened to i13n) to ensure proper node behaviour.
```

docs/research/research-and-studies/maximum-bandwidth.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -1,6 +1,5 @@
 ---
 title: Maximum Bandwidth for Global Adoption
-displayed_sidebar: research
 ---
 
 **TLDR**: This issue aims to **set the maximum bandwidth** in `x Mbps` that each waku shard should consume so that the **maximum amount of people can run a full waku node**. It is up to https://github.com/waku-org/research/issues/22 to specify how this maximum will be enforced.
```

docs/research/research-and-studies/message-propagation.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -1,6 +1,5 @@
 ---
 title: Message Propagation Times With Waku-RLN
-displayed_sidebar: research
 ---
 
 **TLDR**: We present the results of 1000 `nwaku` nodes running `rln` using different message sizes, in a real network with bandwidth limitations and network delays. The goal is to study the message propagation delay distribution, and how it's affected by i) rln and ii) message size in a real environment. We observe that for messages of `10kB` the average end-to-end propagation delay is `508 ms`. We can also observe that the message propagation delays are severely affected when increasing the message size, which indicates that it is not a good idea to use waku for messages of eg. `500kB`. See simulation parameters.
```

docs/research/research-and-studies/rln-key-benchmarks.md

Lines changed: 1 addition & 7 deletions

```diff
@@ -1,27 +1,23 @@
 ---
 title: RLN Key Benchmarks
-displayed_sidebar: research
 ---
 
 ## Introduction
 
 Since RLN has been chosen as the spamming protection mechanism for waku, we must understand the practical implications of using it. This issue explains the main differences between `relay` and `rln-relay` and gives some benchmarks after running simulations using `waku-simulator`, in a network with the following characteristics:
-
 - 100 nwaku nodes, each one with a valid rln membership and publishing a message every 10 seconds to a common topic.
 - rln contract deployed in Ethereum Sepolia
 - 10.000 memberships registered in the contract
 - pure relay (store and light protocols disabled)
 
 The main deltas `rln` vs `rln-relay` are:
-
 - New `proof ` field in `WakuMessage` containing 384 extra bytes. This field must be generated and attached to each message.
 - New validator, that uses `proof` to `Accept` or `Reject` the message. The proof has to be verified.
 - New dependency on a blockchain, Ethereum, or any EVM chain, to keep track of the members allowed to publish.
 
 But what are the practical implications of these?
 
 ## TLDR:
-
 - Proof generation is constant-ish. 0.15 second for each proof
 - Proof verification is constant-ish, 0.012 seconds. In a network with 10k nodes and D=6 this would add an overhead delay of 0.06 seconds.
 - Gossipsub scoring drops connections from spammer peers, which acts as the punishment (instead of slashing). Validated in the simulation.
```
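To put the 384-byte `proof` field in perspective, a quick back-of-the-envelope check using the simulation setup quoted above (100 nodes, one message every 10 seconds); the aggregation is our own arithmetic, not a figure from the benchmark:

```python
# Back-of-the-envelope: raw bandwidth cost of the 384-byte `proof` field
# under the quoted publish rate (100 nodes, one message every 10 s).
# Input numbers come from the diff above; the aggregation is our assumption.
PROOF_BYTES = 384
NODES = 100
PUBLISH_INTERVAL_S = 10

msgs_per_second = NODES / PUBLISH_INTERVAL_S        # 10 msg/s network-wide
proof_overhead_bps = msgs_per_second * PROOF_BYTES  # bytes/s of pure proof data

print(f"{msgs_per_second:.0f} msg/s -> {proof_overhead_bps:.0f} B/s proof overhead")
```

Even before gossipsub duplication multiplies this by roughly the mesh degree, a few kilobytes per second of proof data is small next to the bandwidth budgets discussed in the other documents.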
```diff
@@ -37,7 +33,7 @@ Seems that proof generation times stay constant no matter the size of the messag
 
 On the other hand, rln also adds an overhead in the gossipsub validation process. On average it takes `0.012 seconds` to verify the proof. It seems that when we increase the message size, validation time seems to increase a bit, which can be for any other reason besides rln itself (eg deserializing the message might take longer).
 
-This number seems reasonable and shouldn't affect that much the average delay of a message. Assuming a d-regular graph, with `10k` nodes and `D=6`, we can have up to `log(total_nodes)/log(D)=5` hops. So in the worst case, rln will add a network latency of `0.012*5 = 0.06 seconds`
+This number seems reasonable and shouldn't affect that much the average delay of a message. Assuming a d-regular graph, with `10k` nodes and `D=6`, we can have up to `log(total_nodes)/log(D)=5` hops. So in the worst case, rln will add a network latency of `0.012*5 = 0.06 seconds`
 
 ![proof-verification-times](imgs/proof-verification-times.png)
```
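The worst-case figure in the hunk above can be reproduced directly; the rounding of `log(10k)/log(6) ≈ 5.14` to 5 hops follows the quoted text:

```python
import math

# Reproduce the worst-case latency claim from the text above:
# a d-regular gossip mesh with N nodes and degree D has roughly
# log(N)/log(D) hops, each paying one proof verification.
N = 10_000        # total nodes
D = 6             # gossipsub mesh degree
VERIFY_S = 0.012  # proof verification time per hop

hops = round(math.log(N) / math.log(D))  # ≈ 5.14, rounded to 5 as in the text
added_latency = hops * VERIFY_S          # 0.06 s worst case

print(hops, added_latency)
```

This matches the quoted `0.012*5 = 0.06 seconds`; note the per-hop cost only applies on the forwarding path, so the average message sees less than the worst case.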

```diff
@@ -52,15 +48,13 @@ In the following simulation, we can see `100` nwaku interconnected nodes, where
 ## RLN tree sync
 
 Using RLN implies that waku should now piggyback on a blockchain (the case study uses Ethereum Sepolia) and has to stay up to date with the latest events emitted by the rln smart contract. These events are used to locally construct a tree that contains all members allowed to create valid proofs to send messages. Some numbers:
-
 - A tree with 10k members takes `2Mbytes` of space. Negligible.
 - A tree with 10k members takes `<4` minutes to synchronize. Assumable since it's done just once.
 - With a block range of 5000 blocks for each request, we would need `520 requests` to synchronize 1 year of historical data from the tree. Assumable since most of the free endpoints out there allow 100k/day.
 
 ## Performance relay vs. rln-relay
 
 Same simulation with 100 nodes was executed `with rln` and `without rln`:
-
 - Memory consumption is almost identical
 
 **with rln**
```
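The `520 requests` figure for a year of history in the hunk above can be sanity-checked; the 5000-block request range is from the text, while the ~12-second Sepolia block time is our assumption:

```python
import math

# Sanity-check the "~520 requests to sync one year" claim from the text above.
# The 5000-block range per request is quoted; the 12 s average block time
# for Ethereum Sepolia is our assumption.
BLOCK_TIME_S = 12
RANGE_BLOCKS = 5_000
SECONDS_PER_YEAR = 365 * 24 * 3600

blocks_per_year = SECONDS_PER_YEAR // BLOCK_TIME_S   # ~2.6M blocks
requests = math.ceil(blocks_per_year / RANGE_BLOCKS)

print(blocks_per_year, requests)
```

This gives 526 requests, in the same ballpark as the quoted 520; either way it sits comfortably under a 100k-requests-per-day free-tier limit.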
