**`listener/docs/library_notifier.md`** (+5 −5)
@@ -6,7 +6,7 @@ Integrate event consuming for specific logs and filters for zama components, and
## Logic:

This component consumes blocks, transactions, and receipts from the different queues declared on RabbitMQ, checks its table for relevant filters or ABIs to watch (or even "from"/"to" addresses when specific transactions must be watched), registers matching logs in a table, and forwards them to the internal logic of the components that need logs.
Basically, the library is a receipt parser.
You register a watcher, and each receipt is matched against the current watchers to decide whether an event needs to be processed.
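As a sketch of this matching step (all names here are hypothetical, assuming Ethereum-style logs where `topics[0]` is the event signature hash from the ABI):

```python
# Hypothetical sketch: match a receipt's logs against a registered watcher.
# A watcher filters on contract addresses and/or event topics derived from
# its ABI; empty sets mean "match everything".
from dataclasses import dataclass, field

@dataclass
class Watcher:
    uuid: str
    chain_id: int
    addresses: set = field(default_factory=set)  # contract addresses to watch
    topics: set = field(default_factory=set)     # event signature hashes (topic0)

def match_logs(watcher, receipt):
    """Return the receipt logs this watcher is interested in."""
    matched = []
    for log in receipt["logs"]:
        addr_ok = not watcher.addresses or log["address"] in watcher.addresses
        topic_ok = not watcher.topics or (
            log["topics"] and log["topics"][0] in watcher.topics
        )
        if addr_ok and topic_ok:
            matched.append(log)
    return matched

receipt = {
    "transactionHash": "0xabc",
    "logs": [
        {"address": "0x01", "topics": ["0xddf2"], "data": "0x"},
        {"address": "0x02", "topics": ["0xeeee"], "data": "0x"},
    ],
}
w = Watcher(uuid="w1", chain_id=1, addresses={"0x01"}, topics={"0xddf2"})
matched = match_logs(w, receipt)
```

A watcher with empty filters would match every log, which is how a "consume everything" mode could be expressed.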
@@ -37,7 +37,7 @@ For inspiration regarding this library, There is an existing implementation for
#### Minimal features:

- Persist block height and be resilient to failure modes.
- Multichain by design (consuming multiple queues (blocks, transactions with receipts -> e.g. logs) for each network).
- Should consume all events even if they are not used (or RabbitMQ memory will grow).
- Declare notifiers for dynamic event ABIs.
- Store log watcher types in a Postgres database.
@@ -49,8 +49,8 @@ For inspiration regarding this library, There is an existing implementation for
- Declare multiple watchers with a number of block confirmations when a confirmation count is required (an RPC URL could be needed for this), e.g. finality, safe, or n confirmation blocks, based on events.
- Ability to be aware of newly available chains from RabbitMQ.
- Different types of watchers (logs, tx).
- OPTIONAL: Cancel reorged events.
- OPTIONAL: Replay past blocks (should not be needed with RabbitMQ, since it is queuing messages).
- Check together whether duplicate logs are a problem and how to manage them in the zama internals (could be handled optionally). To ensure unicity of logs and handle deduplication, we can, if needed, apply a semantic hash over each log.
- Metrics
- Alerting.
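One possible shape for such a semantic hash (illustrative only; the field choice is an assumption — `blockHash` plus `logIndex` already identify a log on the canonical chain, and including address/topics/data also guards against malformed duplicates):

```python
# Hypothetical sketch: a "semantic hash" over a log's identifying fields,
# usable as a unique key to deduplicate logs across queue redeliveries.
import hashlib
import json

def log_semantic_hash(chain_id, log):
    # Serialise the identifying fields deterministically, then hash them.
    payload = json.dumps(
        {
            "chainId": chain_id,
            "blockHash": log["blockHash"],
            "logIndex": log["logIndex"],
            "address": log["address"],
            "topics": log["topics"],
            "data": log["data"],
        },
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

log = {"blockHash": "0xb1", "logIndex": 0, "address": "0x01",
       "topics": ["0xddf2"], "data": "0x"}
h1 = log_semantic_hash(1, log)
h2 = log_semantic_hash(1, dict(log))  # a redelivered duplicate hashes the same
```

Stored with a unique constraint on this hash, duplicate inserts become no-ops.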
@@ -64,7 +64,7 @@ For inspiration regarding this library, There is an existing implementation for
- table 1: watcher
  - uuid, chainId, number of confirmation blocks?, ABI, watcher type (tx, contract)
- table 2: logs
  - uuid, watcher_uuid, block_number, released (TRUE, FALSE), log (deserialised or not), UNCLE? (not mandatory if leveraging block confirmations)
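Purely as an illustration of this schema (SQLite in-memory to keep the sketch self-contained; the document targets Postgres, where `uuid` and `boolean` types would be used, and every column name here is an assumption):

```python
# Hypothetical sketch of the two tables and a typical "unreleased logs" query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE watcher (
    uuid TEXT PRIMARY KEY,
    chain_id INTEGER NOT NULL,
    confirmation_blocks INTEGER,          -- optional confirmation count
    abi TEXT,                             -- event ABI to match, if any
    watcher_type TEXT NOT NULL            -- 'tx' or 'contract'
);
CREATE TABLE logs (
    uuid TEXT PRIMARY KEY,
    watcher_uuid TEXT NOT NULL REFERENCES watcher(uuid),
    block_number INTEGER NOT NULL,
    released INTEGER NOT NULL DEFAULT 0,  -- TRUE/FALSE
    log TEXT NOT NULL                     -- deserialised or raw payload
);
""")
conn.execute("INSERT INTO watcher VALUES ('w1', 1, 12, NULL, 'contract')")
conn.execute("INSERT INTO logs VALUES ('l1', 'w1', 100, 0, '{}')")
rows = conn.execute(
    "SELECT block_number FROM logs WHERE watcher_uuid = 'w1' AND released = 0"
).fetchall()
```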
**`listener/docs/listener_core.md`** (+13 −13)
@@ -14,18 +14,18 @@
- Counter-intuitively, it is always the fresher information that will bring us the truth, especially regarding the past.
- Transaction receipts contain all the logs.
- ReceiptRoot and block hash calculations ensure there are no missing logs for a given block.
- Zero websockets: they are not resilient.
## Goal
The goal of this core algorithm is to fetch blocks, transactions, and receipts by HTTP polling, handle reorgs properly by checking hash and parent hash, fetch new information when needed to stay consistent with and aware of the canonical chain, and broadcast blocks and transactions to the message broker so that the library is aware of new events.
## Logic / Algorithms
### Algorithm v1: Sequential poller and reorg checker
This is a description of a basic algorithm, which could be sufficient for chains that produce blocks more slowly than a single HTTP call takes.
This algorithm is sequential, and is referenced here only for background.
If you need access to an existing implementation of this algorithm, ask and I will share it with you.
@@ -35,8 +35,8 @@ This algorithm leverages mostly on database, to perform checks, states updates,
2. We register the block, the transactions of this block, and the associated receipts.
    1. The receipt contains all the logs.
    2. We broadcast the block and the transactions with receipts to given queues keyed by chainId, for near-real-time performance, to be consumed and filtered by the library notifier over ABI filters and contract addresses.
3. We compare the current block's parent hash with the previous block's hash to detect whether a reorg occurred.
    1. If it matches, we go back to the beginning of the algorithm.
    2. If it doesn't match, a reorg is detected.
        1. We fetch all the previous blocks one by one by hash, and broadcast events in the same fashion as before (BACKTRACKING).
        2. We mark the replaced blocks with UNCLE status.
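The check-and-backtrack logic of v1 can be sketched as follows (hypothetical helper names; a real implementation would persist blocks in the database rather than pass lists around):

```python
# Hypothetical sketch of the v1 reorg check: compare the new block's parent
# hash with the hash of the last stored block; on mismatch, walk back via
# parentHash until the canonical chain is rejoined, then mark replaced
# blocks as UNCLE.

def detect_reorg(stored_blocks, new_block):
    """stored_blocks: list of {'hash', 'parentHash'} dicts, oldest first."""
    return bool(stored_blocks) and new_block["parentHash"] != stored_blocks[-1]["hash"]

def backtrack(stored_blocks, fetch_by_hash, new_block):
    """Return (canonical replacement chain, stored blocks to mark UNCLE)."""
    chain = [new_block]
    known = {b["hash"]: i for i, b in enumerate(stored_blocks)}
    while chain[-1]["parentHash"] not in known:
        chain.append(fetch_by_hash(chain[-1]["parentHash"]))  # BACKTRACKING
    fork_index = known[chain[-1]["parentHash"]]
    uncles = stored_blocks[fork_index + 1:]
    return list(reversed(chain)), uncles

# Toy chain: a-b, then a reorg replaces b with b' (and c' extends b').
a = {"hash": "a", "parentHash": "0"}
b = {"hash": "b", "parentHash": "a"}
b2 = {"hash": "b'", "parentHash": "a"}
c2 = {"hash": "c'", "parentHash": "b'"}
store = {"b'": b2}  # stand-in for fetching a block by hash over RPC
chain, uncles = backtrack([a, b], store.__getitem__, c2)
```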
@@ -45,7 +45,7 @@ This algorithm leverages mostly on database, to perform checks, states updates,
### Algorithm v2: Cursor Algorithm
The major flaw of the v1 iterative poller algorithm is block production time on faster chains: Arbitrum, Monad, or later even Solana can produce blocks faster than a single HTTP call. Adding database operations and network calls (when leveraging RabbitMQ to trigger block fetch and polling operations), the cumulated operations could exceed 100/200 ms on average. It does not keep up with chains that have a smaller block time.
Also, if a full chain indexer is needed later, it cannot be built on the first algorithm.
Here is the proposed algorithm to address this flaw.
@@ -55,10 +55,10 @@ Here is the proposed algorithm to address this flaw.
Resolving the HTTP latency, and ensuring no event is missed.
    2. or a range given by an order to fetch the next block.
2. We spawn parallel tasks to fetch blocks (HTTP polling) and register them in an in-memory data structure (slots for new blocks). We also fetch the receipts for those blocks (a strategy pattern could be required for different chain implementations: `eth_getBlockReceipts`, or `eth_getTransactionReceipt` for each transaction).
3. Optional: we recompute the block hash. The rationale: calculating the receiptRoot, and then the block hash from the receipt root and all the other header fields, ensures there is no inconsistency in the receipts, and hence in the logs they contain.
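Task one's slot-filling could be sketched like this (hypothetical names; `fetch_block` stands in for the JSON-RPC block and receipt calls):

```python
# Hypothetical sketch of task one: parallel block fetches writing into
# pre-allocated slots, so the cursor task can consume them strictly in
# order even though HTTP responses arrive out of order.
from concurrent.futures import ThreadPoolExecutor

def fetch_block(number):
    # Stand-in for eth_getBlockByNumber plus receipt fetching over HTTP.
    return {"number": number, "receipts": []}

def fill_slots(start, end):
    slots = {n: None for n in range(start, end + 1)}  # one slot per block
    with ThreadPoolExecutor(max_workers=8) as pool:
        for block in pool.map(fetch_block, slots):
            slots[block["number"]] = block
    return slots

slots = fill_slots(100, 104)
```

The cursor task then walks the slots in ascending block number, waiting on any slot that is still empty.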
#### task two: cursor, reorg check and event broadcaster
@@ -78,14 +78,14 @@ Resolving the http latency, and ensure no event is missed.
- Event-driven system to react to multiple events for the algorithm described above.
- Strategy pattern (handling chains that don't support the `eth_getBlockReceipts` method, and Solana later).
- Algorithm v2 implementation with `eth_getBlockReceipts` first.
- Tables to store minimal metadata (blocks, transactions, and receipts) with a status (CANONICAL, UNCLE).
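The receipt-fetching strategy pattern could be sketched as (hypothetical class names; `fake_rpc` stands in for a real JSON-RPC client):

```python
# Hypothetical sketch of the strategy pattern: one receipt-fetching strategy
# per chain capability — batch eth_getBlockReceipts where supported, else a
# per-transaction eth_getTransactionReceipt fallback — chosen at runtime.
from abc import ABC, abstractmethod

class ReceiptStrategy(ABC):
    @abstractmethod
    def fetch(self, rpc, block):
        ...

class BlockReceipts(ReceiptStrategy):
    def fetch(self, rpc, block):
        # One call for the whole block.
        return rpc("eth_getBlockReceipts", [block["hash"]])

class PerTxReceipts(ReceiptStrategy):
    def fetch(self, rpc, block):
        # Fallback: one call per transaction.
        return [rpc("eth_getTransactionReceipt", [tx])[0]
                for tx in block["transactions"]]

def fake_rpc(method, params):
    if method == "eth_getBlockReceipts":
        return [{"tx": "t1"}, {"tx": "t2"}]
    return [{"tx": params[0]}]

block = {"hash": "0xb1", "transactions": ["t1", "t2"]}
```

Both strategies return the same receipt list; only the number of RPC round-trips differs.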