feat: add reliability manager #6
base: master
Changes from 2 commits
@@ -0,0 +1,339 @@
import std/[times, locks, tables, sets, sequtils]
import chronos, results, chronicles
import ./[message, protobuf, reliability_utils, rolling_bloom_filter]
proc newReliabilityManager*(
    channelId: SdsChannelID, config: ReliabilityConfig = defaultConfig()
): Result[ReliabilityManager, ReliabilityError] =
  ## Creates a new ReliabilityManager with the specified channel ID and configuration.
  ##
  ## Parameters:
  ##   - channelId: A unique identifier for the communication channel.
  ##   - config: Configuration options for the ReliabilityManager. If not provided, the default configuration is used.
  ##
  ## Returns:
  ##   A Result containing either a new ReliabilityManager instance or an error.
  if channelId.len == 0:
    return err(ReliabilityError.reInvalidArgument)
  try:
    let bloomFilter =
      newRollingBloomFilter(config.bloomFilterCapacity, config.bloomFilterErrorRate)

    let rm = ReliabilityManager(
      lamportTimestamp: 0,
      messageHistory: @[],
      bloomFilter: bloomFilter,
      outgoingBuffer: @[],
      incomingBuffer: @[],
      channelId: channelId,
      config: config,
    )
    initLock(rm.lock)
    return ok(rm)
  except Exception:
    error "Failed to create ReliabilityManager", msg = getCurrentExceptionMsg()
    return err(ReliabilityError.reOutOfMemory)
proc reviewAckStatus(rm: ReliabilityManager, msg: SdsMessage) {.gcsafe.} =
  var i = 0
  while i < rm.outgoingBuffer.len:
    var acknowledged = false
    let outMsg = rm.outgoingBuffer[i]
Contributor

To me this block of code will read a bit easier if you introduce a … Assuming you parse the rollingBloomFilter once before entering the while loop:

Suggested change
etc.
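One possible shape for this refactor (a sketch only: it assumes std/options, the RollingBloomFilter field name used in the suggested change in this thread, and a `contains` helper on the filter):

```nim
import std/options

# Sketch: deserialize the sender's bloom filter once, before entering the
# outgoingBuffer while loop, instead of once per buffered message.
let senderRbf =
  if msg.bloomFilter.len > 0:
    let bf = deserializeBloomFilter(msg.bloomFilter).valueOr:
      error "Failed to deserialize bloom filter", error = $error
      return
    some(RollingBloomFilter(filter: bf))
  else:
    none(RollingBloomFilter)

var i = 0
while i < rm.outgoingBuffer.len:
  let outMsg = rm.outgoingBuffer[i]
  var acknowledged = msg.causalHistory.anyIt(it == outMsg.message.messageId)
  if not acknowledged and senderRbf.isSome():
    # `contains` is an assumed membership check on the rolling filter.
    acknowledged = senderRbf.get().contains(outMsg.message.messageId)
  # ... handle acknowledged/unacknowledged as before ...
```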
    # Check if message is in causal history
    for msgID in msg.causalHistory:
      if outMsg.message.messageId == msgID:
        acknowledged = true
        break

    # Check bloom filter if not already acknowledged
    if not acknowledged and msg.bloomFilter.len > 0:
      let bfResult = deserializeBloomFilter(msg.bloomFilter)
      if bfResult.isOk():
Suggested change
-      let bfResult = deserializeBloomFilter(msg.bloomFilter)
-      if bfResult.isOk():
+      let bloomFilter = deserializeBloomFilter(msg.bloomFilter).valueOr:
+        error "Failed to deserialize bloom filter", error = $error
+      let rbf = RollingBloomFilter(
+        filter: bloomFilter,
+        ...
Outdated
We should avoid creating instances of RollingBloomFilter outside the rolling_bloom_filter.nim module.
Then, we need a special init proc, which will be in charge of creating such objects from a seq[byte] bloom filter.
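A minimal sketch of such an init proc, assuming it lives in rolling_bloom_filter.nim and that deserializeBloomFilter returns a compatible Result (names and error type are illustrative):

```nim
# In rolling_bloom_filter.nim: construction from serialized bytes stays
# inside this module, so callers never build RollingBloomFilter directly.
proc init*(
    T: type RollingBloomFilter, serialized: seq[byte]
): Result[RollingBloomFilter, string] =
  let bf = ?deserializeBloomFilter(serialized)
  ok(RollingBloomFilter(filter: bf))
```

The `?` operator propagates the deserialization error to the caller, keeping the error-handling in one place.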
Outdated
Why parse the RollingBloomFilter for each message being read in the outgoingBuffer loop? I think you can parse and create the bloom filter once before entering the while loop?
Outdated
I don't think it's good practice to remove items from the buffer you are iterating over. I would suggest building a list of items that should be cleaned and then deleting them all at once after exiting the iteration loop. @Ivansete-status any suggestions here on general best practice in Nim?
Yes, I think the approach you suggest is better, @jm-clius.
I'd use something like: let newFilteredSeq = sequence.filterIt( ...)
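Concretely, that could look like the following (a sketch; isAcknowledged is a hypothetical helper standing in for the causal-history and bloom-filter checks above):

```nim
import std/[sequtils, sets]

# Collect the IDs to drop while iterating, then filter in one pass,
# instead of deleting from the buffer being iterated over.
var ackedIds = initHashSet[SdsMessageID]()
for outMsg in rm.outgoingBuffer:
  if isAcknowledged(outMsg, msg):  # hypothetical helper
    ackedIds.incl(outMsg.message.messageId)

rm.outgoingBuffer =
  rm.outgoingBuffer.filterIt(it.message.messageId notin ackedIds)
```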
Outdated
Note that I didn't initially consider the bloomFilter a good source for checking the dependencies of received messages as we already have a more reliable source (local history itself). Wouldn't it be possible to have a dependency check callback, perhaps implemented by the application, that takes a list of dependencies as argument and return the sublist of resolved dependencies after checking local history?
As a first step, we can check the rm.messageHistory and only call the application-level callback if we require longer-term history. The application can also at that point try to retrieve still unresolved dependencies from Store nodes.
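That two-tier lookup could be expressed as a callback type plus a resolution step (purely illustrative: it assumes messageHistory holds message IDs, which the diff does not confirm):

```nim
import std/sequtils

# Hypothetical application-level hook: given unresolved dependency IDs,
# return the sublist the application can confirm from longer-term history
# (e.g. after querying Store nodes).
type DependencyResolver* =
  proc(deps: seq[SdsMessageID]): seq[SdsMessageID] {.gcsafe.}

proc resolveDependencies(
    rm: ReliabilityManager,
    deps: seq[SdsMessageID],
    resolver: DependencyResolver,
): seq[SdsMessageID] =
  # First tier: anything already in local history counts as resolved.
  result = deps.filterIt(it in rm.messageHistory)
  # Second tier: ask the application about the rest.
  let remaining = deps.filterIt(it notin result)
  result.add(resolver(remaining))
```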
Struggling a bit to see how the msgId could already be processed at this point. :)
Outdated
It seems strange that you have to iterate through the incomingBuffer again just to find this message, that you previously iterated over. Why not simply add the full message to the temporary readyToProcess buffer?
Either way, it might be useful to consider using a template like anyIt when iterating through a container to find a specific item: https://nim-lang.org/docs/sequtils.html#anyIt.t%2Cuntyped%2Cuntyped
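For instance, the explicit search loop over causalHistory could collapse to a single expression:

```nim
import std/sequtils

# anyIt returns true if the predicate holds for any element.
let acknowledged = msg.causalHistory.anyIt(it == outMsg.message.messageId)
```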
Outdated
Presumably this is a duplicate check? I wouldn't rely on bloomFilter unnecessarily here, as false positives could mean that we accidentally believe that we've seen this message before. I'd only use messageHistory here.
Outdated
See point elsewhere. I don't think we should use the bloomFilter (where false positives are possible) if we have access to reliable information in local messageHistory.
Outdated
I don't think this step is necessary if we're going to add the message to the incomingBuffer in any case. Presumably, you'd like to avoid the onMissingDependencies call? What is expected to happen within the onMissingDependencies call?
Thinking that if we simplify the incomingBuffer to be a map of messageId to a seq[SdsMessageID] of missing dependencies (we can cache the full message contents elsewhere), we could directly mark these dependencies as met by removing them from the missingDependencies seq of each message in the incoming buffer. Processing the incoming buffer then becomes simply finding and processing the messages for which we now have an empty missingDependencies seq. WDYT?
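One way that restructuring could look (a sketch; the full message contents would be cached in a separate table, not shown):

```nim
import std/[tables, sequtils]

# Proposed structure: message ID -> still-missing dependency IDs.
type IncomingBuffer = Table[SdsMessageID, seq[SdsMessageID]]

proc markDependencyMet(
    buf: var IncomingBuffer, depId: SdsMessageID
): seq[SdsMessageID] =
  ## Removes depId from every entry and returns the IDs of messages
  ## that now have no missing dependencies (i.e. are ready to process).
  for msgId, deps in buf.mpairs:
    deps.keepItIf(it != depId)
    if deps.len == 0:
      result.add(msgId)
  for msgId in result:
    buf.del(msgId)
```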
I would recommend creating a new reliability_manager.nim module, and then moving all the ReliabilityManager funcs/procs defined in reliability_utils.nim into that new reliability_manager.nim.

Also, for ref object types (heap allocated and GC'ed), the idiomatic way to create them is through a new proc. I think it is fine returning a Result, but better to rename it to new. See the following as an example: https://github.com/waku-org/nwaku/blob/addce8dc3338ea1ca7d6ce1dd525f6f50ccbf467/waku/node/delivery_monitor/recv_monitor.nim#L153-L157
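Following that example, the constructor signature could become (a sketch; the body is the same construction logic as newReliabilityManager, elided here):

```nim
proc new*(
    T: type ReliabilityManager,
    channelId: SdsChannelID,
    config: ReliabilityConfig = defaultConfig(),
): Result[ReliabilityManager, ReliabilityError] =
  if channelId.len == 0:
    return err(ReliabilityError.reInvalidArgument)
  # ... construct, initLock, and return ok(rm) as before ...
```

Call sites then read `ReliabilityManager.new(channelId)`, which matches the usual Nim idiom for ref object construction.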
ref objecttypes (heap allocated and GC'ed), the idiomatic way to create them is through anewproc. I think it is fine returning aResultbut better rename it tonew. See the following as an example: https://github.com/waku-org/nwaku/blob/addce8dc3338ea1ca7d6ce1dd525f6f50ccbf467/waku/node/delivery_monitor/recv_monitor.nim#L153-L157