fix(abcipp): reset activeNext when active pool is drained#483
Conversation
Dependency Review
✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found. Scanned files: none.
Note: Reviews paused. It looks like this branch is under active development, so to avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
PriorityMempool refactor: the constructor now requires an AccountKeeper; adds RemovalReason and RemoveWithReason; introduces per-sender senderState, active/queued routing and promotion, an event dispatch loop, a background cleaning worker, an expanded query API and invariants, tiered ordering, many tests, and documentation.
Sequence Diagram(s)
sequenceDiagram
participant Client
participant Mempool as PriorityMempool
participant AK as AccountKeeper
participant App as BaseApp
participant Dispatcher as EventDispatcher
Client->>Mempool: Insert(tx)
Mempool->>AK: GetSequence(sender)
AK-->>Mempool: onChainSeq
Mempool->>Mempool: route tx -> active or queued (evict/requeue as needed)
alt contiguous promotions
Mempool->>Mempool: collectPromotableLocked -> promote to active
end
Mempool->>Dispatcher: enqueue EventTxInserted / EventTxRemoved
Dispatcher->>Client: deliver AppMempoolEvent
App->>Mempool: Prepare/Process proposal -> may call RemoveWithReason(tx, AnteRejectedInPrepare)
note right of Mempool: Cleaning worker uses App.GetContextForSimulate + AK to detect stale txs and calls RemoveWithReason
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #483 +/- ##
==========================================
+ Coverage 37.07% 37.72% +0.65%
==========================================
Files 311 320 +9
Lines 29362 29873 +511
==========================================
+ Hits 10885 11269 +384
- Misses 16742 16817 +75
- Partials 1735 1787 +52
Actionable comments posted: 8
🧹 Nitpick comments (5)
abcipp/docs/spec.md (1)
23-30: Minor grammar: consider hyphenating compound modifiers. For consistency with technical writing conventions:
- Line 23: "Per sender state" → "Per-sender state"
- Line 52 (not shown in changed lines but related): "active only senders" → "active-only senders"
📝 Suggested grammar fixes
- * Per sender state is unified in `senderState` structs (held in `senders map[string]*senderState`), with each containing:
+ * Per-sender state is unified in `senderState` structs (held in `senders map[string]*senderState`), with each containing:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@abcipp/docs/spec.md` around lines 23 - 30, change the phrasing to hyphenate compound modifiers: rename "Per sender state" to "Per-sender state" wherever the documentation describes the senderState structs (including the line referencing senders map[string]*senderState and fields active, queued, onChainSeq, activeMin/activeMax, queuedMin/queuedMax), and change "active only senders" to "active-only senders" (and any similar occurrences), including descriptions that reference nextExpectedNonce and cursor derivation; keep all symbol names (senderState, nextExpectedNonce, active, queued, onChainSeq, activeMin/activeMax, queuedMin/queuedMax) unchanged.
abcipp/mempool_remove_test.go (1)
61-155: Add one queued-path regression test for removal bookkeeping. Current cases focus on active paths. Please add a queued-removal test that asserts Contains == false, total count decrement, and invariants after removing/cleaning a queued nonce.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@abcipp/mempool_remove_test.go` around lines 61 - 155, add a new subtest to TestRemoveActiveStaleLockedBranches that exercises the queued-removal path: create a mempool and sender, construct a txEntry placed on the queued tier (use mp.selectTier to ensure it's queued), add it via mp.addEntryLocked (or the public Insert path), then call the queued-removal helper (the queued-path counterpart to removeActiveStaleLocked, e.g. removeQueuedStaleLocked or the mempool method that removes queued nonces) with an onChainSeq that removes that nonce; assert the sender's Contains flag for that nonce is false, mp.CountTx() decreases by one, and sender bookkeeping invariants (queued/active min/max and any queue length counters) are updated accordingly. Reference mp.addEntryLocked, mp.selectTier, txEntry, and the queued-removal method when adding the test so reviewers can locate the relevant code paths.
abcipp/mempool_test_utils_test.go (1)
127-148: Make `mockAccountKeeper` map access thread-safe. This helper can be called from goroutines in cleanup/event-driven tests; unsynchronized map reads/writes can race under -race.
♻️ Proposed refactor
 import (
 	"context"
 	"fmt"
+	"sync"
 	"testing"
 	"time"
@@
 type mockAccountKeeper struct {
+	mu        sync.RWMutex
 	sequences map[string]uint64
 }
@@
 func (m *mockAccountKeeper) SetSequence(addr sdk.AccAddress, seq uint64) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
 	m.sequences[string(addr.Bytes())] = seq
 }

 func (m *mockAccountKeeper) GetSequence(_ context.Context, addr sdk.AccAddress) (uint64, error) {
+	m.mu.RLock()
+	defer m.mu.RUnlock()
 	key := string(addr.Bytes())
 	seq, ok := m.sequences[key]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@abcipp/mempool_test_utils_test.go` around lines 127 - 148, The mockAccountKeeper's sequences map is not protected against concurrent access; make map access thread-safe by adding a sync.RWMutex field to mockAccountKeeper (update the struct returned by newMockAccountKeeper), use m.mu.Lock()/Unlock() in SetSequence and m.mu.RLock()/RUnlock() in GetSequence, and ensure GetSequence still returns the same errors/values; also update newMockAccountKeeper to initialize the mutex (no other logic changes).abcipp/mempool_event.go (1)
81-90: Avoid O(n²) queue draining in `eventDispatchLoop`. The current front-pop pattern repeatedly shifts the slice. Under load, this adds avoidable CPU and GC pressure.
♻️ Proposed refactor (batch drain)
 for {
-	p.eventMu.Lock()
-	if len(p.eventQueue) == 0 {
-		p.eventMu.Unlock()
-		break
-	}
-	ev := p.eventQueue[0]
-	p.eventQueue[0] = cmtmempool.AppMempoolEvent{}
-	p.eventQueue = p.eventQueue[1:]
-	p.eventMu.Unlock()
-
-	select {
-	case *chPtr <- ev:
-	case <-p.eventStop:
-		return
-	}
+	p.eventMu.Lock()
+	if len(p.eventQueue) == 0 {
+		p.eventMu.Unlock()
+		break
+	}
+	batch := p.eventQueue
+	p.eventQueue = nil
+	p.eventMu.Unlock()
+
+	for _, ev := range batch {
+		select {
+		case *chPtr <- ev:
+		case <-p.eventStop:
+			return
+		}
+	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@abcipp/mempool_event.go` around lines 81 - 90, the loop in eventDispatchLoop is doing O(n²) work by repeatedly shifting p.eventQueue (protected by p.eventMu) when popping the front; instead, batch-drain the queue under the lock: take a copy/reference to the current slice (e.g. queued := p.eventQueue), set p.eventQueue to nil or an empty slice while holding p.eventMu, then unlock and iterate over queued processing each ev. Make this change inside eventDispatchLoop so you no longer mutate the slice by removing index 0 repeatedly and you still honor synchronization with p.eventMu.
abcipp/mempool_tier.go (1)
85-93: Prevent duplicate tier names from collapsing distribution counters. If two tiers resolve to the same Name, initTierDistribution() merges them into one key, making per-tier metrics ambiguous.
♻️ Proposed fix
 func buildTierMatchers(cfg PriorityMempoolConfig) []tierMatcher {
 	matchers := make([]tierMatcher, 0, len(cfg.Tiers)+1)
+	used := make(map[string]int, len(cfg.Tiers))
 	for idx, tier := range cfg.Tiers {
 		if tier.Matcher == nil {
 			continue
 		}
 		name := strings.TrimSpace(tier.Name)
 		if name == "" {
 			name = fmt.Sprintf("tier-%d", idx)
 		}
+		if n, ok := used[name]; ok {
+			n++
+			used[name] = n
+			name = fmt.Sprintf("%s-%d", name, n)
+		} else {
+			used[name] = 0
+		}
 		matchers = append(matchers, tierMatcher{
 			Name:    name,
 			Matcher: tier.Matcher,
 		})
 	}
Also applies to: 105-112
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@abcipp/mempool_tier.go` around lines 85 - 93, The code that builds matchers (creating tierMatcher entries with Name := strings.TrimSpace(tier.Name) and appending to matchers) can produce duplicate Name keys which collapse distribution counters; update the logic in initTierDistribution (and the similar block at the other spot) to ensure each Name is unique before appending: track seen names (map[string]int) and if a Name is already present, append a deterministic suffix (e.g., "-{idx}" or a counter) to make it unique, then use that unique value for tierMatcher.Name while preserving tier.Matcher unchanged. Ensure you update both the shown block and the corresponding 105-112 block that performs the same operation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@abcipp/mempool_cleanup.go`:
- Line 16: The cleanup logic should use the mempool's own AccountKeeper instead
of a caller-supplied parameter: remove the ak parameter usage in
PriorityMempool.StartCleaningWorker and replace all references to the passed-in
ak with the mempool field p.ak (i.e., use PriorityMempool.ak) so cleanup
decisions are consistent with the mempool state; do the same fix for the other
functions in this file that accept or reference an external ak (the occurrences
noted around the other affected functions) to ensure all stale-removal logic
consistently uses p.ak.
- Around line 113-151: The AnteHandler cache state must be isolated per sender
so validation writes don't leak between map-iteration orders: before iterating
snap.entries for a given snap (from p.senders/snapshots), create a fresh
per-sender cache root from sdkCtx (e.g., senderCtx, senderWrite :=
sdkCtx.CacheContext()), then for each entry derive an inner cache via
senderCtx.WithTxBytes(...).WithIsReCheckTx(true).CacheContext(); call the inner
cache's write() on success so it only updates senderCtx, but do NOT call
senderWrite() to apply to the shared sdkCtx; repeat this per snap so each
sender's AnteHandler validation is deterministic and isolated.
In `@abcipp/mempool_event.go`:
- Around line 19-25: Wrap the channel-close check in StopEventDispatch with the
existing mutex to avoid a double-close race: acquire p.eventMu.Lock() before the
select (and defer/unlock after) so only one goroutine can test/close
p.eventStop; mirror the protection used in enqueueEvent which uses eventMu to
guard eventStop accesses. Ensure the select still checks <-p.eventStop to return
if already closed and only the locked path performs close(p.eventStop).
In `@abcipp/mempool_insert.go`:
- Around line 112-117: The active-replacement branch that checks existing,
hasExisting := p.entries[entry.key] currently uses entry.priority <
existing.priority and should reject equal-priority replacements; change that
comparison to entry.priority <= existing.priority so equal priority returns
without replacing (keep p.mtx.Unlock() and return nil). Apply the same
strictness to the queued-replacement check elsewhere that currently allows
equal-priority swaps (the comparison around the queued-entry priority logic) so
queued replacements also require strictly higher priority. Ensure you update
both places referencing p.entries, entry.key, and existing.priority so
equal-priority replacements are consistently rejected.
- Around line 23-30: The code computes bz from sdkCtx.TxBytes() (and via
p.txEncoder(tx)) but never attaches those bytes back to the context, causing
later tiering/capacity checks to use the original empty sdkCtx; after computing
bz (the variable set by p.txEncoder), call sdkCtx = sdkCtx.WithTxBytes(bz) (or
create a ctxWithBytes and use it) and use that context for all subsequent tier
selection and capacity checks so the same tx bytes are used consistently
(mirroring the promoted-entry usage that calls WithTxBytes(pe.bytes)); update
calls that perform tier selection/capacity checks to use the new context rather
than the raw sdkCtx.
In `@abcipp/mempool_remove.go`:
- Around line 66-76: The loop in removeQueuedStaleLocked scans from ss.queuedMin
to end (nonce-distance complexity) and can be very slow for sparse nonces;
instead iterate only over actual queued keys: scan ss.queued map, collect nonces
where nonce >= ss.queuedMin && nonce <= end into a slice, then delete those
entries, decrement p.queuedCount, call ss.setQueuedRangeOnRemoveLocked(nonce)
and append to removed for each collected nonce; this changes
removeQueuedStaleLocked to use O(`#queued` entries) work rather than O(nonce
range).
In `@abcipp/mempool_sender_state.go`:
- Around line 109-140: The current setQueuedRangeOnRemoveLocked scans
nonce-by-nonce across uint64 gaps which can be extremely slow; instead, when
removedNonce equals s.queuedMin or s.queuedMax, iterate only over the existing
keys of s.queued (range over the map) to compute the new min or max (skip the
removedNonce), and set s.queuedMin/s.queuedMax accordingly (or call
s.resetQueuedRangeLocked() if the map becomes empty). Update the logic in
setQueuedRangeOnRemoveLocked to avoid numeric loops and use map-key-based
min/max computation on s.queued to bound work by the number of queued entries
rather than nonce distance.
In `@abcipp/mempool.go`:
- Around line 166-167: The comment for PriorityMempool.PromoteQueued wrongly
states it refreshes on-chain sequence for "all tracked senders" while the
implementation only fetches sequences for queuedSenders; either update the
comment to accurately say it refreshes sequences for queuedSenders only or
extend PromoteQueued to also include activeOnlySenders in the refresh loop
(iterate over activeOnlySenders and fetch/update their on-chain sequence the
same way you do for queuedSenders), keeping the same sequence-fetching logic and
error handling so both sets are refreshed consistently.
ℹ️ Review info
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
⛔ Files ignored due to path filters (1)
x/move/keeper/contracts/Move.toml is excluded by !**/*.toml
📒 Files selected for processing (32)
abcipp/README.md, abcipp/checktx.go, abcipp/docs/invariant.md, abcipp/docs/spec.md, abcipp/interface.go, abcipp/mempool.go, abcipp/mempool_cleanup.go, abcipp/mempool_cleanup_test.go, abcipp/mempool_event.go, abcipp/mempool_event_test.go, abcipp/mempool_insert.go, abcipp/mempool_invariant.go, abcipp/mempool_query.go, abcipp/mempool_query_test.go, abcipp/mempool_remove.go, abcipp/mempool_remove_test.go, abcipp/mempool_sender_state.go, abcipp/mempool_sender_state_test.go, abcipp/mempool_test.go, abcipp/mempool_test_utils_test.go, abcipp/mempool_tier.go, abcipp/mempool_tier_test.go, abcipp/proposal_test.go, abcipp/proposals.go, abcipp/query_server_test.go, abcipp/test_helpers_test.go, app/abcipp.go, x/move/keeper/binaries/StdCoinTest.mv, x/move/keeper/binaries/TableGenerator.mv, x/move/keeper/binaries/TestAddress.mv, x/move/keeper/binaries/Vesting.mv, x/move/keeper/binaries/submsg.mv
♻️ Duplicate comments (1)
abcipp/mempool_remove.go (1)
66-79: ⚠️ Potential issue | 🟠 Major
removeQueuedStaleLocked still has nonce-distance scan risk. Line 69 scans queuedMin..end, so sparse nonces can trigger very large cleanup loops even with few queued entries. This is still O(nonce range), not O(#queued entries).
Suggested bounded iteration fix
-	// Intentional tradeoff: iterate by nonce range (queuedMin..end) for simple
-	// boundary-based state updates. This is bounded in practice by small queued
-	// limits (notably maxQueuedPerSender), so sender-local cleanup remains cheap.
-	for nonce := ss.queuedMin; nonce <= end; nonce++ {
-		entry, ok := ss.queued[nonce]
-		if !ok {
-			continue
-		}
-		delete(ss.queued, nonce)
-		p.queuedCount.Add(-1)
-		ss.setQueuedRangeOnRemoveLocked(nonce)
-		removed = append(removed, entry)
-	}
+	for nonce, entry := range ss.queued {
+		if nonce > end {
+			continue
+		}
+		delete(ss.queued, nonce)
+		p.queuedCount.Add(-1)
+		ss.setQueuedRangeOnRemoveLocked(nonce)
+		removed = append(removed, entry)
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@abcipp/mempool_remove.go` around lines 66 - 79, The loop in removeQueuedStaleLocked iterates from ss.queuedMin..end which is O(nonce range) and can be huge; instead, iterate over the actual entries in ss.queued to find nonces within the stale range and delete only those keys so complexity is O(`#queued` entries). Concretely, scan ss.queued map (for nonce, entry := range ss.queued) and if nonce <= end collect those nonces/entries, then for each collected nonce call delete(ss.queued, nonce), p.queuedCount.Add(-1), ss.setQueuedRangeOnRemoveLocked(nonce) and append to removed; keep logic around ss.queuedMin and end unchanged otherwise.
📒 Files selected for processing (8)
abcipp/mempool.go, abcipp/mempool_cleanup.go, abcipp/mempool_cleanup_test.go, abcipp/mempool_event.go, abcipp/mempool_insert.go, abcipp/mempool_remove.go, abcipp/mempool_sender_state.go, app/abcipp.go
🚧 Files skipped from review as they are similar to previous changes (3)
- abcipp/mempool_sender_state.go
- abcipp/mempool_cleanup_test.go
- abcipp/mempool_cleanup.go
🧹 Nitpick comments (1)
abcipp/mempool_cleanup.go (1)
77-124: Minor: Consider unconditional sender cleanup after state update. At line 120, cleanupSenderLocked is only called when stale entries are removed. However, if setOnChainSeqLocked is called but no entries are actually stale, the sender state remains even though it might have become empty from other concurrent operations. This is likely benign since the loop at lines 86-91 already filters out fully empty senders, but for robustness, consider calling cleanupSenderLocked unconditionally after line 119.
♻️ Optional refactor
 	ss.setOnChainSeqLocked(onChainSeq)
 	if staled := p.removeStaleLocked(ss, onChainSeq); len(staled) > 0 {
 		p.enqueueRemovedEvents(staled)
-		p.cleanupSenderLocked(sender)
 	}
+	p.cleanupSenderLocked(sender)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@abcipp/mempool_cleanup.go` around lines 77 - 124, The loop in cleanUpEntries currently calls cleanupSenderLocked only when removeStaleLocked returns non-empty staled entries; update the logic so that after setting ss.setOnChainSeqLocked(onChainSeq) and handling any staled entries via p.removeStaleLocked and p.enqueueRemovedEvents, you call p.cleanupSenderLocked(sender) unconditionally (i.e., always attempt cleanup for that sender) to ensure empty sender state is removed even when no stales were found.
📒 Files selected for processing (7)
abcipp/mempool.go, abcipp/mempool_cleanup.go, abcipp/mempool_cleanup_test.go, abcipp/mempool_remove.go, abcipp/mempool_sender_state.go, abcipp/mempool_test.go, app/abcipp.go
🚧 Files skipped from review as they are similar to previous changes (1)
- abcipp/mempool_sender_state.go
…rank (priority+order) at activation
Actionable comments posted: 3
🧹 Nitpick comments (2)
abcipp/mempool_tier_test.go (1)
80-83: Align this case with its stated intent (or rename it). Lines 80-83 claim to test nonce ordering under equal rank fields, but the fixture uses different priority/order (1, 999), so it actually tests priority/order dominance. Please either equalize rank fields for a true nonce test or adjust the comment/variable name accordingly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@abcipp/mempool_tier_test.go` around lines 80 - 83, the test case variable sameSenderLowerNonceButLowPriority and its comment are inconsistent: it intends to exercise nonce ordering when sender/tier/priority/FIFO rank are equal but mk(1, 1, 999, "sender-a", 2) uses a different priority/order (999), so it actually tests priority dominance; either change the mk call to match base's rank fields (use the same priority/order values as base so only the nonce differs) or rename the variable/comment to reflect that this is a priority/order dominance test; update the mk invocation or the comment/variable name accordingly and ensure compareEntries(base, ...) still asserts the intended ordering.
abcipp/proposals.go (1)
111-114: Use cause-specific removal reasons instead of collapsing to ante-rejected. Line 177 applies RemovalReasonAnteRejectedInPrepare to entries that were also added due to block-size/gas hard limits (Lines 111-114 and Lines 136-139). This conflates distinct removal causes and weakens reason-based observability/handling.
♻️ Suggested shape
-	txsToRemove []TxInfoEntry
+	txsToRemove []struct {
+		entry  TxInfoEntry
+		reason RemovalReason
+	}
@@
-	txsToRemove = append(txsToRemove, TxInfoEntry{Tx: tx, Info: txInfo})
+	txsToRemove = append(txsToRemove, struct {
+		entry  TxInfoEntry
+		reason RemovalReason
+	}{
+		entry:  TxInfoEntry{Tx: tx, Info: txInfo},
+		reason: RemovalReasonCapacityEvicted,
+	})
@@
-	txsToRemove = append(txsToRemove, TxInfoEntry{Tx: tx, Info: txInfo})
+	txsToRemove = append(txsToRemove, struct {
+		entry  TxInfoEntry
+		reason RemovalReason
+	}{
+		entry:  TxInfoEntry{Tx: tx, Info: txInfo},
+		reason: RemovalReasonCapacityEvicted,
+	})
@@
-	txsToRemove = append(txsToRemove, TxInfoEntry{Tx: tx, Info: txInfo})
+	txsToRemove = append(txsToRemove, struct {
+		entry  TxInfoEntry
+		reason RemovalReason
+	}{
+		entry:  TxInfoEntry{Tx: tx, Info: txInfo},
+		reason: RemovalReasonAnteRejectedInPrepare,
+	})
@@
-	for _, entry := range txsToRemove {
-		snapshotHash := TxHash(entry.Info.TxBytes)
-		if currentHash, ok := h.mempool.Lookup(entry.Info.Sender, entry.Info.Sequence); ok && currentHash == snapshotHash {
-			if err := h.mempool.RemoveWithReason(entry.Tx, RemovalReasonAnteRejectedInPrepare); err != nil {
+	for _, item := range txsToRemove {
+		snapshotHash := TxHash(item.entry.Info.TxBytes)
+		if currentHash, ok := h.mempool.Lookup(item.entry.Info.Sender, item.entry.Info.Sequence); ok && currentHash == snapshotHash {
+			if err := h.mempool.RemoveWithReason(item.entry.Tx, item.reason); err != nil {
 				h.logger.Error("failed to remove tx from mempool", "err", err)
 			}
 		}
 	}
Also applies to: 136-139, 177-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@abcipp/proposals.go` around lines 111 - 114, The code currently collapses distinct removal causes into RemovalReasonAnteRejectedInPrepare; instead, when appending txsToRemove in the block-size/gas checks (the TxInfoEntry additions in the size check and the gas/hard-limit branch where txsToRemove is appended) set a cause-specific RemovalReason (e.g., RemovalReasonTooLarge or RemovalReasonBlockLimitExceeded) on the TxInfoEntry instead of leaving it blank for later to be marked as AnteRejected; add or reuse a clear enum/const for these specific reasons, and ensure the later code that sets RemovalReasonAnteRejectedInPrepare (the code handling ante-rejection at the prepare stage) does not overwrite an existing, more specific RemovalReason already attached to the TxInfoEntry.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@abcipp/docs/spec.md`:
- Line 23: The spec uses non-hyphenated compound modifiers; change "Per sender
state" to the hyphenated form "Per-sender state" in the sentence that references
`senderState` and the `senders map[string]*senderState` to fix compound-modifier
hyphenation, and make the same hyphenation adjustment at the other occurrence
referenced (Line 53) so both compound modifiers are consistently written with a
hyphen.
In `@abcipp/mempool.go`:
- Around line 223-230: PromoteQueued currently calls computeClampedRankLocked to
set pe.clampedPriority and pe.clampedOrder but then calls canAcceptLocked with
the original pe.priority (and not the clamped values), causing
admission/eviction to use an overestimated rank; update the PromoteQueued flow
so canAcceptLocked is invoked with pe.clampedPriority and pe.clampedOrder (or
ensure computeClampedRankLocked runs before any admission check and pass its
outputs into canAcceptLocked), modifying the call site that currently passes
pe.priority/pe.order to instead pass pe.clampedPriority/pe.clampedOrder and
adjust any related parameters to preserve semantics.
In `@integration-tests/e2e/mempool/queue_clear_e2e_test.go`:
- Around line 23-27: The current deadline adjustment in the test uses ctx,
cancel = context.WithDeadline(ctx, dl.Add(-10*time.Second)) which can produce a
past deadline and an immediately cancelled context; modify the logic around
t.Deadline() to compute an adjusted deadline = dl.Add(-10*time.Second) but clamp
it with time.Now(): if adjusted.Before(time.Now()) then use
time.Now().Add(smallGrace) or simply skip wrapping the context (i.e., leave ctx
unchanged) so you don't create an already-expired context; keep using
context.WithDeadline, cancel, and t.Cleanup(cancel) as the cleanup mechanism,
and apply the same guard to the other two files
(move_account_queue_clear_e2e_test.go and
move_table_generator_queue_clear_e2e_test.go).
📒 Files selected for processing (18)
Makefile, abcipp/README.md, abcipp/checktx.go, abcipp/docs/invariant.md, abcipp/docs/spec.md, abcipp/mempool.go, abcipp/mempool_insert.go, abcipp/mempool_invariant.go, abcipp/mempool_query_test.go, abcipp/mempool_queue_clear_fuzz_test.go, abcipp/mempool_sender_state.go, abcipp/mempool_tier.go, abcipp/mempool_tier_test.go, abcipp/proposal_test.go, abcipp/proposals.go, integration-tests/e2e/mempool/move_account_queue_clear_e2e_test.go, integration-tests/e2e/mempool/move_table_generator_queue_clear_e2e_test.go, integration-tests/e2e/mempool/queue_clear_e2e_test.go
✅ Files skipped from review due to trivial changes (1)
- abcipp/docs/invariant.md
🚧 Files skipped from review as they are similar to previous changes (3)
- abcipp/mempool_tier.go
- abcipp/mempool_invariant.go
- abcipp/mempool_query_test.go
Description
Closes: #XXXX
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow up issues.
I have...
added `!` in the type prefix if API or client breaking change
Reviewers Checklist
All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.
I have...