## Problem
When the unsafe block signer address is rotated in the L1 `SystemConfig` contract, op-node verifier nodes experience a stale signer window during which they reject valid blocks from the new signer.
The unsafe block signer is loaded as a runtime config — it's explicitly excluded from derivation (`system_config.go:142-144`) and instead fetched by a periodic background reload loop (`node.go:454-487`) that reads the value from L1 contract storage (`eth_getStorageAt`) at a configurable interval (default: 10 minutes).
During gossip validation, the block signature is checked against the currently loaded signer address (`gossip.go:445-463`). If the signer was rotated on L1 but the node hasn't reloaded yet, blocks signed by the new legitimate signer are rejected.
Worst-case activation delay: `VerifierConfDepth` × L1 block time + reload interval ≈ 11 minutes with defaults.
During this window:
- Blocks from the new (legitimate) signer are rejected
- Blocks from the old signer continue to be accepted
- The node falls behind on the unsafe chain until the reload fires
## Proposed Solution
Add a reactive reload mechanism: when a gossip block fails signature verification, trigger an early runtime config reload (rate-limited to prevent abuse).
### Design
- Reload channel: Pass a `chan struct{}` into `BuildBlocksValidator` alongside the existing `GossipRuntimeConfig`. On signature mismatch, send a non-blocking signal on this channel.
- Rate limiting: The reloader goroutine in `initRuntimeConfig` consumes from this channel but enforces a minimum cooldown (e.g., 2 seconds) between reactive reloads to prevent L1 RPC abuse from attacker-crafted invalid blocks.
- Confirmation depth: Reactive reloads still apply `VerifierConfDepth` — the signer rotation should be timed so that the L1 update is confirmed before the sequencers switch over.
- Validation result: Use `pubsub.ValidationIgnore` instead of `pubsub.ValidationReject` when triggering a reactive reload, so the peer is not penalized for a potentially legitimate signer rotation.
This reduces worst-case activation delay to `VerifierConfDepth` × L1 block time + 2s cooldown.
### Future improvement: per-peer grace window
A more nuanced approach would track per-peer invalid signature counts and allow each peer a small number (e.g., 3) of `ValidationIgnore` responses before escalating to `ValidationReject`. This would:
- Avoid penalizing honest peers during a signer rotation
- Still penalize peers that persistently send invalid signatures
- Reset the counter when the signer address changes (detected via `P2PSequencerAddress()`)
This could use an LRU cache of `peer.ID → count`, purged on signer change, similar to the existing `seenBlocks` pattern. This is left as a follow-up to keep the initial implementation simple.
But this may be more complex than necessary. A simpler alternative: gate the behavior behind an `--expect-unsafe-signer-change` flag, enable reactive runtime config reloading only when that flag is set, and only then use `ValidationIgnore` instead of `ValidationReject`.
🤖 Generated by Claude Code and Seb (human)