This is the Step 1 plan based on the current design docs. The aim is an idiomatic Rust structure with a single-owner core loop and explicit message passing.
`config`
- Parse config + LAN/WAN profiles + limits.
- Load peer list, storage paths, ops endpoints.

`protocol`
- `FrameHeader` encode/decode (see the sketch after this list).
- Message type constants + flags.
- Payload codecs (protobuf/packed) kept separate.

`net`
- TCP accept/connect and per-connection tasks.
- Reader: parse header, read payload, emit `CoreEvent::InboundFrame`.
- Writer: drain priority queues (0..3) with backpressure.

`core`
- Single-owner event loop for consensus, WAL, state machine.
- Owns `RaftState`, `Wal`, `KvState`, `LeaseTable`, `WatchRegistry`.

`raft`
- Role transitions + leader/follower logic + backtrack hints.

`log`
- WAL segment manager, record encoding/decoding, recovery scan.
- Fsync policies (always / group_commit / never).

`snapshot`
- Snapshot read/write format and install protocol.

`state`
- Deterministic apply rules for PUT/DEL/TXN/LEASE_*.

`watch`
- Watch registry, bounded queues, BEHIND policy.

`client`
- Client request handling + RESP encoding + dedupe cache.

`ops` (optional)
- HTTP endpoints: health/ready/metrics/status.
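As a concrete anchor for the `protocol` module, here is a minimal sketch of a fixed-width `FrameHeader` with encode/decode. The field widths, wire order, and flag values are assumptions for illustration, not decisions made by this plan.

```rust
// protocol/frame.rs — hypothetical FrameHeader layout; field sizes and the
// FLAG_COMPRESSED bit are illustrative assumptions.
use std::io::{self, Read, Write};

pub const FLAG_COMPRESSED: u8 = 0b0000_0001;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct FrameHeader {
    pub msg_type: u8,     // message type constant (APPEND, VOTE_REQ, ...)
    pub flags: u8,        // bitflags, e.g. FLAG_COMPRESSED
    pub payload_len: u32, // payload bytes that follow the header
}

impl FrameHeader {
    pub const WIRE_LEN: usize = 6;

    /// Encode as fixed-width big-endian bytes.
    pub fn encode<W: Write>(&self, w: &mut W) -> io::Result<()> {
        w.write_all(&[self.msg_type, self.flags])?;
        w.write_all(&self.payload_len.to_be_bytes())
    }

    /// Decode from exactly WIRE_LEN bytes.
    pub fn decode<R: Read>(r: &mut R) -> io::Result<Self> {
        let mut buf = [0u8; Self::WIRE_LEN];
        r.read_exact(&mut buf)?;
        Ok(Self {
            msg_type: buf[0],
            flags: buf[1],
            payload_len: u32::from_be_bytes([buf[2], buf[3], buf[4], buf[5]]),
        })
    }
}
```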
`NodeCore` is the single mutable owner of all consensus-critical state:
- Consensus/role state: `role`, `term`, `voted_for`, `next_index`, `match_index`
- WAL handle and metadata
- KV state + global revision + key revisions
- Lease table and expiry queue
- Watch registry + per-watcher queues
- Commit/apply indices
Everything else sends events to the core via bounded channels.
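A sketch of what that ownership looks like in code, assuming the field names listed above; the stubbed helper types stand in for the real `log`/`state`/`watch` module types.

```rust
// core/node.rs — sketch of the single mutable owner. The helper types below
// are stubs standing in for the real module types named in this plan.
use std::collections::BTreeMap;

pub enum Role { Follower, Candidate, Leader }
pub struct Wal;           // log: WAL segment manager handle (stub)
pub struct KvState;       // state: KV map + global/per-key revisions (stub)
pub struct LeaseTable;    // lease table + expiry queue (stub)
pub struct WatchRegistry; // watch registry + per-watcher queues (stub)

pub struct NodeCore {
    // Consensus/role state
    role: Role,
    term: u64,
    voted_for: Option<u64>,
    next_index: BTreeMap<u64, u64>,  // per-peer (leader only)
    match_index: BTreeMap<u64, u64>, // per-peer (leader only)
    // Durable log
    wal: Wal,
    // Applied state
    kv: KvState,
    leases: LeaseTable,
    watches: WatchRegistry,
    // Progress markers
    commit_index: u64,
    apply_index: u64,
}
```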
- Listener task
  - Accepts TCP connections, spawns per-connection tasks.
- Connection reader
  - Parses `FrameHeader`, reads payload, sends to core (see the sketch after this list).
- Connection writer
  - Consumes outbound frames with priorities.
- Core task
  - Single event loop receives `CoreEvent`s from net/timers/client.
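A sketch of the per-connection reader task under those assumptions: `FrameHeader` is from the protocol sketch above, `CoreEvent` is sketched in the next section, and error handling is reduced to propagating `io::Error`.

```rust
// net/reader.rs — sketch of a per-connection reader. peer_id is assumed to be
// resolved at accept/handshake time; CoreEvent comes from the core sketches.
use tokio::io::AsyncReadExt;
use tokio::net::tcp::OwnedReadHalf;
use tokio::sync::mpsc;

pub async fn reader_task(
    peer_id: u64,
    mut rd: OwnedReadHalf,
    core_tx: mpsc::Sender<CoreEvent>,
) -> std::io::Result<()> {
    loop {
        // Fixed-width header first.
        let mut hdr_buf = [0u8; FrameHeader::WIRE_LEN];
        rd.read_exact(&mut hdr_buf).await?;
        let header = FrameHeader::decode(&mut &hdr_buf[..])?;

        // Then exactly payload_len bytes of payload.
        let mut payload = vec![0u8; header.payload_len as usize];
        rd.read_exact(&mut payload).await?;

        // Bounded send: blocks when the core is busy, giving backpressure.
        if core_tx
            .send(CoreEvent::InboundFrame { peer_id, header, payload })
            .await
            .is_err()
        {
            return Ok(()); // core shut down; exit the task cleanly
        }
    }
}
```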
Use bounded `tokio::mpsc` channels for explicit backpressure.

Into the core:
- `InboundFrame { peer_id, header, payload }`
- `ClientRequest { client_id, request_id, msg }`
- `TickHeartbeat`
- `TickElection`
- `WalAppended { index }`
- `SnapshotComplete { last_included_index }`

Out of the core:
- `NetSend { peer_id, frame, priority }`
- `ClientResponse { client_id, resp }`
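Sketched as Rust enums, with payload/message/response types simplified to byte vectors; the channel constructor just makes the bounded wiring explicit, and the depth is a tunable, not a fixed choice.

```rust
// core/event.rs — sketch of the channel message types above. Payloads are
// simplified to Vec<u8>; real codecs live in the protocol module, and
// FrameHeader is from the protocol sketch.
use tokio::sync::mpsc;

pub enum CoreEvent {
    InboundFrame { peer_id: u64, header: FrameHeader, payload: Vec<u8> },
    ClientRequest { client_id: u64, request_id: u64, msg: Vec<u8> },
    TickHeartbeat,
    TickElection,
    WalAppended { index: u64 },
    SnapshotComplete { last_included_index: u64 },
}

pub enum CoreOutput {
    NetSend { peer_id: u64, frame: Vec<u8>, priority: u8 },
    ClientResponse { client_id: u64, resp: Vec<u8> },
}

/// Bounded channel into the core: senders block when it fills up.
pub fn core_channel(depth: usize) -> (mpsc::Sender<CoreEvent>, mpsc::Receiver<CoreEvent>) {
    mpsc::channel(depth)
}
```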
- Inbound APPEND
  - Validate term, reset election timer, check `prev_log`.
  - Append to WAL, update `commit_index`.
  - Apply committed entries, ACK with `match_index`.
- Client PUT/DEL/TXN
  - Append WAL entry, replicate, wait for quorum.
  - Fsync per policy, apply, respond via RESP.
- Heartbeats
  - Periodic APPEND (no entries) to followers.
- Elections
  - On timeout, transition to candidate, send VOTE_REQ.
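Taken together, the core task reduces to one loop over events. A minimal sketch, assuming the `NodeCore` and `CoreEvent` types sketched above; the `on_*` handlers are hypothetical names for the flows just listed.

```rust
// core/run.rs — sketch of the single-owner event loop. The on_* methods are
// hypothetical handlers implementing the flows described above.
use tokio::sync::mpsc;

pub async fn run_core(mut core: NodeCore, mut events: mpsc::Receiver<CoreEvent>) {
    while let Some(ev) = events.recv().await {
        match ev {
            CoreEvent::InboundFrame { peer_id, header, payload } => {
                core.on_frame(peer_id, header, payload); // APPEND/VOTE_REQ/... dispatch
            }
            CoreEvent::ClientRequest { client_id, request_id, msg } => {
                core.on_client(client_id, request_id, msg); // dedupe, then propose
            }
            CoreEvent::TickHeartbeat => core.on_heartbeat_tick(), // leader: APPEND (no entries)
            CoreEvent::TickElection => core.on_election_tick(),   // follower: maybe become candidate
            CoreEvent::WalAppended { index } => core.on_wal_appended(index),
            CoreEvent::SnapshotComplete { last_included_index } => {
                core.on_snapshot_done(last_included_index);
            }
        }
    }
}
```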
Keep correctness in core, offload heavy work to background tasks:
- Snapshot encoding/writing.
- Optional compression / checksum verification.
Background tasks return results to the core via events.
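For example, snapshot writing might be handed off like this, with the result flowing back as an ordinary `CoreEvent`; the function and its parameters are illustrative, not part of the plan.

```rust
// core/snapshot_task.rs — sketch: heavy snapshot I/O runs off the core task
// and reports completion back as an event. snapshot_bytes is assumed to be
// state already serialized (and cloned out of) the core.
use tokio::sync::mpsc;

pub fn spawn_snapshot_writer(
    snapshot_bytes: Vec<u8>,
    last_included_index: u64,
    path: std::path::PathBuf,
    core_tx: mpsc::Sender<CoreEvent>,
) {
    tokio::spawn(async move {
        // Blocking file I/O is moved off the async runtime worker threads.
        let res =
            tokio::task::spawn_blocking(move || std::fs::write(&path, &snapshot_bytes)).await;
        if matches!(res, Ok(Ok(()))) {
            // Result returns to the core as an ordinary event.
            let _ = core_tx
                .send(CoreEvent::SnapshotComplete { last_included_index })
                .await;
        }
    });
}
```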
Suggested build order:
1. `protocol` (FrameHeader + constants)
2. `net` (connection read/write, priority queues)
3. `core` skeleton (event loop + channel wiring)
4. `log` (WAL append + recovery scan)
5. `raft` (roles + election timers)
6. `state` (apply rules)
7. `client` (RESP + dedupe)
8. `snapshot` (install + compaction)
9. `watch` (queue + BEHIND)
10. `ops` (metrics, health, status)
This plan is intended to be idiomatic, minimal in shared mutability, and easy to reason about during interviews.