Refactor ServerPool to use StringArrayHashMapUnmanaged #13
Conversation
- Replace separate ArrayList + StringHashMap with unified StringArrayHashMapUnmanaged
- Add key field to Server struct to store host:port for efficient lookups
- Update getNextServer to use stored keys with fetchSwapRemove/putAssumeCapacity
- Remove getCurrentServer method as it's no longer needed
- Add getFirstServer helper method for cleaner code
- Fix shuffle implementation to work with ArrayHashMap using entries.swap + reIndex
- All tests passing with improved performance and cleaner API (a rough sketch of the resulting layout follows)
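For orientation, here is a minimal hedged sketch of the shapes this change implies. The `key` field and the `StringArrayHashMapUnmanaged(*Server)` map come from the description above; everything else (field names, defaults) is an assumption, not the repository's actual code.

```zig
const std = @import("std");

// Hypothetical shapes inferred from the PR description; real fields differ.
const Server = struct {
    key: []const u8, // "host:port", allocated in init, freed in deinit
    reconnects: u32 = 0,
};

const ServerPool = struct {
    allocator: std.mem.Allocator,
    // One ordered, string-keyed map replaces the old
    // ArrayList(*Server) + StringHashMap(void) pair.
    servers: std.StringArrayHashMapUnmanaged(*Server) = .{},
};
```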
Walkthrough

Refactors server pooling to use a host:port-keyed StringArrayHashMap for server storage, adds a per-server computed key, updates the add/shuffle/rotation logic and memory management, and replaces the URL hash map. Also removes a single blank line in the connection reconnection path. The public API is adjusted with new and removed functions and fields.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as Caller
    participant SP as ServerPool
    participant S as Server
    Note over SP: addServer(url_str, is_implicit)
    C->>SP: addServer(url_str, is_implicit)
    SP->>S: init(url_str)
    S-->>SP: key = "host:port"
    SP->>SP: getOrPut(key)
    alt key already exists
        SP->>S: deinit() / destroy duplicate
        SP-->>C: return
    else key new
        alt first explicit URL
            SP->>S: set default user/pwd
        end
        SP-->>C: return
    end
```
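As a rough illustration of the add path in this diagram, building on the sketch above; `Server.init`/`deinit` signatures here are assumptions, not the actual API:

```zig
// Hedged sketch of the dedup flow; not the repository's actual code.
fn addServer(self: *ServerPool, url_str: []const u8) !void {
    const server = try Server.init(self.allocator, url_str); // computes "host:port" key
    errdefer {
        server.deinit(self.allocator);
        self.allocator.destroy(server);
    }
    const gop = try self.servers.getOrPut(self.allocator, server.key);
    if (gop.found_existing) {
        // Key already present: tear down the duplicate, keep the old entry.
        server.deinit(self.allocator);
        self.allocator.destroy(server);
        return;
    }
    // New key: the map now references the slice owned by this Server.
    gop.value_ptr.* = server;
}
```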
```mermaid
sequenceDiagram
    autonumber
    participant C as Consumer
    participant SP as ServerPool
    participant S as Current Server
    Note over SP: getNextServer(current, max_reconnect)
    C->>SP: getNextServer(current, max_reconnect)
    alt current is null
        SP-->>C: getFirstServer()
    else current exists
        alt max_reconnect < 0 or current.reconnects < max_reconnect
            SP->>SP: remove current by key
            SP->>SP: re-insert at end
            SP-->>C: getFirstServer()
        else exceeded max
            SP->>SP: remove current by key
            SP->>S: deinit() / free key / destroy
            SP-->>C: getFirstServer()
        end
    end
```
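A matching sketch of this diagram's branches, again with assumed signatures; the `orelse` fallback mirrors the defensive guard suggested in the review below:

```zig
// Sketch of the rotation/removal branches; types and the getFirstServer
// helper are assumptions based on the review text, not the actual source.
fn getNextServer(self: *ServerPool, current: ?*Server, max_reconnect: i64) ?*Server {
    const current_srv = current orelse return self.getFirstServer();
    const removed = self.servers.fetchSwapRemove(current_srv.key) orelse
        return self.getFirstServer();
    if (max_reconnect < 0 or current_srv.reconnects < max_reconnect) {
        // Rotate: re-insert at the end of the ordered map (capacity is
        // guaranteed because we just removed an entry).
        self.servers.putAssumeCapacity(removed.key, removed.value);
    } else {
        // Exceeded max reconnects: destroy it; deinit also frees the key,
        // as clarified later in this thread.
        removed.value.deinit(self.allocator);
        self.allocator.destroy(removed.value);
    }
    return self.getFirstServer();
}
```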
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/server_pool.zig (3)
118-124: Defensive guard: avoid `.?` panic if key not found

`fetchSwapRemove(current_srv.key).?` assumes the key is always present. If `current_server` gets out of sync (e.g., external misuse, or future changes), this will panic in Debug or worse in Release. A cheap guard keeps behavior predictable by falling back to current-first selection.

```diff
-        // Remove using the stored key and get the mutable server back
-        const removed_server = self.servers.fetchSwapRemove(current_srv.key).?;
+        // Remove using the stored key and get the mutable server back
+        const maybe_removed = self.servers.fetchSwapRemove(current_srv.key);
+        if (maybe_removed == null) {
+            // If not found, gracefully pick the first available server
+            return self.getFirstServer();
+        }
+        const removed_server = maybe_removed.?;
@@
-        const removed_server = self.servers.fetchSwapRemove(current_srv.key).?;
+        const maybe_removed = self.servers.fetchSwapRemove(current_srv.key);
+        if (maybe_removed == null) {
+            return self.getFirstServer();
+        }
+        const removed_server = maybe_removed.?;
```

Would you like me to push this safety change as a small follow-up?
Also applies to: 125-129
148-156: Unify RNG API and seed casting with the rest of the codebase

Elsewhere (connection.zig) you use `std.Random.DefaultPrng`. Here you use `std.rand.DefaultPrng` and seed via `@bitCast(std.time.nanoTimestamp())`, which is brittle across Zig versions and integer widths. Prefer `std.Random.DefaultPrng` and a clear cast from `milliTimestamp()` for consistency and portability.

```diff
-    var prng = std.rand.DefaultPrng.init(@bitCast(std.time.nanoTimestamp()));
-    const random = prng.random();
+    var prng = std.Random.DefaultPrng.init(@as(u64, @intCast(std.time.milliTimestamp())));
+    const random = prng.random();
```

No functional behavior change, just consistency and safer seeding.
Also applies to: 159-160
146-160: Shuffle over entries + reIndex is correct; minor note on offset bounds

The Fisher–Yates shuffle over `entries`, followed by `reIndex`, preserves order-aware hashing. The `offset` guard is correct. After adopting the RNG unification (above), this is good to go.

If you want, I can add a small unit test verifying that `shuffle(offset)` preserves the prefix and rehashes correctly (e.g., lookups by key still succeed post-shuffle); a possible shape is sketched below.
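A hedged sketch of that test; the `ServerPool.init`, `addServer`, and `shuffle` signatures, and the `nats://` placeholder URLs, are assumptions that may differ from the actual code:

```zig
test "shuffle(offset) preserves prefix and keeps key lookups valid" {
    // ServerPool/addServer/shuffle signatures are assumed, not the
    // repository's actual API.
    var pool = ServerPool.init(std.testing.allocator);
    defer pool.deinit();

    try pool.addServer("nats://a:4222", false);
    try pool.addServer("nats://b:4222", false);
    try pool.addServer("nats://c:4222", false);

    const first_key = pool.servers.keys()[0];
    pool.shuffle(1); // keep index 0 fixed, shuffle the rest

    // The offset guard must leave the prefix untouched...
    try std.testing.expectEqualStrings(first_key, pool.servers.keys()[0]);
    // ...and reIndex must keep hash lookups working after entries.swap.
    for (pool.servers.keys()) |k| {
        try std.testing.expect(pool.servers.contains(k));
    }
}
```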
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- src/connection.zig (1 hunk)
- src/server_pool.zig (5 hunks)
🔇 Additional comments (5)
src/connection.zig (1)
1090-1096: No-op whitespace tweak is fine

The blank-line removal around restarting the flusher thread is harmless and keeps the reconnection sequence intact (flags set under mutex, then spawn). No action needed.
src/server_pool.zig (4)
23-39: Keyed Server lifecycle looks solid

Storing `host:port` in `Server.key`, allocating in `init` and freeing in `deinit`, is clean. The `errdefer` paths correctly unwind both `Url` and `key` allocations. This dovetails well with the hashmap-based pool.

Also applies to: 41-46
50-59: Switch to StringArrayHashMapUnmanaged simplifies pool and reduces lookups

Consolidating to a single ordered string-keyed map is a good move: O(1) lookup/rotation via key, fewer structures to maintain, and a simpler `deinit`. Using `iterator()` + `destroy` in `deinit` is correct; keys are then released by `servers.deinit(allocator)`.

Also applies to: 63-74
139-142: getFirstServer helper is clear and efficient

Using `values()[0]` maintains insertion-order semantics and matches your rotation logic. Straightforward and readable; a minimal sketch follows.
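A minimal sketch of the helper as described; the empty-pool guard is an assumption, since the review only confirms the `values()[0]` access:

```zig
fn getFirstServer(self: *ServerPool) ?*Server {
    // Assumed guard; not confirmed by the review text.
    if (self.servers.count() == 0) return null;
    // values() reflects insertion order for array hash maps.
    return self.servers.values()[0];
}
```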
107-133: Interaction note: reconnection counters drive removal now
`getNextServer` reads `current_srv.reconnects` to decide between rotation and removal. Given that the current connection logic increments `reconnects` in multiple places during errors, verify that you don't effectively halve the intended `max_reconnect` by double-incrementing per failed attempt. If needed, centralize the increment in one place.

I can scan the repo and propose an adjustment if you want to align with the C client's semantics.
The flagged lines:

```zig
            // Remove the server permanently (like C library lines 104-106)
            const removed_server = self.servers.fetchSwapRemove(current_srv.key).?;
            removed_server.value.deinit(self.allocator);
            self.allocator.destroy(removed_server.value);
        }
```
Memory leak on permanent removal: free removed key slice
When a server exceeds `max_reconnect`, you destroy the `*Server` but don't free the map-owned key returned by `fetchSwapRemove`. With `StringArrayHashMapUnmanaged`, the removed key's memory becomes caller-owned and must be freed. Add a free to avoid leaking the key.
```diff
         } else {
             // Remove the server permanently (like C library lines 104-106)
-            const removed_server = self.servers.fetchSwapRemove(current_srv.key).?;
-            removed_server.value.deinit(self.allocator);
-            self.allocator.destroy(removed_server.value);
+            const removed_server = self.servers.fetchSwapRemove(current_srv.key).?;
+            // Free the key slice that was owned by the map
+            self.allocator.free(removed_server.key);
+            // Tear down the server
+            removed_server.value.deinit(self.allocator);
+            self.allocator.destroy(removed_server.value);
         }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```zig
        } else {
            // Remove the server permanently (like C library lines 104-106)
            const removed_server = self.servers.fetchSwapRemove(current_srv.key).?;
            // Free the key slice that was owned by the map
            self.allocator.free(removed_server.key);
            // Tear down the server
            removed_server.value.deinit(self.allocator);
            self.allocator.destroy(removed_server.value);
        }
```
🤖 Prompt for AI Agents
```text
In src/server_pool.zig around lines 125 to 129, when permanently removing a
server you call self.servers.fetchSwapRemove(current_srv.key) and deinit/destroy
the removed_server.value but forget to free the map-owned key slice returned by
fetchSwapRemove; for StringArrayHashMapUnmanaged the removed key becomes
caller-owned and must be freed. After obtaining removed_server from
fetchSwapRemove and before leaving the block, call allocator.free on
removed_server.key (or the appropriate slice-free API) to release the key
memory, ensuring you free it only once and handle any null/empty cases safely.
```
A follow-up comment pointed at the existing Server.deinit(), which already frees the key:

```zig
pub fn deinit(self: *Server, allocator: Allocator) void {
    self.parsed_url.deinit();
    allocator.free(self.key);
    if (self.tls_name) |t| allocator.free(t);
}
```
@lalinsky You're absolutely right! I apologize for the confusion. Looking at the Server.deinit() method, I can see that `allocator.free(self.key)` is already being called, which means the key memory is properly freed when `removed_server.value.deinit(self.allocator)` executes.

The memory management is correct as implemented:

- `fetchSwapRemove()` returns the key-value pair, where `removed_server.key` points to the same memory as `removed_server.value.key`
- When `removed_server.value.deinit()` is called, it frees the key via `allocator.free(self.key)`
- No additional `free()` call is needed on the removed key

Thank you for the clarification - the code handles memory cleanup properly through the Server's deinit method.
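That ownership behavior is easy to confirm in isolation. The standalone test below (hypothetical, not from this repo) shows that `fetchSwapRemove` hands back the exact key slice that was inserted, so a single free suffices:

```zig
const std = @import("std");

test "fetchSwapRemove returns the map-owned key slice" {
    const gpa = std.testing.allocator;
    var map: std.StringArrayHashMapUnmanaged(u32) = .{};
    defer map.deinit(gpa);

    const key = try gpa.dupe(u8, "host:4222");
    try map.put(gpa, key, 1);

    const removed = map.fetchSwapRemove("host:4222").?;
    // The KV's key is the exact slice we inserted, not a copy...
    try std.testing.expect(removed.key.ptr == key.ptr);
    // ...so freeing it once is sufficient.
    gpa.free(removed.key);
}
```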
Summary
Changes

- Added a `key` field to `Server` to store the host:port string; updated `init`/`deinit` accordingly
- Replaced `ArrayList(*Server)` + `StringHashMap(void)` with `StringArrayHashMapUnmanaged(*Server)`
- Rotation now uses `fetchSwapRemove`/`putAssumeCapacity` on stored keys
- Shuffle reimplemented via `entries.swap()` + `reIndex()`

Benefits
Testing
Summary by CodeRabbit
Refactor
Bug Fixes
Style