[WIP] - Implementing connection pooling #7
Conversation
Walkthrough

This update introduces a connection pooling mechanism in the main application file and refines client connection management. The changes replace the previous single-connection approach with a more robust, pooled model that manages network connections using a dedicated pool structure, mutexes, and channels. In addition, basic error-handling improvements and retry logic are integrated, and a new entry is added to the `.gitignore`.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant PoolManager as Pool
    participant Conn as Connection
    Client->>PoolManager: Request connection (getOrCreatePool)
    PoolManager-->>Client: Return pooled connection (leaseConnectionFromPool)
    Client->>Conn: Execute command (fire)
    Conn-->>Client: Return response
    Client->>PoolManager: Return connection to pool (returnToConnectionPool)
```
Actionable comments posted: 1
🧹 Nitpick comments (9)
main.go (9)
17-19: Consider making `poolSize` configurable.

Currently, the maximum connection limit is hardcoded to 10. You mentioned a future plan for a `WithMaxConnections` option; implementing that now, or adding a placeholder config, would improve flexibility.
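A minimal sketch of what such an option could look like, using the functional-options pattern; `Option`, `config`, and `WithMaxConnections` are illustrative names, not the SDK's actual API:

```go
package main

import "fmt"

// defaultPoolSize mirrors the hardcoded limit mentioned in the review.
const defaultPoolSize = 10

// Option is a hypothetical functional option for client configuration.
type Option func(*config)

type config struct {
	maxConnections int
}

// WithMaxConnections overrides the default pool size.
func WithMaxConnections(n int) Option {
	return func(c *config) { c.maxConnections = n }
}

// newConfig applies the defaults, then any user-supplied options.
func newConfig(opts ...Option) *config {
	c := &config{maxConnections: defaultPoolSize}
	for _, opt := range opts {
		opt(c)
	}
	return c
}

func main() {
	fmt.Println(newConfig().maxConnections)                       // default limit
	fmt.Println(newConfig(WithMaxConnections(32)).maxConnections) // user override
}
```

This keeps the existing call sites working (`newConfig()` with no arguments preserves the current behaviour) while letting callers set the limit once at SDK initialization.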
21-24: Evaluate the use of `sync.Map` vs. a standard map with a mutex.

`sync.Map` can be beneficial for certain concurrent use cases, but in some scenarios a well-structured map protected by a single mutex might be simpler and perform better. Consider verifying whether the read-mostly patterns `sync.Map` is optimized for apply here.
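For comparison, a mutex-guarded plain-map version of the pool registry might look like this (the `pool` type and `registry` name are illustrative stand-ins, not the PR's actual types):

```go
package main

import (
	"fmt"
	"sync"
)

// pool is a stand-in for the real connection pool type.
type pool struct{ key string }

// registry guards a plain map with a single mutex — the simpler
// alternative to sync.Map that the review suggests evaluating.
type registry struct {
	mu    sync.Mutex
	pools map[string]*pool
}

// getOrCreate returns the existing pool for key, creating it if absent.
// The whole check-then-create runs under one lock, so no duplicate
// pools can be created by concurrent callers.
func (r *registry) getOrCreate(key string) *pool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if p, ok := r.pools[key]; ok {
		return p
	}
	p := &pool{key: key}
	r.pools[key] = p
	return p
}

func main() {
	r := &registry{pools: make(map[string]*pool)}
	a := r.getOrCreate("localhost:7379")
	b := r.getOrCreate("localhost:7379")
	fmt.Println(a == b) // same key yields the same pool
}
```

The mutex version also avoids the type assertions that `sync.Map` forces on every load, at the cost of serializing lookups.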
27-29: Inconsistent naming of exported vs. unexported fields.

`Id` is exported, whereas `host` and `port` are not. If they are intended to be read outside this package, consider capitalizing them. Otherwise, consider making `Id` unexported for consistency.
34-36: Ensure concurrent access to pool fields is handled consistently.

`pool.mu` protects `p.available`, but you may need additional documentation or checks for other fields like `host` and `port` to avoid race conditions.
78-109: Improve error messaging in `NewClient`.

When the handshake fails, you return a generic error message. Consider adding context about the host/port to simplify debugging.
170-184: Ensure consistent connection-closure handling.

Returning a connection to the pool or closing it is correct, but consider routing the logging for these events through a structured logger instead of `fmt.Println` for production readiness.
186-205: A synchronized `fire` might degrade throughput.

A single `fireCmdMutex` gating all command writes can become a bottleneck. Consider using per-connection synchronization or a more granular approach if throughput becomes a concern.
233-235: Avoid repeatedly calling `io.EOF.Error()`.

Repeatedly allocating the string from `io.EOF.Error()` can be replaced with a constant if performance is critical. The current approach is fine, but it's a minor optimization.
294-309: Logging approach in `checkAndReconnect`.

Using `fmt.Println` for error reporting and reconnection status is workable for development. For production, consider switching to a structured logger or a dedicated logging library.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
`.gitignore` (1 hunks)
`main.go` (6 hunks)
🔇 Additional comments (6)
.gitignore (1)
1-1: Add IDE directory to `.gitignore`.

This addition is appropriate to keep JetBrains IDE settings out of version control.
main.go (5)
43-45: Key-generation logic appears sound.

Combining host and port via `fmt.Sprintf("%s:%d", host, port)` provides a clear unique identifier.
47-61: Validate the pool-creation workflow.

The `LoadOrStore` approach is safe for concurrency. Just ensure that external calls (e.g., potential expansions to warm connections) remain mutually coherent when multiple goroutines call `getOrCreatePool` concurrently.
111-168: Examine concurrency under load.

Multiple goroutines calling `leaseConnectionFromPool` can trigger concurrent logic (e.g., the capacity checks around line 133). The locking strategy seems correct, but concurrency load tests would help ensure there are no race conditions or channel contention.

Would you like to generate a shell script to grep for concurrency tests and highlight any potential coverage gaps?
206-231: Single retry logic in `Fire`.

Retriable connection errors are only retried once. This is likely sufficient, but confirm whether users need more robust retry or backoff strategies for high-availability scenarios.
326-330: Watch-connection closure.

You correctly set `c.watchConn = nil` after closing it, preventing double-closure issues. The overall resource cleanup is good.
Actionable comments posted: 0
🧹 Nitpick comments (8)
main.go (8)
17-20: Make the pool size configurable.

Currently, `poolSize` is hardcoded to 10. It would be beneficial to introduce a `WithMaxConnections(int)` option or similar approach to make this limit user-configurable.
34-40: Consider additional fields or structures for pool management.

The `pool` struct effectively manages concurrent usage. For advanced scenarios like minimum pool sizes or async refreshes, consider extra state or channels for staleness checks, as noted in the PR objectives.
85-99: Releasing the connection immediately after creation might reduce reuse benefits.

In `NewClient`, you lease a connection but immediately defer returning it. Consider persisting a dedicated connection if you want each client to maintain a persistent channel to the pool.
101-144: Connection-leasing logic is sound, but consider a blocking wait or backoff option.

Right now, if no connection is available, you immediately attempt to create a new one (if the pool isn't at capacity). Optionally, a short blocking interval or exponential backoff could help manage spikes in demand.
146-215: Handshake code is repeated; consider a shared handshake function.

You handle handshakes in multiple places. A single helper (e.g., `performHandshake(conn, clientID, mode)`) could reduce duplication and improve maintainability.
237-290: Retry logic in `Fire` is a good start and could be expanded.

The single retry is helpful, but a loop or backoff might better handle sporadic network issues. If repeated attempts fail, you could provide clearer error messaging.
327-333: Consider centralizing handshake logic for watch connections.

You repeat the handshake approach for watch flows. A unified handshake helper could reduce duplication.
347-387: Reconnect logic is functional, but adding backoff could improve resilience.

If repeated errors occur, using an exponential backoff strategy helps prevent tight reconnect loops.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`main.go` (6 hunks)
🔇 Additional comments (9)
main.go (9)
22-24: Use of `sync.Map` looks good.

Storing pools keyed by host and port in a concurrent map is appropriate for thread-safe lookups and updates.
27-32: Initialization of new Client fields is clear.

Exposing `Id` publicly and adding `host`/`port` fields matches the new connection-pooling design.
42-45: The `pooledConn` structure is concise.

Storing the last client ID helps ensure proper re-handshakes when reusing connections.
49-51: Key generation for host/port is straightforward.

This approach reliably identifies a unique pool per host and port.
53-68: Creating or retrieving the pool is well handled.

The usage of `LoadOrStore` ensures thread-safe pool retrieval or creation. Keep in mind any cleanup strategy for stale or unused pools, if needed.
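The `LoadOrStore` pattern referred to here can be illustrated as follows (the `pool` type and the key are stand-ins for the real types):

```go
package main

import (
	"fmt"
	"sync"
)

// pool is a stand-in for the real connection pool type.
type pool struct{ key string }

var connectionPools sync.Map

// getOrCreatePool returns the pool for key, creating it if absent.
// LoadOrStore guarantees that even if many goroutines race to create
// a pool for the same key, exactly one instance survives and all
// callers receive it.
func getOrCreatePool(key string) *pool {
	if existing, ok := connectionPools.Load(key); ok {
		return existing.(*pool)
	}
	actual, _ := connectionPools.LoadOrStore(key, &pool{key: key})
	return actual.(*pool)
}

func main() {
	var wg sync.WaitGroup
	results := make([]*pool, 8)
	for i := range results {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = getOrCreatePool("localhost:7379")
		}(i)
	}
	wg.Wait()
	for _, p := range results[1:] {
		if p != results[0] {
			panic("multiple pools created for one key")
		}
	}
	fmt.Println("single pool shared by all goroutines")
}
```

Note that the losing goroutine's `&pool{...}` is simply discarded, which is cheap here; it would matter more if pool construction dialed connections eagerly.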
217-235: Effective strategy for returning or closing connections.

Closing connections when the pool is full avoids resource leaks. However, if usage spikes temporarily, disposing of connections might be suboptimal. Consider waiting in a queue or tracking connection-usage patterns.
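The return-or-close behaviour described above can be sketched with a non-blocking channel send (the `pooledConn` stub and names are illustrative, not the PR's actual types):

```go
package main

import "fmt"

// pooledConn is a stand-in for the real pooled connection type.
type pooledConn struct {
	id     int
	closed bool
}

func (c *pooledConn) Close() { c.closed = true }

// returnToPool hands the connection back via a non-blocking send;
// when the pool's buffer is already full, the surplus connection is
// closed instead of being leaked.
func returnToPool(available chan *pooledConn, conn *pooledConn) {
	select {
	case available <- conn:
		// Slot free: connection stays warm for the next lease.
	default:
		// Pool full: dispose of the surplus connection.
		conn.Close()
	}
}

func main() {
	available := make(chan *pooledConn, 1) // pool capacity of one
	a, b := &pooledConn{id: 1}, &pooledConn{id: 2}
	returnToPool(available, a)
	returnToPool(available, b) // buffer already holds a, so b is closed
	fmt.Println(a.closed, b.closed)
}
```

The `select`/`default` form is what makes the full-pool case non-blocking; a plain channel send would instead stall the caller until a slot freed up.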
292-294: Connection-error checks are straightforward.

Validating EOF or EPIPE covers the typical broken-pipe scenarios.
318-320: Allocating a dedicated watch connection bypasses the pool.

This may be intentional, but verify whether you want watchers to also leverage the pool for consistency.
390-393: Closing the watch connection is clean.

Explicitly closing `watchConn` prevents resource leaks. This approach is consistent with the rest of the design.
@lucifercr07 sounds interesting. You'll have to pull the latest changes though, since we changed the implementation of creating a connection.
@mintobit will rebase the PR with the new changes and update.
Connection pooling implementation (host:port).

TODO: add `WithMaxConnections` as an option so that it can be defined once when the connection is first initialized via the SDK. Currently fixed to 10.

Summary by CodeRabbit
New Features
Chores