feat(hedging): support hedging in order to reduce latency when querying endpoints #1081
Conversation
…g endpoints Signed-off-by: Ahmet DEMIR <[email protected]>
jeevatkm
left a comment
@ahmet2mir Thanks for the PR. I apologize for the delayed response. I’ll review your PR notes and implementation, and I’ll get back to you soon.
jeevatkm
left a comment
@ahmet2mir, I’ve reviewed the PR code changes and your notes. Please find my perspective and suggestions for code improvements.
- It’s great to understand your thought process and design insights. Thanks for sharing about Grafana, etc.
- I wouldn’t recommend disabling retry by default. However, if we disable it by default and provide an option to enable it, then I’m okay with that. Having a fallback mechanism for retry is practical, rather than creating a new Resty client, which isn’t an appropriate approach. Once all the hedged requests fail, a full retry loop can take over.
- I don’t include any libraries in Resty except Go’s built-in libraries and packages from golang.org.
- I agree with you to restrict support to read-only HTTP methods. However, we should also provide an option to enable other methods, similar to how Resty does in many places. This way, users have the flexibility to choose the methods they want to use. Maybe it’s more appropriate to rename the method to `isReadOnlyMethod` instead of `isSafeMethod`.
- I suggest removing the existing new fields from the client and instead adding them to the unexported hedging struct. This way, we’ll have only one new field for hedging. When enabling, we should create an instance of the hedging struct and assign it to the client (see the sketch after this list).
- I need to think about the approach of “how to keep the hedging flow and keep the existing flow as a fallback flow.” I need some time to think about it. Of course, feel free to suggest one.
- I reviewed the source code at https://github.com/cristalhq/hedgedhttp and the PR implementation. I noticed significant changes and improvements. Therefore, I suggest the following (also, please add a single blank line before the line `package resty`):

```go
// Copyright (c) 2015-present Jeevanandam M ([email protected]), All rights reserved.
// 2025 Ahmet Demir (https://github.com/ahmet2mir)
// resty source code and usage is governed by a MIT style
// license that can be found in the LICENSE file.
// SPDX-License-Identifier: MIT

package resty

// This hedging implementation draws inspiration from the reference provided here: https://github.com/cristalhq/hedgedhttp.
...
```

- In the go.mod, keep only `go 1.23.0`; please remove the `go 1.24.0` line and the toolchain line.
- It’s fine to keep the hedging server within the `hedging_test.go` file. However, please avoid duplicating the code within the test functions. Instead, I suggest creating a `createHedgingServer` method and using the `createTestServer` method to wrap the handler function instead of `httptest.NewServer(http.HandlerFunc(`.
- Do we really need to use the rate limiter package here for the wait?
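To make the single-field suggestion above concrete, here is a minimal sketch of what the unexported hedging struct could look like inside package resty. The struct name, field names, and the meaning of the three `EnableHedging` parameters are assumptions drawn from this conversation, not the final API.

```go
// Sketch of additions inside package resty; Client is resty's existing type.
// All names below (hedging, delay, upto, rate, c.hedging) are illustrative
// assumptions, not the final API.
type hedging struct {
	delay time.Duration // wait before firing the next hedged attempt
	upto  int           // upper bound on hedged attempts for one request
	rate  float64       // assumed: rate limit applied to the hedging wait
}

// EnableHedging then only needs to create the instance and assign it,
// leaving a single new unexported field on the Client.
func (c *Client) EnableHedging(delay time.Duration, upto int, rate float64) *Client {
	c.hedging = &hedging{delay: delay, upto: upto, rate: rate}
	return c
}
```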
Force-pushed from 22d847a to 0fa01f9
Hi, thanks for the detailed feedback, I tried to fix things based on your remarks.

```go
client.
    EnableHedging(50*time.Millisecond, 5, 10.0).
    SetRetryCount(1).
    SetRetryWaitTime(2 * time.Second).
    SetRetryMaxWaitTime(5 * time.Second)
```

Any function using e.g. will show
Hedging returns the first response to complete, not the first successful response. For example, request #1 would have succeeded after 10 seconds, but it was cancelled when request #2 completed with an error at 5 seconds. A good approach is hedging with up to 3 requests (if we consider that every service runs in at least 3 locations with good load balancing). Example on errors greater than or equal to 500, ignoring 4XX errors:

```go
client.
    EnableHedging(1*time.Second, 3, 3.0).
    SetRetryCount(3).
    SetRetryWaitTime(2 * time.Second).
    SetRetryMaxWaitTime(5 * time.Second).
    AddRetryCondition(func(r *resty.Response, err error) bool {
        return r.StatusCode() >= 500
    })
```
…ging func if not enabled, restore retry, allow unsafe funcs Signed-off-by: Ahmet DEMIR <[email protected]>
Force-pushed from 0fa01f9 to 14669c5
Hello, this is a first approach to hedging.
On the design, I followed what Grafana does (as they have a lot of experience with HTTP requests: dashboards/connectors, Mimir, Loki, k6, etc.), and I tried to keep the client customizable.
IMHO hedging and retry must be mutually exclusive to avoid overloading a server. On the user side, they could init a client with hedging and, if the request fails, create a new Resty client with "standard" retry.
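As a rough illustration of that user-side fallback (not code from the PR), a hedged client could be tried first, with a separate client using standard retry created only when the hedged request fails. The `fetch` helper and the v3 import path below are assumptions.

```go
package example // hypothetical package just for this sketch

import (
	"time"

	resty "resty.dev/v3" // assumed v3 module path
)

// fetch shows the fallback pattern described above: hedging first, then a
// fresh client with standard retry if the hedged request did not succeed.
func fetch(url string) (*resty.Response, error) {
	hedged := resty.New().EnableHedging(50*time.Millisecond, 3, 3.0)
	resp, err := hedged.R().Get(url)
	if err == nil && resp.IsSuccess() {
		return resp, nil
	}

	// Hedging did not produce a usable response; fall back to plain retry.
	fallback := resty.New().
		SetRetryCount(3).
		SetRetryWaitTime(2 * time.Second).
		SetRetryMaxWaitTime(5 * time.Second)
	return fallback.R().Get(url)
}
```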
Regarding "write" operations on requests, I prefer not to enable hedging for them, only for reads; on write operations it could be dangerous (cristalhq accepts writes).
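For that read-only restriction, the check could look roughly like this. The name `isReadOnlyMethod` follows the rename suggested in the review, and the exact method set is an assumption (RFC 9110 also lists TRACE as safe, for instance); it assumes `net/http` is imported.

```go
// isReadOnlyMethod reports whether hedging is allowed for the given HTTP
// method by default. Illustrative only; the real PR may use a different set
// plus an option to opt other methods in.
func isReadOnlyMethod(method string) bool {
	switch method {
	case http.MethodGet, http.MethodHead, http.MethodOptions:
		return true
	default:
		return false
	}
}
```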
On the testing part, I created a hedging server, but I could move the URI to `createGenericServer` if that's better.
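A possible shape for that test helper, following the review suggestion to reuse `createTestServer` (which wraps `httptest.NewServer(http.HandlerFunc(...))`) rather than duplicating the handler in every test; the helper name, delay parameter, and handler body are assumptions.

```go
// createHedgingServer returns a test server whose artificial delay makes it
// easy to observe a hedged attempt winning over the first one.
// Assumes resty's existing createTestServer(fn) test helper; illustrative only.
func createHedgingServer(delay time.Duration) *httptest.Server {
	return createTestServer(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(delay) // simulate a slow endpoint
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte("TestHedging: response"))
	})
}
```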
I didn't include the hedging lib (no commit since last year, but almost stable I think), but I tried to follow your styling and way of splitting the code. Let me know if you prefer it that way or if I should use the external lib.
Simple test
I'm open to discussion if you have any concerns or enhancements.
Closes #1050