feat(csharp): Add retry-after behavior for 503 responses in Spark ADBC driver #2664
base: main
Conversation
Thanks for the change! I've left feedback, some more optional than others.
Can we hold off submitting this until a decision is made on #2672?
Force-pushed from 6d064ee to 6906845 (Compare)
Add retry-after handling in ADBC Spark driver; fix pre-commit check failures
Force-pushed from 6906845 to 0b4526b (Compare)
Thanks for the change! Please make sure the linter doesn't complain about whitespace issues and that the retry flag is properly handled.
bool tempUnavailableRetryValue = true; // Default to enabled
// Parse retry configuration parameters
if (Properties.TryGetValue(DatabricksParameters.TemporarilyUnavailableRetry, out string? tempUnavailableRetryStr))
{
I assume you'd intended to parse or otherwise analyze the string configuration value? Right now, tempUnavailableRetryValue is never set to anything other than its default value of true.
Thanks for mollifying the linter! This is still an open item.
Description
This PR implements retry-after behavior for the Spark ADBC driver when receiving 503 responses with Retry-After headers. This is particularly useful for Databricks clusters that may return 503 responses when a cluster is starting up or experiencing temporary unavailability.
Changes
- `adbc.spark.temporarily_unavailable_retry` (default: 1 - enabled)
- `adbc.spark.temporarily_unavailable_retry_timeout` (default: 900 seconds)
- New `RetryHttpHandler` class that wraps the existing `HttpClientHandler` to handle 503 responses
- `SparkHttpConnection` updated to use the new retry handler
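As an illustration of how a caller might set these (the values shown are examples only, and the dictionary stands in for whatever property bag the connection is built from):

```csharp
using System.Collections.Generic;

// Example property values only; the defaults are 1 (enabled) and 900 seconds.
var properties = new Dictionary<string, string>
{
    ["adbc.spark.temporarily_unavailable_retry"] = "1",            // keep 503 retries enabled
    ["adbc.spark.temporarily_unavailable_retry_timeout"] = "300",  // stop retrying after 5 minutes
};
```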
Implementation Details
When a 503 response with a Retry-After header is received, the driver waits for the duration indicated by the header and retries the request, giving up once the configured retry timeout is exceeded.
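A minimal sketch of that flow, assuming a `DelegatingHandler`-style wrapper; the class below is illustrative and is not the PR's actual `RetryHttpHandler` (in particular, it only handles the delta form of Retry-After and does not buffer request content for resending):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Illustrative only: retry 503 responses that carry a Retry-After header,
// sleeping for the advertised delay until a cumulative timeout is exceeded.
internal sealed class RetryAfterHandler : DelegatingHandler
{
    private readonly TimeSpan _totalTimeout;

    public RetryAfterHandler(HttpMessageHandler inner, TimeSpan totalTimeout)
        : base(inner) => _totalTimeout = totalTimeout;

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        TimeSpan waited = TimeSpan.Zero;
        while (true)
        {
            HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
            if (response.StatusCode != HttpStatusCode.ServiceUnavailable)
                return response;

            TimeSpan? delay = response.Headers.RetryAfter?.Delta;
            if (delay is null || waited + delay.Value > _totalTimeout)
                return response; // no Retry-After, or retrying would exceed the timeout

            response.Dispose();
            await Task.Delay(delay.Value, cancellationToken);
            waited += delay.Value;
        }
    }
}
```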
Testing
Added unit tests to verify the retry-after behavior and the handling of the new configuration parameters.
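As a rough illustration of the kind of test involved (the stub handler and names are hypothetical, reusing the `RetryAfterHandler` sketch above rather than the PR's actual classes):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;
using Xunit;

public class RetryAfterHandlerTests
{
    // Stub transport: answers 503 + Retry-After once, then 200.
    private sealed class FlakyHandler : HttpMessageHandler
    {
        private int _calls;

        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            if (_calls++ == 0)
            {
                var unavailable = new HttpResponseMessage(HttpStatusCode.ServiceUnavailable);
                unavailable.Headers.RetryAfter = new RetryConditionHeaderValue(TimeSpan.FromSeconds(1));
                return Task.FromResult(unavailable);
            }
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK));
        }
    }

    [Fact]
    public async Task RetriesOnceThenSucceeds()
    {
        using var client = new HttpClient(new RetryAfterHandler(new FlakyHandler(), TimeSpan.FromSeconds(30)));
        HttpResponseMessage response = await client.GetAsync("http://localhost/ping");
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```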