
Per Partition Circuit Breaker #40302

Status: Draft. Wants to merge 106 commits into base branch `main`.

Commits (106):
ff20cf9
change default read timeout
tvaron3 Feb 6, 2025
40e43c4
fix tests
tvaron3 Feb 6, 2025
faf6c27
Merge branch 'main' of https://github.com/Azure/azure-sdk-for-python …
tvaron3 Feb 6, 2025
aefe30b
Add read timeout tests for database account calls
tvaron3 Feb 6, 2025
9a234f8
fix timeout retry policy
tvaron3 Feb 6, 2025
8859c9f
Fixed the timeout logic
kushagraThapar Feb 6, 2025
8b166fc
Merge pull request #2 from tvaron3/tvaron3/readTimeout
kushagraThapar Feb 6, 2025
ac78da9
Fixed the timeout retry policy
kushagraThapar Feb 6, 2025
e8bc02e
Merge pull request #3 from tvaron3/readtimeout
kushagraThapar Feb 6, 2025
09aac90
Mock tests for timeout and failover retry policy
tvaron3 Feb 6, 2025
48a20fa
Merge branch 'tvaron3/readtimeout' of https://github.com/tvaron3/azur…
FabianMeiswinkel Feb 6, 2025
f22e7d2
Create test_dummy.py
FabianMeiswinkel Feb 7, 2025
dd8a466
Update test_dummy.py
FabianMeiswinkel Feb 7, 2025
8ac11c5
Update test_dummy.py
FabianMeiswinkel Feb 7, 2025
b53e2e9
Update test_dummy.py
FabianMeiswinkel Feb 7, 2025
973ec44
Iterating on fault injection tooling
FabianMeiswinkel Feb 7, 2025
f25af53
Merge branch 'main' of https://github.com/Azure/azure-sdk-for-python …
FabianMeiswinkel Feb 7, 2025
5d72848
Refactoring to have FaultInjectionTransport in its own file
FabianMeiswinkel Feb 7, 2025
8c9aa4b
Update test_dummy.py
FabianMeiswinkel Feb 10, 2025
7260e9d
Reafctoring FaultInjectionTransport
FabianMeiswinkel Feb 18, 2025
bf3e60b
Merge branch 'main' of https://github.com/Azure/azure-sdk-for-python …
FabianMeiswinkel Feb 19, 2025
0705aeb
Iterating on tests
FabianMeiswinkel Feb 19, 2025
baf7aea
Prettifying tests
FabianMeiswinkel Feb 20, 2025
e90b722
small refactoring
FabianMeiswinkel Feb 21, 2025
cb58896
Adding MM topology on Emulator
FabianMeiswinkel Feb 21, 2025
46ec31c
Adding cross region retry tests
FabianMeiswinkel Feb 22, 2025
f03f51f
Add Excluded Locations Feature
allenkim0129 Mar 31, 2025
cf42098
initial ppcb changes
tvaron3 Mar 31, 2025
9d46122
add missing changes
tvaron3 Apr 1, 2025
af6c72b
Merge with main
tvaron3 Apr 1, 2025
4efa9ad
fix mypy errors
tvaron3 Apr 1, 2025
d86d381
Refactored gem for ppcb and hooked up retryconfigurations with failur…
tvaron3 Apr 2, 2025
d6406f0
Merge branch 'main' of https://github.com/Azure/azure-sdk-for-python …
tvaron3 Apr 2, 2025
3d8ed69
merge with main
tvaron3 Apr 2, 2025
9e51011
fix use multiple write locations bug
tvaron3 Apr 2, 2025
1896338
merge excluded locations
tvaron3 Apr 2, 2025
8d27651
clean up and revert vs env variable changes
tvaron3 Apr 2, 2025
90fe5c2
remove async await
tvaron3 Apr 2, 2025
206be78
refactor and fix tests
tvaron3 Apr 2, 2025
622589f
Fix refactoring
tvaron3 Apr 2, 2025
4dd17ea
Fix tests
tvaron3 Apr 3, 2025
e631b74
fix tests
tvaron3 Apr 3, 2025
2d5f0d7
add more tests
tvaron3 Apr 3, 2025
f04a506
add more tests
tvaron3 Apr 3, 2025
bcee9cf
Add tests
tvaron3 Apr 3, 2025
80f4ddd
Merge branch 'main' of https://github.com/Azure/azure-sdk-for-python …
tvaron3 Apr 3, 2025
779f9d1
fix tests
tvaron3 Apr 3, 2025
b4db22e
fix tests
tvaron3 Apr 3, 2025
9a94d02
fix tests
tvaron3 Apr 3, 2025
9471b7a
Merge branch 'users/fabianm/tests' of https://github.com/FabianMeiswi…
tvaron3 Apr 3, 2025
eab1b63
fix test
tvaron3 Apr 3, 2025
93c2d7d
fix test
tvaron3 Apr 3, 2025
345f390
fix tests
tvaron3 Apr 3, 2025
fe74aa0
fix async in test
tvaron3 Apr 3, 2025
5bb9f1f
Added multi-region tests
kushagraThapar Apr 3, 2025
996217a
Fix _AddParitionKey to pass options to sub methods
allenkim0129 Apr 3, 2025
41fc917
Added initial live tests
allenkim0129 Apr 3, 2025
07b8f39
Updated live-platform-matrix for multi-region tests
allenkim0129 Apr 3, 2025
1b09739
initial sync version of fault injection
tvaron3 Apr 3, 2025
0f0a991
Merge branch 'users/fabianm/tests' of https://github.com/tvaron3/azur…
tvaron3 Apr 3, 2025
2fb3dc9
add all sync tests
tvaron3 Apr 3, 2025
7b81482
add new error and fix logs
tvaron3 Apr 3, 2025
f355e30
fix test
tvaron3 Apr 3, 2025
3056787
Merge branch 'users/fabianm/tests' of https://github.com/FabianMeiswi…
tvaron3 Apr 3, 2025
8495c51
Add cosmosQuery mark to TestQuery
allenkim0129 Apr 4, 2025
b29980c
Correct spelling
allenkim0129 Apr 4, 2025
5e79172
Fixed live platform matrix syntax
allenkim0129 Apr 4, 2025
fd40cd7
Changed Multi-regions
allenkim0129 Apr 4, 2025
85e1206
first ppcb test
tvaron3 Apr 4, 2025
96124fe
merge with main
tvaron3 Apr 4, 2025
34e3d82
fix test
tvaron3 Apr 7, 2025
ce14666
refactor due to pk range wrapper needing io call and pylint
tvaron3 Apr 7, 2025
7b939e8
Merge branch 'main' of https://github.com/Azure/azure-sdk-for-python …
tvaron3 Apr 7, 2025
b33cfb6
Merge branch 'user/allekim/feature/addExcludedLocations' of https://g…
tvaron3 Apr 7, 2025
29305f4
Added client level ExcludedLocation for async
allenkim0129 Apr 7, 2025
c77b4e7
Update Live test settings
allenkim0129 Apr 7, 2025
d82fa74
Added Async tests
allenkim0129 Apr 7, 2025
5610889
Add more live tests for all other Python versions
allenkim0129 Apr 7, 2025
f4cb8b3
Fix Async test failure
allenkim0129 Apr 8, 2025
e98ab57
add test for failure_rate threshold
tvaron3 Apr 8, 2025
9b0236d
Merge branch 'main' into user/allekim/feature/addExcludedLocations
allenkim0129 Apr 8, 2025
4f08168
Fix live test failures
allenkim0129 Apr 8, 2025
36407c6
fix pylint and cspell
tvaron3 Apr 8, 2025
4e2fd6b
Fix live test failures
allenkim0129 Apr 8, 2025
1baf872
fix pylint
tvaron3 Apr 8, 2025
e0dab29
Fix live test failures
allenkim0129 Apr 8, 2025
798c12f
Add test_delete_all_items_by_partition_key
allenkim0129 Apr 8, 2025
2c5b8fc
Remove test_delete_all_items_by_partition_key
allenkim0129 Apr 8, 2025
739e090
fix and add tests
tvaron3 Apr 9, 2025
d5c380a
add collection rid to batch
tvaron3 Apr 9, 2025
e7f7265
add partition key range id to partition key range to cache
tvaron3 Apr 9, 2025
38f8033
address failures
tvaron3 Apr 9, 2025
828a99b
update tests
tvaron3 Apr 9, 2025
2b9b58f
Added missing doc for excluded_locations in async client
allenkim0129 Apr 10, 2025
1c98b48
Remove duplicate functions
allenkim0129 Apr 10, 2025
b5accfa
add more operations
tvaron3 Apr 10, 2025
8324a71
Fix live tests with multi write locations
allenkim0129 Apr 11, 2025
b65f07d
Fixed bug with endpoint routing with multi write region partition key…
allenkim0129 Apr 11, 2025
4a144d9
Adding emulator tests for delete_all_items_by_partition_key API
allenkim0129 Apr 11, 2025
9c68f75
minimized duplicate codes
allenkim0129 Apr 14, 2025
225bc26
Added Async emulator tests
allenkim0129 Apr 14, 2025
a6e556e
Merge branch 'main' into user/allekim/feature/addExcludedLocations
allenkim0129 Apr 14, 2025
5f2c5a0
Nit: Changed test names
allenkim0129 Apr 14, 2025
c3e39e5
Addressed comments about documents
allenkim0129 Apr 15, 2025
fc8e58a
Merge branch 'user/allekim/feature/addExcludedLocations' of https://g…
tvaron3 Apr 16, 2025
39a464c
live tests
tvaron3 Apr 17, 2025
Files changed
1 change: 1 addition & 0 deletions sdk/cosmos/azure-cosmos/CHANGELOG.md
@@ -3,6 +3,7 @@
### 4.10.0b5 (Unreleased)

#### Features Added
* Per partition circuit breaker support. It can be enabled through the environment variable `AZURE_COSMOS_ENABLE_CIRCUIT_BREAKER`. See [PR 40302](https://github.com/Azure/azure-sdk-for-python/pull/40302).

#### Breaking Changes

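As a usage note, a minimal sketch of opting in, assuming the flag is read from the process environment when the client is constructed (the account endpoint and key below are placeholders):

```python
import os

# Opt in before creating the client; the variable name comes from the
# changelog entry above, and "True"/"False" mirror the documented default.
os.environ["AZURE_COSMOS_ENABLE_CIRCUIT_BREAKER"] = "True"

from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://<account>.documents.azure.com:443/",  # placeholder endpoint
    credential="<account-key>",                        # placeholder key
)
```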
1 change: 1 addition & 0 deletions sdk/cosmos/azure-cosmos/azure/cosmos/_base.py
@@ -63,6 +63,7 @@
'priority': 'priorityLevel',
'no_response': 'responsePayloadOnWriteDisabled',
'max_item_count': 'maxItemCount',
'excluded_locations': 'excludedLocations',
}

# Cosmos resource ID validation regex breakdown:
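As a hypothetical illustration of what this mapping enables, a per-request `excluded_locations` keyword would surface to the pipeline as the `excludedLocations` option; the `container` object and region name below are assumptions, not part of this diff:

```python
# Hypothetical call site: `container` is an existing ContainerProxy.
# The option mapping above carries the keyword into the request options.
items = list(container.query_items(
    query="SELECT * FROM c",
    partition_key="pk-value",
    excluded_locations=["West US"],  # skip this region for this request only
))
```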
12 changes: 12 additions & 0 deletions sdk/cosmos/azure-cosmos/azure/cosmos/_constants.py
@@ -51,6 +51,18 @@ class _Constants:
HS_MAX_ITEMS_CONFIG_DEFAULT: int = 1000
MAX_ITEM_BUFFER_VS_CONFIG: str = "AZURE_COSMOS_MAX_ITEM_BUFFER_VECTOR_SEARCH"
MAX_ITEM_BUFFER_VS_CONFIG_DEFAULT: int = 50000
CIRCUIT_BREAKER_ENABLED_CONFIG: str = "AZURE_COSMOS_ENABLE_CIRCUIT_BREAKER"
CIRCUIT_BREAKER_ENABLED_CONFIG_DEFAULT: str = "False"
# Only applicable when circuit breaker is enabled -------------------------
CONSECUTIVE_ERROR_COUNT_TOLERATED_FOR_READ: str = "AZURE_COSMOS_CONSECUTIVE_ERROR_COUNT_TOLERATED_FOR_READ"
CONSECUTIVE_ERROR_COUNT_TOLERATED_FOR_READ_DEFAULT: int = 10
CONSECUTIVE_ERROR_COUNT_TOLERATED_FOR_WRITE: str = "AZURE_COSMOS_CONSECUTIVE_ERROR_COUNT_TOLERATED_FOR_WRITE"
CONSECUTIVE_ERROR_COUNT_TOLERATED_FOR_WRITE_DEFAULT: int = 5
FAILURE_PERCENTAGE_TOLERATED = "AZURE_COSMOS_FAILURE_PERCENTAGE_TOLERATED"
FAILURE_PERCENTAGE_TOLERATED_DEFAULT: int = 90
STALE_PARTITION_UNAVAILABILITY_CHECK = "AZURE_COSMOS_STALE_PARTITION_UNAVAILABILITY_CHECK_IN_SECONDS"
STALE_PARTITION_UNAVAILABILITY_CHECK_DEFAULT: int = 120
# -------------------------------------------------------------------------

# Error code translations
ERROR_TRANSLATIONS: Dict[int, str] = {
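The paired `*_CONFIG`/`*_DEFAULT` constants above suggest a simple environment-with-fallback lookup. A sketch of that pattern, using a hypothetical helper that is not part of this PR:

```python
import os

def _env_int(name: str, default: int) -> int:
    # Hypothetical helper: read an integer tolerance from the environment,
    # falling back to the default when the variable is unset or malformed.
    try:
        return int(os.environ.get(name, default))
    except ValueError:
        return default

# Resolving the circuit breaker tolerances defined above.
read_failures_tolerated = _env_int(
    "AZURE_COSMOS_CONSECUTIVE_ERROR_COUNT_TOLERATED_FOR_READ", 10)
write_failures_tolerated = _env_int(
    "AZURE_COSMOS_CONSECUTIVE_ERROR_COUNT_TOLERATED_FOR_WRITE", 5)
failure_percentage_tolerated = _env_int(
    "AZURE_COSMOS_FAILURE_PERCENTAGE_TOLERATED", 90)
```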
65 changes: 42 additions & 23 deletions sdk/cosmos/azure-cosmos/azure/cosmos/_cosmos_client_connection.py
@@ -48,7 +48,7 @@
HttpResponse # pylint: disable=no-legacy-azure-core-http-response-import

from . import _base as base
from . import _global_endpoint_manager as global_endpoint_manager
from ._global_partition_endpoint_manager_circuit_breaker import _GlobalPartitionEndpointManagerForCircuitBreaker
from . import _query_iterable as query_iterable
from . import _runtime_constants as runtime_constants
from . import _session
@@ -164,7 +164,7 @@ def __init__( # pylint: disable=too-many-statements
self.last_response_headers: CaseInsensitiveDict = CaseInsensitiveDict()

self.UseMultipleWriteLocations = False
self._global_endpoint_manager = global_endpoint_manager._GlobalEndpointManager(self)
self._global_endpoint_manager = _GlobalPartitionEndpointManagerForCircuitBreaker(self)

retry_policy = None
if isinstance(self.connection_policy.ConnectionRetryConfiguration, HTTPPolicy):
@@ -2043,7 +2043,8 @@ def PatchItem(
headers = base.GetHeaders(self, self.default_headers, "patch", path, document_id, resource_type,
documents._OperationType.Patch, options)
# Patch will use WriteEndpoint since it uses PUT operation
request_params = RequestObject(resource_type, documents._OperationType.Patch)
request_params = RequestObject(resource_type, documents._OperationType.Patch, headers)
request_params.set_excluded_location_from_options(options)
request_data = {}
if options.get("filterPredicate"):
request_data["condition"] = options.get("filterPredicate")
@@ -2131,7 +2132,8 @@ def _Batch(
base._populate_batch_headers(initial_headers)
headers = base.GetHeaders(self, initial_headers, "post", path, collection_id, "docs",
documents._OperationType.Batch, options)
request_params = RequestObject("docs", documents._OperationType.Batch)
request_params = RequestObject("docs", documents._OperationType.Batch, headers)
request_params.set_excluded_location_from_options(options)
return cast(
Tuple[List[Dict[str, Any]], CaseInsensitiveDict],
self.__Post(path, request_params, batch_operations, headers, **kwargs)
@@ -2190,8 +2192,9 @@ def DeleteAllItemsByPartitionKey(
path = '{}{}/{}'.format(path, "operations", "partitionkeydelete")
collection_id = base.GetResourceIdOrFullNameFromLink(collection_link)
headers = base.GetHeaders(self, self.default_headers, "post", path, collection_id,
"partitionkey", documents._OperationType.Delete, options)
request_params = RequestObject("partitionkey", documents._OperationType.Delete)
http_constants.ResourceType.PartitionKey, documents._OperationType.Delete, options)
request_params = RequestObject(http_constants.ResourceType.PartitionKey, documents._OperationType.Delete, headers)
request_params.set_excluded_location_from_options(options)
_, last_response_headers = self.__Post(
path=path,
request_params=request_params,
@@ -2362,7 +2365,8 @@ def ExecuteStoredProcedure(
documents._OperationType.ExecuteJavaScript, options)

# ExecuteStoredProcedure will use WriteEndpoint since it uses POST operation
request_params = RequestObject("sprocs", documents._OperationType.ExecuteJavaScript)
request_params = RequestObject("sprocs", documents._OperationType.ExecuteJavaScript, headers)
request_params.set_excluded_location_from_options(options)
result, self.last_response_headers = self.__Post(path, request_params, params, headers, **kwargs)
return result

@@ -2558,7 +2562,7 @@ def GetDatabaseAccount(

headers = base.GetHeaders(self, self.default_headers, "get", "", "", "",
documents._OperationType.Read,{}, client_id=self.client_id)
request_params = RequestObject("databaseaccount", documents._OperationType.Read, url_connection)
request_params = RequestObject("databaseaccount", documents._OperationType.Read, headers, url_connection)
result, last_response_headers = self.__Get("", request_params, headers, **kwargs)
self.last_response_headers = last_response_headers
database_account = DatabaseAccount()
@@ -2607,7 +2611,7 @@ def _GetDatabaseAccountCheck(

headers = base.GetHeaders(self, self.default_headers, "get", "", "", "",
documents._OperationType.Read,{}, client_id=self.client_id)
request_params = RequestObject("databaseaccount", documents._OperationType.Read, url_connection)
request_params = RequestObject("databaseaccount", documents._OperationType.Read, headers, url_connection)
self.__Get("", request_params, headers, **kwargs)


@@ -2646,7 +2650,8 @@ def Create(
options)
# Create will use WriteEndpoint since it uses POST operation

request_params = RequestObject(typ, documents._OperationType.Create)
request_params = RequestObject(typ, documents._OperationType.Create, headers)
request_params.set_excluded_location_from_options(options)
result, last_response_headers = self.__Post(path, request_params, body, headers, **kwargs)
self.last_response_headers = last_response_headers

@@ -2692,7 +2697,8 @@ def Upsert(
headers[http_constants.HttpHeaders.IsUpsert] = True

# Upsert will use WriteEndpoint since it uses POST operation
request_params = RequestObject(typ, documents._OperationType.Upsert)
request_params = RequestObject(typ, documents._OperationType.Upsert, headers)
request_params.set_excluded_location_from_options(options)
result, last_response_headers = self.__Post(path, request_params, body, headers, **kwargs)
self.last_response_headers = last_response_headers
# update session for write request
@@ -2735,7 +2741,8 @@ def Replace(
headers = base.GetHeaders(self, initial_headers, "put", path, id, typ, documents._OperationType.Replace,
options)
# Replace will use WriteEndpoint since it uses PUT operation
request_params = RequestObject(typ, documents._OperationType.Replace)
request_params = RequestObject(typ, documents._OperationType.Replace, headers)
request_params.set_excluded_location_from_options(options)
result, last_response_headers = self.__Put(path, request_params, resource, headers, **kwargs)
self.last_response_headers = last_response_headers

@@ -2776,7 +2783,8 @@ def Read(
initial_headers = initial_headers or self.default_headers
headers = base.GetHeaders(self, initial_headers, "get", path, id, typ, documents._OperationType.Read, options)
# Read will use ReadEndpoint since it uses GET operation
request_params = RequestObject(typ, documents._OperationType.Read)
request_params = RequestObject(typ, documents._OperationType.Read, headers)
request_params.set_excluded_location_from_options(options)
result, last_response_headers = self.__Get(path, request_params, headers, **kwargs)
self.last_response_headers = last_response_headers
if response_hook:
@@ -2815,7 +2823,8 @@ def DeleteResource(
headers = base.GetHeaders(self, initial_headers, "delete", path, id, typ, documents._OperationType.Delete,
options)
# Delete will use WriteEndpoint since it uses DELETE operation
request_params = RequestObject(typ, documents._OperationType.Delete)
request_params = RequestObject(typ, documents._OperationType.Delete, headers)
request_params.set_excluded_location_from_options(options)
result, last_response_headers = self.__Delete(path, request_params, headers, **kwargs)
self.last_response_headers = last_response_headers

@@ -3047,23 +3056,27 @@ def __GetBodiesFromQueryResult(result: Dict[str, Any]) -> List[Dict[str, Any]]:
initial_headers = self.default_headers.copy()
# Copy to make sure that default_headers won't be changed.
if query is None:
op_typ = documents._OperationType.QueryPlan if is_query_plan else documents._OperationType.ReadFeed
# Query operations will use ReadEndpoint even though it uses GET(for feed requests)
request_params = RequestObject(
resource_type,
documents._OperationType.QueryPlan if is_query_plan else documents._OperationType.ReadFeed
)
headers = base.GetHeaders(
self,
initial_headers,
"get",
path,
resource_id,
resource_type,
request_params.operation_type,
op_typ,
options,
partition_key_range_id
)

request_params = RequestObject(
resource_type,
op_typ,
headers
)
request_params.set_excluded_location_from_options(options)

change_feed_state: Optional[ChangeFeedState] = options.get("changeFeedState")
if change_feed_state is not None:
change_feed_state.populate_request_headers(self._routing_map_provider, headers)
@@ -3089,7 +3102,6 @@ def __GetBodiesFromQueryResult(result: Dict[str, Any]) -> List[Dict[str, Any]]:
raise SystemError("Unexpected query compatibility mode.")

# Query operations will use ReadEndpoint even though it uses POST(for regular query operations)
request_params = RequestObject(resource_type, documents._OperationType.SqlQuery)
req_headers = base.GetHeaders(
self,
initial_headers,
Expand All @@ -3102,6 +3114,9 @@ def __GetBodiesFromQueryResult(result: Dict[str, Any]) -> List[Dict[str, Any]]:
partition_key_range_id
)

request_params = RequestObject(resource_type, documents._OperationType.SqlQuery, req_headers)
request_params.set_excluded_location_from_options(options)

# check if query has prefix partition key
isPrefixPartitionQuery = kwargs.pop("isPrefixPartitionQuery", None)
if isPrefixPartitionQuery:
@@ -3256,7 +3271,7 @@ def _AddPartitionKey(
options: Mapping[str, Any]
) -> Dict[str, Any]:
collection_link = base.TrimBeginningAndEndingSlashes(collection_link)
partitionKeyDefinition = self._get_partition_key_definition(collection_link)
partitionKeyDefinition = self._get_partition_key_definition(collection_link, options)
new_options = dict(options)
# If the collection doesn't have a partition key definition, skip it as it's a legacy collection
if partitionKeyDefinition:
@@ -3358,15 +3373,19 @@ def _UpdateSessionIfRequired(
# update session
self.session.update_session(response_result, response_headers)

def _get_partition_key_definition(self, collection_link: str) -> Optional[Dict[str, Any]]:
def _get_partition_key_definition(
self,
collection_link: str,
options: Mapping[str, Any]
) -> Optional[Dict[str, Any]]:
partition_key_definition: Optional[Dict[str, Any]]
# If the document collection link is present in the cache, then use the cached partitionkey definition
if collection_link in self.__container_properties_cache:
cached_container: Dict[str, Any] = self.__container_properties_cache.get(collection_link, {})
partition_key_definition = cached_container.get("partitionKey")
# Else read the collection from backend and add it to the cache
else:
container = self.ReadContainer(collection_link)
container = self.ReadContainer(collection_link, options)
partition_key_definition = container.get("partitionKey")
self.__container_properties_cache[collection_link] = _set_properties_cache(container)
return partition_key_definition
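Every operation in this file now follows the same two-step pattern: construct the `RequestObject` with the request headers, then copy any excluded locations out of the per-request options. A rough sketch of what the reshaped `RequestObject` plausibly looks like; parameter and attribute names beyond those visible in the diff are assumptions:

```python
from typing import Any, Mapping, Optional

class RequestObject:
    # Sketch only: shaped to match the constructor calls visible in this diff.
    def __init__(self, resource_type: str, operation_type: str,
                 headers: Mapping[str, Any],
                 endpoint_override: Optional[str] = None) -> None:  # name assumed
        self.resource_type = resource_type
        self.operation_type = operation_type
        self.headers = headers
        self.endpoint_override = endpoint_override
        self.excluded_locations: Optional[Any] = None

    def set_excluded_location_from_options(self, options: Mapping[str, Any]) -> None:
        # Assumed behavior: lift the `excludedLocations` option (mapped from
        # the `excluded_locations` keyword in _base.py) onto the request so
        # the endpoint manager can route around those regions.
        if options:
            self.excluded_locations = options.get("excludedLocations")
```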
23 changes: 12 additions & 11 deletions sdk/cosmos/azure-cosmos/azure/cosmos/_global_endpoint_manager.py
@@ -30,8 +30,10 @@

from . import _constants as constants
from . import exceptions
from ._request_object import RequestObject
from ._routing.routing_range import PartitionKeyRangeWrapper
from .documents import DatabaseAccount
from ._location_cache import LocationCache
from ._location_cache import LocationCache, current_time_millis


# pylint: disable=protected-access
@@ -50,19 +52,14 @@ def __init__(self, client):
self.DefaultEndpoint = client.url_connection
self.refresh_time_interval_in_ms = self.get_refresh_time_interval_in_ms_stub()
self.location_cache = LocationCache(
self.PreferredLocations,
self.DefaultEndpoint,
self.EnableEndpointDiscovery,
client.connection_policy.UseMultipleWriteLocations
client.connection_policy
)
self.refresh_needed = False
self.refresh_lock = threading.RLock()
self.last_refresh_time = 0
self._database_account_cache = None

def get_use_multiple_write_locations(self):
return self.location_cache.can_use_multiple_write_locations()

def get_refresh_time_interval_in_ms_stub(self):
return constants._Constants.DefaultEndpointsRefreshTime

@@ -72,7 +69,11 @@ def get_write_endpoint(self):
def get_read_endpoint(self):
return self.location_cache.get_read_regional_routing_context()

def resolve_service_endpoint(self, request):
def resolve_service_endpoint(
self,
request: RequestObject,
pk_range_wrapper: PartitionKeyRangeWrapper # pylint: disable=unused-argument
) -> str:
return self.location_cache.resolve_service_endpoint(request)

def mark_endpoint_unavailable_for_read(self, endpoint, refresh_cache):
@@ -98,7 +99,7 @@ def update_location_cache(self):
self.location_cache.update_location_cache()

def refresh_endpoint_list(self, database_account, **kwargs):
if self.location_cache.current_time_millis() - self.last_refresh_time > self.refresh_time_interval_in_ms:
if current_time_millis() - self.last_refresh_time > self.refresh_time_interval_in_ms:
self.refresh_needed = True
if self.refresh_needed:
with self.refresh_lock:
Expand All @@ -114,11 +115,11 @@ def _refresh_endpoint_list_private(self, database_account=None, **kwargs):
if database_account:
self.location_cache.perform_on_database_account_read(database_account)
self.refresh_needed = False
self.last_refresh_time = self.location_cache.current_time_millis()
self.last_refresh_time = current_time_millis()
else:
if self.location_cache.should_refresh_endpoints() or self.refresh_needed:
self.refresh_needed = False
self.last_refresh_time = self.location_cache.current_time_millis()
self.last_refresh_time = current_time_millis()
# this will perform getDatabaseAccount calls to check endpoint health
self._endpoints_health_check(**kwargs)

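One mechanical change worth noting in this file: `current_time_millis()` is now a module-level helper imported from `_location_cache` rather than a method on the cache. A condensed sketch of the time-gated refresh it drives; the default interval value below is an assumption for illustration:

```python
import time

def current_time_millis() -> int:
    # Same shape as the helper the diff imports from _location_cache.
    return int(round(time.time() * 1000))

class EndpointRefreshGate:
    # Sketch of the gate in refresh_endpoint_list above.
    def __init__(self, refresh_time_interval_in_ms: int = 300_000) -> None:  # interval assumed
        self.refresh_time_interval_in_ms = refresh_time_interval_in_ms
        self.last_refresh_time = 0
        self.refresh_needed = False

    def should_refresh(self) -> bool:
        # Mark a refresh as needed once the interval has elapsed.
        if current_time_millis() - self.last_refresh_time > self.refresh_time_interval_in_ms:
            self.refresh_needed = True
        return self.refresh_needed

    def mark_refreshed(self) -> None:
        self.refresh_needed = False
        self.last_refresh_time = current_time_millis()
```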