check _request_information_cache value is a dict before .get-ing from it #3663
base: main
Conversation
Copilot reviewed 2 out of 3 changed files in this pull request and generated no comments.
Files not reviewed (1)
- newsfragments/3642.bugfix.rst: Language not supported
Comments suppressed due to low confidence (1)
web3/providers/persistent/persistent.py:336
- The indentation of the if-statement checking for 'error' is inconsistent with the surrounding block, which may cause unexpected behavior. Consider aligning it properly within the 'if isinstance(response, dict):' block.
if "error" in response and request is None:
@@ -318,17 +318,16 @@ def _raise_stray_errors_from_cache(self) -> None:
        for (
            response
        ) in self._request_processor._request_response_cache._data.values():
            if isinstance(response, dict):
                request = (
                    self._request_processor._request_information_cache.get_cache_entry(
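For context, the guard simply skips non-dict entries (batch responses are lists) before calling .get on them. A minimal, standalone sketch of that pattern, using a hypothetical helper rather than the provider's internals:

```python
from typing import Any, Dict, List, Union

# A cached response is either a single JSON-RPC response (dict) or a
# batch response (list of dicts).
CachedResponse = Union[Dict[str, Any], List[Dict[str, Any]]]


def stray_errors(cached_responses: List[CachedResponse]) -> List[Dict[str, Any]]:
    """Collect error responses, skipping list-shaped (batch) entries
    that cannot be .get-ed from."""
    errors = []
    for response in cached_responses:
        # Only dict responses can carry a top-level "error" key we can inspect.
        if isinstance(response, dict) and "error" in response:
            errors.append(response)
    return errors


assert stray_errors([[{"id": 1}], {"id": 2, "error": {"code": -32000}}]) == [
    {"id": 2, "error": {"code": -32000}}
]
```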
Looking again, I think this check is fine to keep. But ultimately, the issue seems to be that the provider's make_batch_request (or async_make_batch_request) is not setting the _is_batching state. This state is only being set within the RequestBatcher.

Since those batch methods are public methods, and we should make sure we are within the batching state when we make those calls, we should ensure that before making the request the state is set to _is_batching=True, and that, whether or not the call succeeds, we reset the state to False.

So I think a try / finally around the actual request seems appropriate.

Does that make sense? That would ultimately have resolved this issue, and we shouldn't need to check that the response is a dict if we are not in a batching state (which we already had the check for in this code block).
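As a rough illustration of that suggestion, here is a toy class (not web3.py's actual provider) showing the try / finally state handling:

```python
class BatchingProvider:
    """Toy stand-in for a provider, showing the suggested state handling."""

    def __init__(self) -> None:
        self._is_batching = False

    def make_batch_request(self, requests: list) -> list:
        # Enter the batching state for the duration of the call and reset it
        # even if encoding or the request itself raises.
        self._is_batching = True
        try:
            return [{"id": i, "result": r} for i, r in enumerate(requests)]
        finally:
            self._is_batching = False


provider = BatchingProvider()
print(provider.make_batch_request(["eth_blockNumber"]))
assert provider._is_batching is False
```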
@fselmo I've added the decorator and applied it across all instances of batching.
Looks good. I wanted to suggest the decorator as I think that could DRY things up and reduce common surface area to one place.
Is there a reason to only apply it to the persistent provider?
This should be applied to all providers that support batching (as you did) 👍🏼
        self._is_batching = True
        request_data = self.encode_batch_rpc_request(requests)
        response = cast(List[RPCResponse], self._make_request(request_data))
        return sort_batch_response_by_response_ids(response)
What do you think about creating a decorator that we can use on these methods that does the try / finally, so we don't have to repeat this code all around? Just a thought, something like @batching_context / @async_batching_context?
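A minimal sketch of what such decorators could look like (names follow the suggestion above; the actual implementation in web3.py may differ):

```python
import functools
from typing import Any, Callable


def batching_context(method: Callable[..., Any]) -> Callable[..., Any]:
    """Set self._is_batching for the duration of a sync batch method."""

    @functools.wraps(method)
    def inner(self: Any, *args: Any, **kwargs: Any) -> Any:
        self._is_batching = True
        try:
            return method(self, *args, **kwargs)
        finally:
            self._is_batching = False

    return inner


def async_batching_context(method: Callable[..., Any]) -> Callable[..., Any]:
    """Async counterpart for coroutine methods such as async_make_batch_request."""

    @functools.wraps(method)
    async def inner(self: Any, *args: Any, **kwargs: Any) -> Any:
        self._is_batching = True
        try:
            return await method(self, *args, **kwargs)
        finally:
            self._is_batching = False

    return inner
```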
@pytest.mark.asyncio
async def test_raise_stray_errors_from_cache_handles_list_response():
Is this testing that we don't raise errors from the cache if it's in a list form (batch request)? I think that's the case, right? If so, we should clarify that in the test name.

And we can remove the try / except because that would just raise an error naturally. Maybe have a comment saying:

    # assert no errors raised
    provider._raise_stray_errors_from_cache()
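A hypothetical shape of the revised test along those lines (the provider fixture and the cache seeding are assumptions, not the actual test code):

```python
import pytest


@pytest.mark.asyncio
async def test_raise_stray_errors_from_cache_does_not_raise_for_batch_list_response(
    provider,
):
    # Seed the response cache with a list-shaped (batch) entry.
    provider._request_processor._request_response_cache._data = {
        "key": [{"id": 1, "result": "0x1"}]
    }
    # assert no errors raised
    provider._raise_stray_errors_from_cache()
```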
What was wrong?
Mixing batching and non-batching calls was causing issues.
Closes #3642
How was it fixed?
Check that the value from the cache is a dict before .get-ting from it.
Todo:
Cute Animal Picture