check _request_information_cache value is a dict before .get-ing from it #3663


Open — wants to merge 2 commits into main

Conversation

@pacrob (Contributor) commented Apr 14, 2025

What was wrong?

Mixing batching and non-batching calls was causing issues.

Closes #3642

How was it fixed?

Check that the value from the cache is a dict before getting from it.
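A minimal sketch of that check, using hypothetical, simplified names (the real method lives on the persistent provider and reads from `_request_response_cache`): batch responses are cached as lists, so only dict-shaped entries should be inspected for a stray `"error"` key.

```python
# Hypothetical, simplified sketch of the fix -- not web3.py's actual code.
# Batch responses are cached as lists; only dict entries can carry an
# "error" key, so list-shaped entries are skipped instead of .get()-ed.
def raise_stray_errors_from_cache(response_cache: dict) -> None:
    for response in response_cache.values():
        if isinstance(response, dict):  # skip list-shaped batch responses
            if "error" in response:
                raise Exception(response["error"])


# A cached batch (list) response no longer blows up the check:
raise_stray_errors_from_cache({"0x1": [{"result": "0xa"}, {"result": "0xb"}]})
```

Before the fix, iterating a cache that mixed single (dict) and batch (list) responses would fail when the code tried to read `"error"` out of a list.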

Todo:

  • Clean up commit history
  • Add or update documentation related to these changes
  • Add entry to the release notes

Cute Animal Picture

(image omitted)

@pacrob requested review from kclowes, fselmo, Copilot, and reedsa, and removed the request for kclowes (Apr 14, 2025, 22:17)
Copilot AI left a comment

Copilot reviewed 2 out of 3 changed files in this pull request and generated no comments.

Files not reviewed (1)
  • newsfragments/3642.bugfix.rst: Language not supported
Comments suppressed due to low confidence (1)

web3/providers/persistent/persistent.py:336

  • The indentation of the if-statement checking for 'error' is inconsistent with the surrounding block, which may cause unexpected behavior. Consider aligning it properly within the 'if isinstance(response, dict):' block.
                      if "error" in response and request is None:

@pacrob self-assigned this Apr 14, 2025
```diff
@@ -318,17 +318,16 @@ def _raise_stray_errors_from_cache(self) -> None:
         for (
             response
         ) in self._request_processor._request_response_cache._data.values():
-            request = (
-                self._request_processor._request_information_cache.get_cache_entry(
+            if isinstance(response, dict):
```
@fselmo (Collaborator) commented Apr 14, 2025

Looking again, I think this check is fine to keep. But ultimately, the issue seems to be that the provider's make_batch_request (or async_make_batch_request) is not setting the _is_batching state; that state is only being set within the RequestBatcher.

Since those batch methods are public, we should ensure we are in the batching state whenever they are called: before making the request, set _is_batching=True, and whether or not the call succeeds, reset it to False.

So a try / finally around the actual request seems appropriate.

Does that make sense? That would ultimately have resolved this issue, and we shouldn't need to check that the response is a dict when we are not in a batching state (the not-batching check was already in this code block).
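The try / finally suggestion above can be sketched roughly like this, on a hypothetical minimal provider (not web3.py's actual class): the flag is set before the request and always reset afterwards, so a failed batch call cannot leave the provider stuck in the batching state.

```python
# Hypothetical minimal provider illustrating the try/finally suggestion.
from typing import Any, List


class BatchingProvider:
    def __init__(self) -> None:
        self._is_batching = False

    def make_batch_request(self, requests: List[Any]) -> List[Any]:
        # Enter the batching state for the duration of the call...
        self._is_batching = True
        try:
            return self._send(requests)
        finally:
            # ...and always reset it, even if _send raises.
            self._is_batching = False

    def _send(self, requests: List[Any]) -> List[Any]:
        # Stand-in for the real transport call.
        return [{"id": i, "result": req} for i, req in enumerate(requests)]
```

With this shape, any subsequent non-batching call sees `_is_batching == False` regardless of whether the previous batch succeeded.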

@pacrob (Contributor, Author) commented Apr 15, 2025

@fselmo I've added the try/finally as suggested. Good as is, or is there a reason to wrap just the sending of the request itself?

I applied it across all instances of make_batch_request for consistency. Is there a reason to only apply it to the persistent provider?

@fselmo (Collaborator) left a comment

Looks good. I wanted to suggest the decorator as I think that could DRY things up and reduce common surface area to one place.

> Is there a reason to only apply it to the persistent provider?

This should be applied to all providers that support batching (as you did) 👍🏼

```python
request_data = self.encode_batch_rpc_request(requests)
response = cast(List[RPCResponse], self._make_request(request_data))
return sort_batch_response_by_response_ids(response)
```

```python
self._is_batching = True
```
A collaborator commented:

What do you think about creating a decorator that we can use on these methods that does the try / finally so we don't have to repeat this code all around? Just a thought, something like @batching_context / @async_batching_context?



```python
@pytest.mark.asyncio
async def test_raise_stray_errors_from_cache_handles_list_response():
```
A collaborator commented:

Is this testing that we don't raise errors from the cache when the response is in list form (a batch request)? I think that's the case, right? If so, we should clarify that in the test name.

And we can remove the try / except because that would just raise an error naturally. Maybe have a comment saying

```python
# assert no errors raised
provider._raise_stray_errors_from_cache()
```
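Putting both suggestions together, the test might look roughly like this, using a hypothetical stand-in provider rather than the real fixture: a descriptive name, no try/except (a stray error would simply fail the test naturally), and just the commented call.

```python
# Hypothetical stand-in for the real provider fixture, illustrating the
# suggested test shape: clearer name, no try/except wrapper.
class FakeProvider:
    def __init__(self, cached_responses):
        self._cached_responses = cached_responses

    def _raise_stray_errors_from_cache(self):
        for response in self._cached_responses:
            if isinstance(response, dict) and "error" in response:
                raise Exception(response["error"])


def test_raise_stray_errors_from_cache_ignores_list_batch_responses():
    # A cached batch response is a list of per-request dicts.
    provider = FakeProvider([[{"result": "0x1"}, {"result": "0x2"}]])
    # assert no errors raised
    provider._raise_stray_errors_from_cache()


test_raise_stray_errors_from_cache_ignores_list_batch_responses()
```

If the implementation regressed and tried to `.get()` from the list, the call would raise and the test would fail on its own, with no assertion scaffolding needed.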


Successfully merging this pull request may close these issues.

7.9.0: Can't make batch requests without provider._is_batching
2 participants