
check _request_information_cache value is a dict before .get-ing from it #3663


Merged
merged 2 commits into ethereum:main
Apr 29, 2025

Conversation

pacrob
Contributor

@pacrob pacrob commented Apr 14, 2025

What was wrong?

Mixing batching and non-batching calls was causing issues.

Closes #3642

How was it fixed?

  • Check that the value from cache is a dict before getting from it.
  • Add try/finally blocks (via a decorator) to all make_batch_request functions to make sure _is_batching is True when called and reset to False afterwards.
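The try/finally-via-decorator fix described above can be sketched roughly like this (a minimal illustration, not the actual web3.py implementation; the names `batching_context` and `FakeProvider` are hypothetical):

```python
import functools


def batching_context(make_batch_request):
    # Hypothetical decorator: set _is_batching before the call and
    # guarantee it is reset afterwards, even if the request raises.
    @functools.wraps(make_batch_request)
    def wrapper(self, *args, **kwargs):
        self._is_batching = True
        try:
            return make_batch_request(self, *args, **kwargs)
        finally:
            # Reset in finally so a failed batch does not leave the
            # provider stuck in batching mode.
            self._is_batching = False

    return wrapper


class FakeProvider:
    # Stand-in for a provider class; only models the batching flag.
    _is_batching = False

    @batching_context
    def make_batch_request(self, requests):
        assert self._is_batching  # flag is on while the batch runs
        return ["result"] * len(requests)
```

With this shape, a non-batching call made after a batch (successful or not) always sees `_is_batching` as `False`.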

Todo:

  • Clean up commit history
  • Add or update documentation related to these changes
  • Add entry to the release notes

Cute Animal Picture

(image attached)

@pacrob pacrob requested review from kclowes, fselmo, Copilot and reedsa and removed request for kclowes April 14, 2025 22:17

@Copilot Copilot AI left a comment


Copilot reviewed 2 out of 3 changed files in this pull request and generated no comments.

Files not reviewed (1)
  • newsfragments/3642.bugfix.rst: Language not supported
Comments suppressed due to low confidence (1)

web3/providers/persistent/persistent.py:336

  • The indentation of the if-statement checking for 'error' is inconsistent with the surrounding block, which may cause unexpected behavior. Consider aligning it properly within the 'if isinstance(response, dict):' block.
                      if "error" in response and request is None:
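The nesting Copilot is pointing at can be illustrated with a small standalone helper (illustrative only, not web3.py's actual code; `extract_error` is a hypothetical name): the `"error"` check must sit inside the `isinstance` guard so that `in`/`.get` is only ever used on dict responses.

```python
def extract_error(response, request):
    # Only dict responses can carry an "error" key; a non-dict
    # response (e.g. a list returned by a batch call) must not be
    # treated as a mapping.
    if isinstance(response, dict):
        if "error" in response and request is None:
            return response["error"]
    return None
```

If the inner `if` were dedented out of the `isinstance` block, a list response would hit the membership check and behave unexpectedly.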

@pacrob pacrob self-assigned this Apr 14, 2025
@pacrob
Contributor Author

pacrob commented Apr 15, 2025

@fselmo I've added the try/finally as suggested. Good as is, or is there a reason to wrap just the sending of the request itself?

I applied it across all instances of make_batch_request for consistency. Is there a reason to only apply it to the persistent provider?

Collaborator

@fselmo fselmo left a comment


Looks good. I wanted to suggest the decorator as I think that could DRY things up and reduce common surface area to one place.

Is there a reason to only apply it to the persistent provider?

This should be applied to all providers that support batching (as you did) 👍🏼

@pacrob pacrob force-pushed the bugfix-3642 branch 3 times, most recently from 8fc338b to 8590ae8 Compare April 22, 2025 20:44
@pacrob pacrob requested a review from fselmo April 22, 2025 20:55
Collaborator

@fselmo fselmo left a comment


Code looks good, just some refactor ideas to think about. It would also be good to have a simple test that actually checks this state. Given that we also do the dict check (the first commit in this PR), I don't think any tests actually fail without this implementation and then pass after it is implemented (correct me if I'm wrong here).

It doesn't have to be a super complex test but something that tracks this change in case we ever meddle with it and accidentally turn it off would be ideal.

@pacrob pacrob force-pushed the bugfix-3642 branch 2 times, most recently from 74f3730 to bcd1f8b Compare April 28, 2025 22:50
@pacrob
Copy link
Contributor Author

pacrob commented Apr 28, 2025

something that tracks this change

Tests added for all providers (except legacy websocket), and I verified a failure when the decorator is commented out.

I decided not to test legacy websocket because even when I got its test to pass, I couldn't get rid of the unclosed event loop warnings, plus it's due to be removed sooner rather than later. Let me know if you'd like me to take another go at it.



@pytest.mark.parametrize("initial_is_batching", [True, False])
def test_http_provider_is_batching_is_reset_to_false_after_batching_call(
Collaborator

@fselmo fselmo Apr 29, 2025


What do you think of a test that checks both on and off behavior? Something like this:

from unittest.mock import Mock, patch

from web3 import HTTPProvider


@patch(
    "web3._utils.http_session_manager.HTTPSessionManager.make_post_request",
    new_callable=Mock,
)
def test_http_provider_is_batching_when_make_batch_request(mock_post):
    def assert_is_batching_and_return_response(*_args, **_kwargs) -> bytes:
        # the flag should be on while the post request is in flight
        assert provider._is_batching
        return b'{"jsonrpc":"2.0","id":1,"result":["0x1"]}'

    provider = HTTPProvider()
    assert not provider._is_batching

    mock_post.side_effect = assert_is_batching_and_return_response

    provider.make_batch_request([("eth_blockNumber", [])])
    assert not provider._is_batching
That way we check that it turns on while we are making the post request and that it turns back off after the call. We could add a similar test across all the providers.

Contributor Author


OK, all updated to the new pattern. Thanks!

@fselmo
Collaborator

fselmo commented Apr 29, 2025

lgtm once tests pass 👌🏼 (all good on the legacy ws, I agree on the deprecation reasoning)

@pacrob pacrob merged commit ee602a3 into ethereum:main Apr 29, 2025
85 checks passed
@pacrob pacrob deleted the bugfix-3642 branch April 29, 2025 17:18
Development

Successfully merging this pull request may close these issues.

7.9.0: Can't make batch requests without provider._is_batching
3 participants