-
Notifications
You must be signed in to change notification settings - Fork 152
chore: fix flaky liquidity_source_notification test (#3748) #3763
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
chore: fix flaky liquidity_source_notification test (#3748) #3763
Conversation
|
All contributors have signed the CLA ✍️ ✅ |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thank you for your contribution!
Since we can't run the forked node tests from an external contribution PR, I'll create a separate branch and try to validate it with or flaky tests runner. Will approve and merge this PR after that.
| tracing::info!( | ||
| "liquorice notification batch so far: {}", | ||
| state.notification_requests.len() | ||
| ); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This log seems redundant.
|
Posting here for visibility https://github.com/cowprotocol/services/actions/runs/18383589646/job/52376248034?pr=3764 |
|
@codersharma2001 as you can see, this change didn't help ☝️ |
Actually on my side the test-managed Anvil still binds to its default 8545 and forks from the upstream I provide (I keep a warmed Anvil on 8546), so with the extra wait the test passes consistently. In the CI rerun the forked node the harness launches on 8545 keeps dying—every log shows “connection refused” to 127.0.0.1:8545—so the trade never settles and the wait still times out. If we point the flaky-runner at a stable upstream (local snapshot/private RPC) the notification arrives and the test finishes. |
|
@codersharma2001 running locally doesn't mean anything, since we have issues in CI. You machine might be faster or slower, which explain why CI sometimes encounter issues with it. As I mentioned above, i tried to run you changes and the test fails again. Could you validate the branch I was running from and the failure reason? I might have missed something and it fails in different place? |
Quick update on what I changed to stabilise the forked Liquorice test (and keep it green in CI as well as locally): Test now checks the upstream RPC before it even launches a fork. Longer wait when we’re under CI. Anvil startup is hardened. Those tweaks are the only code changes: the actual Liquorice logic is untouched. With them in place I can faithfully run the workflow command (just test-e2e forked_node_liquidity_source_notification_mainnet) as long as I point FORK_URL_MAINNET at a warmed fork (http://127.0.0.1:8546 in my setup). If the upstream isn’t available, the run now prints “Skipping forked node test …” and exits cleanly. |
For quick reference: Branch: fix/flaky-liquidity-source-test Steps to run the forked Liquorice e2e (CI parity): Start a fork source Anvil on 127.0.0.1:8546 (warm snapshot or direct fork, e.g.
Run |
|
@codersharma2001 could you revert your PR to the original state(snapshot)? |
This reverts commit 4d9de1e.
squadgazzz
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@codersharma2001, please accept the CLA, and we are ready for the merge.
|
I have read the CLA Document and I hereby sign the CLA |
I have accepted the CLA |
Fix flaky test in forked_node_liquidity_source_notification_mainnet
This PR addresses #3748 by making the Liquorice notification test wait until the mock server has actually recorded a request before asserting, which removes the race that caused the intermittent failure.
Local validation steps
Installed Anvil (Foundry) and spun up a fork at mainnet block 23326100.
Warmed the fork so the snapshot could be re-used offline (USDC/USDT contract reads, block fetch, etc.).
Re-ran the test against the local forked RPC (FORK_URL_MAINNET=http://127.0.0.1:8545), confirming it now passes consistently.
Removed the temporary notification-count logging after verifying the race was gone.
Closes #3748