Updated timeouts and moved test Fleet-51 to special #422
Conversation
Running CI status:
Second Try with All tests (using all tags)
// Verify if gitrepoJobsCleanup is enabled by default.
cy.accesMenuSelection('local', 'Workloads', 'CronJobs');
// Adding wait for CronJobs page to load correctly.
cy.wait(5000);
Is 5 seconds really needed?
Doesn't it display with only 2?
Same for the examples below.
I tried lowering the wait as far as I could, but 5 seconds is the minimum I found for the page to load correctly. There is still room for better logic, though, since cy.wait() is not a final solution.
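One direction hinted at above would be to replace the fixed cy.wait() with Cypress's built-in retry-ability, so the test waits only as long as the page actually needs. This is a minimal sketch, not the PR's code: only cy.accesMenuSelection is taken from the quoted diff, and the 'h1' / 'CronJobs' selector is an assumption about the page markup.

// Sketch: wait for the CronJobs page to render instead of pausing a fixed 5 s.
cy.accesMenuSelection('local', 'Workloads', 'CronJobs');
// cy.contains retries until the element appears or the timeout elapses;
// the selector below is hypothetical and would need to match the real page.
cy.contains('h1', 'CronJobs', { timeout: 30000 }).should('be.visible');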
I get that test 51 is being moved, but is it fixed?
Yes, the 2 retries of the 2.12-head CI run prove it. I also described the whole test execution and where it is failing.
The test passes when only the special_tests set is run. In the past we have also seen this test pass when p1_2 was run alone, so that by itself is not proof. Full runs are more accurate because they carry more processes that have not settled while the full run executes, but that still does not explain what was wrong. Feel free to merge and we will see, but I am not sure I understand what caused it. It is fine if we don't, and we simply try to keep other tests from being affected by encapsulating it somewhere else. But my question remains: is it fixed? And if so, what caused it?
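As a side note on "encapsulating it somewhere else", a hedged sketch of what moving Fleet-51 into its own set could look like, assuming a @cypress/grep-style tag setup; the tag name '@special_tests', the describe/it titles, and the grep configuration are illustrative, not the repository's actual setup.

// Hypothetical: tag Fleet-51 so it only runs as part of the special_tests set.
describe('Fleet-51: gitrepoJobsCleanup enabled by default', { tags: '@special_tests' }, () => {
  it('checks the CronJobs created for gitrepo job cleanup', () => {
    cy.accesMenuSelection('local', 'Workloads', 'CronJobs');
    // ...rest of the test body...
  });
});

// The special set could then be run on its own, e.g.:
//   CYPRESS_grepTags='@special_tests' npx cypress run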
Please let me know if you require any other details. Happy to explain 😄
Feel free to merge it.
These are the possible causes I can think of; they can be mitigated by increasing timeouts. Sometimes the cluster may be up and running within the timeout as well.
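A minimal sketch of the "increasing timeouts" option, assuming a standard cypress.config.ts; the values are illustrative, not the PR's actual settings.

// cypress.config.ts
import { defineConfig } from 'cypress';

export default defineConfig({
  // Raise the per-command timeout (Cypress default is 4000 ms) so slow
  // cluster/page startup does not immediately fail assertions.
  defaultCommandTimeout: 30000,
  // Allow more time for full page loads on a busy CI runner.
  pageLoadTimeout: 120000,
});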




In recent days, we have seen test Fleet-51 failing multiple times on CI.
https://github.com/rancher/fleet-e2e/actions/runs/19657675609/job/56297791570#step:10:565
Screenshot showing actual test failures
Problem:
What should happen:
What is happening:
Currently Fleet-51 test:
Currently Fleet-156 is also failing due to
Solution: