On this 4.21 run: https://url.corp.redhat.com/9200662
We saw that a few tests perform a Ceph health check on only one of the managed clusters; the context is not switched to verify that Ceph is healthy on both managed clusters.
tests/functional/disaster-recovery/regional-dr/test_cg_configuration.py::TestCGConfiguration::test_drpolicy_grouping
2026-04-09 15:09:59 05:39:59 - MainThread - tests.conftest - INFO - C[ashakya-b] - Checking for Ceph Health OK
This check was done for cluster ashakya-b, but no corresponding check was found for the other managed cluster that is part of the peer DR relationship.
Similarly, for tests/functional/disaster-recovery/regional-dr/test_failover.py::TestFailover::test_failover[primary_up-rbd-cli], the check was done here:
2026-04-09 15:10:39 05:40:39 - MainThread - tests.conftest - INFO - C[ashakya-c] - Checking for Ceph Health OK
but again, the check was not done for the other cluster.
This may be a consistent issue across other tests as well, which needs fixing.
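A possible shape for the fix is sketched below: iterate over both managed clusters in the peer DR relationship, switch the cluster context before each check, and assert Ceph health on each one. The helper names (`switch_context`, `ceph_health_ok`) are hypothetical stand-ins for the actual ocs-ci context-switching and health-check utilities, not the real API.

```python
# Sketch: verify Ceph health on EVERY managed cluster in the peer DR
# relationship, not just the one the current context happens to point at.
# switch_context / ceph_health_ok are hypothetical placeholders for the
# real ocs-ci multicluster helpers.

def check_ceph_health_on_all_clusters(cluster_names, switch_context, ceph_health_ok):
    """Switch context to each managed cluster and assert Ceph is healthy.

    cluster_names  -- e.g. ["ashakya-b", "ashakya-c"]
    switch_context -- callable that switches the active cluster context
    ceph_health_ok -- callable returning True when Ceph health is OK
    """
    unhealthy = []
    for name in cluster_names:
        # Switch context first so the health check runs against this cluster,
        # not whichever cluster was active before.
        switch_context(name)
        if not ceph_health_ok(name):
            unhealthy.append(name)
    assert not unhealthy, f"Ceph health not OK on: {unhealthy}"
```

With this pattern, a run like the one above would log (and enforce) a "Checking for Ceph Health OK" step for both ashakya-b and ashakya-c rather than only one of them.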