run_on_all_clients_push_missing_configs based on Pr-11864 #12007
base: master-converged-mode
Conversation
Signed-off-by: fbalak <[email protected]>
Signed-off-by: Daniel Osypenko <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: DanielOsypenko. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
kubeconf_path = [
    path for path in kubeconf_paths if cluster_name in path
][0]
If I understand it correctly, this expects that the cluster name is used in the path to the kubeconfig, which we can probably assume, but there might be a problem when one cluster name is a substring of another cluster name, right? For example, cluster-a and cluster-aa.
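One hedged way to avoid this substring collision (a sketch with hypothetical paths and a hypothetical `find_kubeconfig` helper, not the ocs-ci code) is to match the cluster name against whole path components rather than a raw substring:

```python
from pathlib import Path

def find_kubeconfig(kubeconf_paths, cluster_name):
    # Hypothetical helper: require the cluster name to be a whole path
    # component, so "cluster-a" cannot match a "cluster-aa" directory.
    matches = [p for p in kubeconf_paths if cluster_name in Path(p).parts]
    if not matches:
        raise LookupError(f"no kubeconfig found for {cluster_name}")
    return matches[0]

paths = [
    "/clusters/cluster-aa/auth/kubeconfig",
    "/clusters/cluster-a/auth/kubeconfig",
]
print(find_kubeconfig(paths, "cluster-a"))  # /clusters/cluster-a/auth/kubeconfig
```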
Good point.
We can try to read kubeconfig.clusters.name instead.
The kubeconfig has a list of clusters; one of them has the matching name, and the others are dedicated to routes.
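That suggestion could look roughly like this (a sketch assuming the kubeconfig has already been loaded into a dict, e.g. with yaml.safe_load; the helper name is hypothetical):

```python
def kubeconfig_matches_cluster(kubeconfig, cluster_name):
    # A kubeconfig belongs to a cluster when one of its "clusters"
    # entries carries exactly that name; in this setup the remaining
    # entries are dedicated to routes.
    names = [c.get("name") for c in kubeconfig.get("clusters", [])]
    return cluster_name in names

kc = {"clusters": [{"name": "cluster-a"}, {"name": "cluster-a-routes"}]}
print(kubeconfig_matches_cluster(kc, "cluster-a"))   # True
print(kubeconfig_matches_cluster(kc, "cluster-aa"))  # False
```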
hosted_clients_obj.download_hosted_clusters_kubeconfig_files(
        {cluster_name: cluster_path}
    )
)
if not len(kubeconf_paths):
    from ocs_ci.framework.exceptions import ClusterKubeconfigNotFoundError
Could this import be done at the beginning of the file?
fixed in #12041
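The refactor discussed above could look like this (a minimal self-contained sketch; the function name is hypothetical, and the exception class is stubbed locally so the example runs standalone — in ocs-ci it would be imported once at module scope from ocs_ci.framework.exceptions):

```python
# Stub standing in for
# `from ocs_ci.framework.exceptions import ClusterKubeconfigNotFoundError`,
# which would sit at the top of the module rather than inside the function.
class ClusterKubeconfigNotFoundError(Exception):
    pass

def ensure_kubeconfigs(kubeconf_paths):
    # Hypothetical guard mirroring the snippet above: fail loudly when no
    # hosted-cluster kubeconfig paths were collected.
    if not kubeconf_paths:
        raise ClusterKubeconfigNotFoundError("no hosted-cluster kubeconfigs found")
    return kubeconf_paths
```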
if push_missing_configs:
    from ocs_ci.deployment.hosted_cluster import hypershift_cluster_factory

    hypershift_cluster_factory(
Probably unrelated to this PR, but the concept of ODF client clusters is not tightly coupled to HCP hosted clusters. An ODF client can be installed on a non-HCP-hosted OCP cluster.
This concept was started a few years back; I did not think about agent and external clusters having a role as a client. The purpose then, when it was new, was to make object entities of Cluster that aggregate the capabilities of a Hypershift OCP cluster, deployment helper functions, and an ODF client.
We have moved from this draft to #12041; it is updated and verified.
hypershift_cluster_factory(
    duty="use_existing_hosted_clusters_push_missing_configs",
)
client_indexes = [
Is this function specific to client clusters, or to multi-cluster setups in general?
setup_multicluster_marker is a helper function specific to multicluster scenarios; it does not take significant time to execute, but we rely on it. It makes it possible, during execution, to reduce the number of client clusters in the system from many to 0, or to increase it from 0 to many, every time we use the run_on_all_clients marker.
We have moved from this draft to #12041; it is updated and verified.
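The fan-out behaviour described above can be sketched like this (hypothetical helper name, not the ocs-ci implementation): each test carrying the marker is expanded once per client cluster present at execution time, so the run shrinks to nothing with 0 clients and grows again with many.

```python
def client_indexes_for_marker(num_client_clusters):
    # Hypothetical sketch of what run_on_all_clients achieves: one
    # parametrized test instance per client cluster currently in the
    # system; an empty list means the test is not run at all.
    return list(range(1, num_client_clusters + 1))

print(client_indexes_for_marker(0))  # [] -> no client runs
print(client_indexes_for_marker(3))  # [1, 2, 3] -> one run per client
```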
_switch_context_helper(request)


def _switch_context_helper(request):
why is it defined as a private function? Can't it be used outside this file?
The code was extracted from pytest_fixture_setup to be reused in finalizers.
This function was meant to be used only within this module, in fixture functions and helpers where the request arg is visible. I can't see now where it could be reused outside the module.
But this can change in the future; we can make it public when a use case appears.
We have moved from this draft to #12041; it is updated and verified.
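The setup/finalizer reuse described above can be sketched like this (all names hypothetical; the real helper switches the multicluster context via the pytest request object):

```python
class _Config:
    # Stand-in for the multicluster config: index 0 is the provider,
    # positive indexes are client clusters.
    def __init__(self):
        self.cluster_ctx = 0

    def switch_ctx(self, index):
        self.cluster_ctx = index

config = _Config()

def _switch_context_helper(index):
    # Shared by the fixture body and its finalizer, so setup and
    # teardown switch the context the same way.
    config.switch_ctx(index)

def fixture_like(index, finalizers):
    _switch_context_helper(index)                         # setup: client cluster
    finalizers.append(lambda: _switch_context_helper(0))  # teardown: provider
```

A fixture would call fixture_like during setup and run the collected finalizers during teardown, returning the context to the provider cluster.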
Based on a solution designed by @fbalak.
Adding the marker "@run_on_all_clients_push_missing_configs" enables:
execution on one cluster (a shortened version of the test, but we can see it runs on a single cluster, as it previously did in internal mode):
single_cluster_ruwith_mark.txt
execution on Provider with multiple clusters:
run-ci: --cluster-path ${CLUSTER_PATH} --ocp-version 4.18 --ocs-version 4.18 --cluster-name ${CLUSTER_NAME} tests/functional/pv/pv_services/test_rwop_pvc.py::TestRwopPvc::test_pod_with_same_priority -m tier1 -k "" --dev-mode --ocsci-conf /Users/danielosypenko/Work/automation_4/ocs-ci/conf/deployment/fusion_hci_pc/provider_bm_upi_1az_rhcos_nvme_3m_3w.yaml
console_log_info.txt