
run_on_all_clients_push_missing_configs based on Pr-11864 #12007


Draft: wants to merge 20 commits into base: master-converged-mode

Conversation

@DanielOsypenko (Contributor) commented Apr 27, 2025

Based on a solution designed by @fbalak.

Adding the marker "@run_on_all_clients_push_missing_configs" makes the test:

  1. Run on a single cluster without spending additional time, performing and reporting the same way as before.
  2. Run on multiple storage client clusters, gathering their configurations and pushing them to MultiClusterConfig, and then running only on the clients (see the usage sketch after this list).
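
A minimal usage sketch, assuming the marker is applied via pytest.mark and that the per-client argument is named cluster_index; both names are assumptions for illustration, not necessarily the final API of this PR (the marker may instead be exposed as an importable decorator from the framework's marks module):

    import pytest

    @pytest.mark.run_on_all_clients_push_missing_configs
    @pytest.mark.tier1
    def test_pod_with_same_priority(cluster_index):
        # On a single (internal-mode) cluster this runs once, exactly as before;
        # on a provider with several client clusters the framework's parametrization
        # for this marker is assumed to expand it once per client cluster,
        # identified here by cluster_index.
        ...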

Execution on one cluster (a shortened version of the test, but it shows the test runs on a single cluster the same way it previously ran in internal mode):
single_cluster_ruwith_mark.txt

Execution on a provider with multiple clusters:
run-ci: --cluster-path ${CLUSTER_PATH} --ocp-version 4.18 --ocs-version 4.18 --cluster-name ${CLUSTER_NAME} tests/functional/pv/pv_services/test_rwop_pvc.py::TestRwopPvc::test_pod_with_same_priority -m tier1 -k "" --dev-mode --ocsci-conf /Users/danielosypenko/Work/automation_4/ocs-ci/conf/deployment/fusion_hci_pc/provider_bm_upi_1az_rhcos_nvme_3m_3w.yaml

console_log_info.txt

fbalak and others added 20 commits April 25, 2025 18:13
@DanielOsypenko added the provider-client (Provider-client solution) label Apr 27, 2025
The pull-request-size bot added the size/L (PR that changes 100-499 lines) label Apr 27, 2025

openshift-ci bot commented Apr 27, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: DanielOsypenko

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


@DanielOsypenko changed the title from "Pr 11864" to "run_on_all_clients_push_missing_configs based on Pr-11864" Apr 27, 2025
Comment on lines +1842 to +1844
kubeconf_path = [
    path for path in kubeconf_paths if cluster_name in path
][0]
Contributor commented:

If I understand it correctly, this expects that the cluster name is used in the path to the kubeconfig, which we can probably assume, but there might be a problem when one cluster name is a substring of another cluster name, right? For example, cluster-a and cluster-aa.

DanielOsypenko (author) replied:

Good point.
We can try to read kubeconfig.clusters.name.
The kubeconfig has a list of clusters; one of them has the matching name, and the others are dedicated to routes.
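
A minimal sketch of that exact-match idea, extending the snippet above and assuming the downloaded kubeconfig files are plain YAML; kubeconf_matches_cluster is an illustrative helper name, not code from this PR:

    import yaml

    def kubeconf_matches_cluster(kubeconf_file, cluster_name):
        # True only if one of the clusters listed in the kubeconfig is named
        # exactly cluster_name, which avoids the cluster-a / cluster-aa clash.
        with open(kubeconf_file) as f:
            kubeconfig = yaml.safe_load(f)
        return any(
            cluster.get("name") == cluster_name
            for cluster in kubeconfig.get("clusters", [])
        )

    kubeconf_path = next(
        path for path in kubeconf_paths
        if kubeconf_matches_cluster(path, cluster_name)
    )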

hosted_clients_obj.download_hosted_clusters_kubeconfig_files(
        {cluster_name: cluster_path}
    )
)
if not len(kubeconf_paths):
    from ocs_ci.framework.exceptions import ClusterKubeconfigNotFoundError
Contributor commented:

Could this import be done at the beginning of the file?

DanielOsypenko (author) replied:

Fixed in #12041.

if push_missing_configs:
    from ocs_ci.deployment.hosted_cluster import hypershift_cluster_factory

    hypershift_cluster_factory(
Contributor commented:

Probably unrelated to this PR, but the concept of ODF client clusters is not tightly coupled to HCP hosted clusters; an ODF client can also be installed on an OCP cluster that is not hosted by HCP.

DanielOsypenko (author) replied:

This concept was started a few years back; I did not think about agent or external clusters having a role as clients. The purpose back then, when it was new, was to make Cluster object entities that aggregate the capabilities of a HyperShift OCP cluster, deployment helper functions, and the ODF client.

DanielOsypenko (author) added:

We have moved from this draft to #12041; it is updated and verified.

hypershift_cluster_factory(
    duty="use_existing_hosted_clusters_push_missing_configs",
)
client_indexes = [
Contributor commented:

Is this function specific to client clusters, or to multi-cluster setups in general?

DanielOsypenko (author) replied on May 5, 2025:

setup_multicluster_marker is a helper function specific to multicluster scenarios; it does not take significant time to execute, but we rely on it. It makes it possible, during execution, to reduce the number of client clusters in the system from many to zero, or to increase it from zero to many, every time we use the run_on_all_clients marker.
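
A rough, generic sketch of how such a marker can expand one test across every known client cluster index; this is not the PR's actual implementation, and get_client_indexes is a hypothetical stand-in for the real lookup against MultiClusterConfig:

    def pytest_generate_tests(metafunc):
        # Only expand tests that carry the marker.
        if metafunc.definition.get_closest_marker("run_on_all_clients"):
            # Hypothetical lookup of the client cluster indexes currently configured;
            # with no client clusters, fall back to the single (internal-mode) cluster.
            indexes = get_client_indexes() or [0]
            metafunc.parametrize("cluster_index", indexes)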

DanielOsypenko (author) added:

We have moved from this draft to #12041; it is updated and verified.

_switch_context_helper(request)


def _switch_context_helper(request):
Contributor commented:

Why is it defined as a private function? Can't it be used outside this file?

DanielOsypenko (author) replied:

The code was extracted from pytest_fixture_setup so it could be reused in finalizers.
The function was meant to be used only within this module, in fixture functions and helpers where the request argument is visible; I can't see right now where it could be reused outside of the module.
But this can change in the future; we can make it public when a use case appears.
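
An illustrative sketch of that pattern, not the code from this PR: the private helper is called at fixture setup and registered again as a finalizer so the cluster context is restored after the test. The config.switch_ctx call and the cluster_index attribute are assumptions used only for illustration:

    from ocs_ci.framework import config

    def _switch_context_helper(request):
        # Assumed behavior: pick the cluster index attached to the running test
        # (default 0) and switch the framework context to that cluster.
        index = getattr(request.node, "cluster_index", 0)
        config.switch_ctx(index)

    def pytest_fixture_setup(fixturedef, request):
        # Switch context before the fixture runs, and switch again during teardown.
        _switch_context_helper(request)
        request.addfinalizer(lambda: _switch_context_helper(request))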

DanielOsypenko (author) added:

We have moved from this draft to #12041; it is updated and verified.

Labels
do-not-merge/work-in-progress, provider-client (Provider-client solution), size/L (PR that changes 100-499 lines)