
🌱 E2E: Install independent Metal3 IPAM after upgrade #2382

Open
lentzi90 wants to merge 1 commit into main from lentzi90/clusterctl-upgrade-ipam
Conversation

lentzi90
Member

@lentzi90 lentzi90 commented Mar 7, 2025

What this PR does / why we need it:

The Metal3 IPAM was previously bundled with CAPM3. Now we deploy it
separately as a CAPI IPAM provider. In clusterctl upgrade tests going
from a version where IPAM is bundled, to a version where it is not, we
must install it after the upgrade.
This commit adds a post upgrade hook to the clusterctl upgrade tests
that installs the Metal3 IPAM.
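For illustration, here is a minimal sketch of such a post-upgrade hook (not the exact code in this PR), assuming the CAPI test framework's clusterctl.Init; the function name, config path and log folder are placeholders:

 // Sketch only: install the Metal3 IPAM provider on the upgraded management
 // cluster, now that it is no longer bundled with CAPM3.
 package e2e

 import (
     "context"
     "path/filepath"

     "sigs.k8s.io/cluster-api/test/framework"
     "sigs.k8s.io/cluster-api/test/framework/clusterctl"
 )

 func postUpgradeInstallIPAM(ctx context.Context, clusterProxy framework.ClusterProxy, clusterctlConfigPath, artifactFolder string) {
     clusterctl.Init(ctx, clusterctl.InitInput{
         ClusterctlConfigPath: clusterctlConfigPath,
         KubeconfigPath:       clusterProxy.GetKubeconfigPath(),
         LogFolder:            filepath.Join(artifactFolder, "clusterctl-init-ipam"),
         IPAMProviders:        []string{"metal3"},
     })
 }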

NOTE: This also includes #2380. We can merge that first, or we can close it. I just wanted to make sure I don't hit that issue when testing the upgrade.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

@metal3-io-bot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from lentzi90. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@metal3-io-bot metal3-io-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Mar 7, 2025
@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from f25f534 to ef7f5cc Compare March 7, 2025 13:30
@lentzi90
Member Author

lentzi90 commented Mar 7, 2025

/test metal3-e2e-clusterctl-upgrade-test-main

@peppi-lotta
Member

peppi-lotta commented Mar 10, 2025

You are using metal3 as the provider name, but currently it is actually metal3ipam, which is probably why the test is not passing.

I created a PR to change it to Metal3 because that is a better and more correct name.
PR is here: metal3-io/metal3-dev-env#1509

@adilGhaffarDev
Member

/test metal3-e2e-clusterctl-upgrade-test-main

@adilGhaffarDev
Member

Let's change the provider name here too:

Metal3ipamProviderName = "metal3ipam"
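A hedged sketch of what that rename could look like, assuming the constant keeps its current location and follows the dev-env change to metal3:

 // Hypothetical follow-up to metal3-io/metal3-dev-env#1509: align the e2e
 // constant with the provider name used by clusterctl.
 Metal3ipamProviderName = "metal3"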

@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from ef7f5cc to 03dd3e7 Compare March 10, 2025 10:59
@lentzi90
Member Author

/test metal3-e2e-clusterctl-upgrade-test-main

@tuminoid
Member

/test metal3-e2e-clusterctl-upgrade-test-main

@tuminoid
Member

/test metal3-e2e-clusterctl-upgrade-test-main

@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from 03dd3e7 to 3612a78 Compare March 12, 2025 13:52
@metal3-io-bot metal3-io-bot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Mar 12, 2025
@lentzi90
Member Author

/test metal3-e2e-clusterctl-upgrade-test-main

@tuminoid
Member

It is still complaining about target provider components for IPAM, even with the new name. I don't see open PRs anymore in any repo related to this, besides this one. What are we missing?

@lentzi90
Member Author

There is something wrong with the clusterctl-config. It doesn't have the Metal3 IPAM provider listed. 🤔

@tuminoid
Member

There is something wrong with the clusterctl-config. It doesn't have the Metal3 IPAM provider listed. 🤔

We added it here and adjusted that here ... hmm.

@adilGhaffarDev
Member

There is something wrong with the clusterctl-config. It doesn't have the Metal3 IPAM provider listed. 🤔

We can move the following code to ci-e2e.sh

https://github.com/metal3-io/metal3-dev-env/blob/d831cba0825498a685d3b116183ee79dad00e358/03_launch_mgmt_cluster.sh#L499C1-L505C4

and try again. But I am not sure why dev-env is not setting it.
It should be fine to move it there for the main branch.

@lentzi90
Member Author

Dev-env is setting it, but possibly not in the place that is used by this test. There could also be something with the overrides structure that needs to be adjusted.

@lentzi90
Member Author

This is from earlier in the test. Does it look correct? I'm getting very confused with all the overrides and provider=... 😵

 + clusterctl init --core cluster-api:v1.9.5 --bootstrap kubeadm:v1.9.5 --control-plane kubeadm:v1.9.5 --infrastructure=metal3:v1.10.99 -v5 --ipam=metal3:v1.10.99
 Using configuration file="/home/metal3ci/.config/cluster-api/clusterctl.yaml"
 Installing the clusterctl inventory CRD
 Creating CustomResourceDefinition="providers.clusterctl.cluster.x-k8s.io"
 Fetching providers
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/cluster-api/v1.9.5/core-components.yaml" provider="cluster-api" version="v1.9.5"
 Fetching file="core-components.yaml" provider="cluster-api" type="CoreProvider" version="v1.9.5"
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/bootstrap-kubeadm/v1.9.5/bootstrap-components.yaml" provider="bootstrap-kubeadm" version="v1.9.5"
 Fetching file="bootstrap-components.yaml" provider="kubeadm" type="BootstrapProvider" version="v1.9.5"
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/control-plane-kubeadm/v1.9.5/control-plane-components.yaml" provider="control-plane-kubeadm" version="v1.9.5"
 Fetching file="control-plane-components.yaml" provider="kubeadm" type="ControlPlaneProvider" version="v1.9.5"
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/infrastructure-metal3/v1.10.99/infrastructure-components.yaml" provider="infrastructure-metal3" version="v1.10.99"
 Using override="infrastructure-components.yaml" provider="infrastructure-metal3" version="v1.10.99"
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/ipam-metal3/v1.10.99/ipam-components.yaml" provider="ipam-metal3" version="v1.10.99"
 Using override="ipam-components.yaml" provider="ipam-metal3" version="v1.10.99"
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/cluster-api/v1.9.5/metadata.yaml" provider="cluster-api" version="v1.9.5"
 Fetching file="metadata.yaml" provider="cluster-api" type="CoreProvider" version="v1.9.5"
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/bootstrap-kubeadm/v1.9.5/metadata.yaml" provider="bootstrap-kubeadm" version="v1.9.5"
 Fetching file="metadata.yaml" provider="kubeadm" type="BootstrapProvider" version="v1.9.5"
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/control-plane-kubeadm/v1.9.5/metadata.yaml" provider="control-plane-kubeadm" version="v1.9.5"
 Fetching file="metadata.yaml" provider="kubeadm" type="ControlPlaneProvider" version="v1.9.5"
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/infrastructure-metal3/v1.10.99/metadata.yaml" provider="infrastructure-metal3" version="v1.10.99"
 Using override="metadata.yaml" provider="infrastructure-metal3" version="v1.10.99"
 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/ipam-metal3/v1.10.99/metadata.yaml" provider="ipam-metal3" version="v1.10.99"
 Using override="metadata.yaml" provider="ipam-metal3" version="v1.10.99"

@adilGhaffarDev
Member

This is from earlier in the test. Does it look correct? I'm getting very confused with all the overrides and provider=... 😵


This seems fine to me; --ipam=metal3 is working fine here. This is coming from dev-env, which is launching the management cluster. What seems wrong here?

@lentzi90
Member Author

It is probably correct. I'm just getting confused by it all.
I found another thing to check now... do we need to add it in the e2e config? We have config for all the other things there (core, control-plane, bootstrap and infra).

@adilGhaffarDev
Member

I found another thing to check now... do we need to add it in the e2e config? We have config for all the other things there (core, control-plane, bootstrap and infra).

Yes, we need to add it there; I forgot about that. For e2e it gets the providers from the e2e config.
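As a rough sketch (assuming the CAPI test framework's E2EConfig and its IPAMProviders helper), once a metal3 entry of type IPAMProvider is in the e2e config the test can resolve it the same way as the other providers:

 package e2e

 import "sigs.k8s.io/cluster-api/test/framework/clusterctl"

 // Sketch only: list the IPAM providers declared in the e2e config,
 // e.g. []string{"metal3"} once the IPAMProvider entry has been added.
 func ipamProvidersFromConfig(e2eConfig *clusterctl.E2EConfig) []string {
     return e2eConfig.IPAMProviders()
 }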

@lentzi90 lentzi90 closed this Mar 13, 2025
@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from 3612a78 to 5dd8f6e Compare March 13, 2025 08:55
@metal3-io-bot metal3-io-bot removed the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Mar 13, 2025
@metal3-io-bot metal3-io-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Mar 13, 2025
@lentzi90 lentzi90 reopened this Mar 13, 2025
@lentzi90
Member Author

/test metal3-e2e-clusterctl-upgrade-test-main

@lentzi90
Member Author

[2025-03-13T09:17:51.985Z]   [FAILED] Failed to resolve release markers in e2e test config file

[2025-03-13T09:17:51.985Z]   Expected success, but got an error:

[2025-03-13T09:17:51.985Z]       <*errors.withStack | 0xc000b80cc0>: 

[2025-03-13T09:17:51.985Z]       failed resolving release url "{go://github.com/metal3-io/[email protected]}": failed to find releases tagged with a valid semantic version number

[2025-03-13T09:17:51.985Z]       {

[2025-03-13T09:17:51.985Z]           error: <*errors.withMessage | 0xc000aafee0>{

[2025-03-13T09:17:51.985Z]               cause: <*errors.fundamental | 0xc000b80c90>{

[2025-03-13T09:17:51.985Z]                   msg: "failed to find releases tagged with a valid semantic version number",

[2025-03-13T09:17:51.985Z]                   stack: [0x3720925, 0x371ff1e, 0x371fc4b, 0x371f519, 0x39b3a75, 0x397d29c, 0x1735346, 0x1734459, 0x1b8568e, 0x1b959ee, 0x1b9921b, 0x16b1101],

[2025-03-13T09:17:51.985Z]               },

[2025-03-13T09:17:51.985Z]               msg: "failed resolving release url \"{go://github.com/metal3-io/[email protected]}\"",

[2025-03-13T09:17:51.985Z]           },

[2025-03-13T09:17:51.985Z]           stack: [0x371fdcb, 0x371f519, 0x39b3a75, 0x397d29c, 0x1735346, 0x1734459, 0x1b8568e, 0x1b959ee, 0x1b9921b, 0x16b1101],

[2025-03-13T09:17:51.985Z]       }

I don't get it. We have proper tags there. What is the issue? 🤔

@tuminoid
Member

I don't get it. We have proper tags there. What is the issue? 🤔

Is it looking for exactly v1.9, which we don't have? We have v1.9.0.

@lentzi90
Member Author

lentzi90 commented Mar 13, 2025

I don't think so. We use this same syntax everywhere to get the latest patch version for that minor version.
Edit: With "everywhere" I mean in the e2e config

- name: metal3
  type: IPAMProvider
  versions:
  - name: "{go://github.com/metal3-io/[email protected]}"
Member

Here, IPAM is slightly different from other providers because we want to test the main branch rather than version 1.9.

However, I'm wondering how that would work. We can use v1.10.99 instead of {go://github.com/metal3-io/[email protected]}, but how will we obtain the YAML resources for the main branch? Should we push them somewhere during our image-building process? As far as I know, we only push images to quay.

Member

Oh we have them in the overrides folder:

 Potential override file searchFile="/home/metal3ci/.config/cluster-api/overrides/ipam-metal3/v1.10.99/metadata.yaml" provider="ipam-metal3" version="v1.10.99"
 Using override="metadata.yaml" provider="ipam-metal3" version="v1.10.99"

We used them in dev-env, we can use these ones.

ClusterctlConfigPath: clusterctlConfigPath,
KubeconfigPath: clusterProxy.GetKubeconfigPath(),
LogFolder: ipamDeployLogFolder,
IPAMProviders: []string{Metal3ipamProviderName},
Member

Suggested change:
- IPAMProviders: []string{Metal3ipamProviderName},
+ IPAMProviders: []string{fmt.Sprintf(Metal3ipamProviderName, ipamRelease)},

We need to give the version here.
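As a hedged illustration of giving the version (the helper name, the "name:version" form and ipamRelease are assumptions for this sketch, not the PR's code):

 package e2e

 import (
     "fmt"

     "sigs.k8s.io/cluster-api/test/framework/clusterctl"
 )

 // Hypothetical helper: pass the IPAM provider as "name:version" so the
 // post-upgrade install picks the intended release.
 func withVersionedIPAM(input clusterctl.InitInput, ipamRelease string) clusterctl.InitInput {
     input.IPAMProviders = []string{fmt.Sprintf("metal3:%s", ipamRelease)}
     return input
 }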

@adilGhaffarDev
Member

I don't get it. We have proper tags there. What is the issue? 🤔

This is failing because it attempts to find the metadata.yaml file in the v1.9.4 release of IPAM, treating it as a CAPI provider. However, our IPAM was not a provider in v1.9, so the file does not exist in that release (see https://github.com/metal3-io/ip-address-manager/releases/tag/v1.9.4). Starting from v1.10, it will be available.

@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from 5dd8f6e to 30f2e5f Compare March 13, 2025 10:44
@lentzi90
Member Author

This is failing because it attempts to find the metadata.yaml file in the v1.9.4 release of IPAM, treating it as a CAPI provider. However, our IPAM was not a provider in v1.9, so the file does not exist in that release (see https://github.com/metal3-io/ip-address-manager/releases/tag/v1.9.4). Starting from v1.10, it will be available.

Good catch! We will need to do it with an override then I guess?

@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from 30f2e5f to 70bd8c9 Compare March 13, 2025 10:52
@adilGhaffarDev
Member

This is failing because it attempts to find the metadata.yaml file in the v1.9.4 release of IPAM, treating it as a CAPI provider. However, our IPAM was not a provider in v1.9, so the file does not exist in that release (see https://github.com/metal3-io/ip-address-manager/releases/tag/v1.9.4). Starting from v1.10, it will be available.

Good catch! We will need to do it with an override then I guess?

yes

@lentzi90
Member Author

I wonder how much difference there really is in the manifests. Going to do a test run with just the added metadata and see how it goes.
/test metal3-e2e-clusterctl-upgrade-test-main

@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from 70bd8c9 to 13885d1 Compare March 14, 2025 07:55
@lentzi90
Member Author

/test metal3-e2e-clusterctl-upgrade-test-main

@adilGhaffarDev
Member

This is probably still going to fail because the code that resolves these markers ({go://github.com/metal3-io/[email protected]}) does not check sourcePath. It checks the metadata in the actual release, to make sure a valid release is available.

@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from 13885d1 to 09ad410 Compare March 14, 2025 08:56
@lentzi90
Member Author

/test metal3-e2e-clusterctl-upgrade-test-main

@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from 09ad410 to f62fdf8 Compare March 14, 2025 09:51
@lentzi90
Member Author

/test metal3-e2e-clusterctl-upgrade-test-main

@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from f62fdf8 to 3b3c78b Compare March 14, 2025 12:01
@lentzi90
Member Author

/test metal3-e2e-clusterctl-upgrade-test-main

The Metal3 IPAM was previously bundled with CAPM3. Now we deploy it
separately as a CAPI IPAM provider. In clusterctl upgrade tests going
from a version where IPAM is bundled, to a version where it is not, we
must install it after the upgrade.
This commit adds a post upgrade hook to the clusterctl upgrade tests
that installs the Metal3 IPAM.

Signed-off-by: Lennart Jern <[email protected]>
@lentzi90 lentzi90 force-pushed the lentzi90/clusterctl-upgrade-ipam branch from 3b3c78b to d97c33f Compare March 14, 2025 12:32
@lentzi90
Member Author

/test metal3-e2e-clusterctl-upgrade-test-main

@metal3-io-bot
Contributor

@lentzi90: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
metal3-e2e-clusterctl-upgrade-test-main d97c33f link false /test metal3-e2e-clusterctl-upgrade-test-main

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
