Add support for TCP_UDP to NLB TargetGroups and Listeners #2275

Open
wants to merge 2 commits into base: main

Conversation

amorey

@amorey amorey commented Oct 7, 2021

Issue

#1608 (comment)

Description

Previously, aws-load-balancer-controller ignored extra overlapping
ServicePorts defined in the Kubernetes Service spec when the external port
numbers were the same, even if the protocols were different (e.g. TCP:53,
UDP:53).

This behavior prevented users from exposing services that support both TCP
and UDP on the same external load balancer port number.

This patch solves the problem by detecting when a user defines multiple
ServicePorts for the same external load balancer port number, one using TCP
and one using UDP. In that situation, a TCP_UDP TargetGroup and Listener are
created and the SecurityGroup rules are updated accordingly. If more than two
ServicePorts are defined, only the first two mergeable ServicePorts are used;
otherwise, only the first ServicePort is used.
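
For illustration only, here is a minimal, self-contained Go sketch of the merge rule described above. This is not the controller's actual code: the function and variable names are made up, and it matches on the Service port number, whereas the patch itself compares NodePorts (see the diff excerpt further down).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// mergeTCPAndUDP collapses a TCP ServicePort and a UDP ServicePort that share the same
// external port number into one entry with the pseudo-protocol "TCP_UDP", which the
// controller then maps to a TCP_UDP Listener and TargetGroup. Mergeable TCP/UDP pairs
// are collapsed; all other ports pass through unchanged.
func mergeTCPAndUDP(ports []corev1.ServicePort) []corev1.ServicePort {
	merged := make([]corev1.ServicePort, 0, len(ports))
	consumed := make(map[int]bool)
	for i, p := range ports {
		if consumed[i] {
			continue
		}
		out := p
		for j := i + 1; j < len(ports); j++ {
			q := ports[j]
			pair := (p.Protocol == corev1.ProtocolTCP && q.Protocol == corev1.ProtocolUDP) ||
				(p.Protocol == corev1.ProtocolUDP && q.Protocol == corev1.ProtocolTCP)
			if !consumed[j] && q.Port == p.Port && pair {
				out.Protocol = corev1.Protocol("TCP_UDP")
				consumed[j] = true
				break
			}
		}
		merged = append(merged, out)
	}
	return merged
}

func main() {
	ports := []corev1.ServicePort{
		{Name: "dns-tcp", Protocol: corev1.ProtocolTCP, Port: 53},
		{Name: "dns-udp", Protocol: corev1.ProtocolUDP, Port: 53},
	}
	for _, p := range mergeTCPAndUDP(ports) {
		fmt.Printf("%s %d/%s\n", p.Name, p.Port, p.Protocol)
	}
}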

Checklist

  • Added tests that cover your change (if possible)
  • Added/modified documentation as required (such as the README.md, or the docs directory)
  • Manually tested
  • Made sure the title of the PR is a good description that can go into the release notes

BONUS POINTS checklist: complete for good vibes and maybe prizes?! 🤯

  • Backfilled missing tests for code in same general area 🎉
  • Refactored something and made the world a better place 🌟

@k8s-ci-robot
Contributor

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
  • If you signed the CLA as a corporation, please sign in with your organization's credentials at https://identity.linuxfoundation.org/projects/cncf to be authorized.
  • If you have done the above and are still having issues with the CLA being reported as unsigned, please log a ticket with the Linux Foundation Helpdesk: https://support.linuxfoundation.org/
  • Should you encounter any issues with the Linux Foundation Helpdesk, send a message to the backup e-mail support address at: [email protected]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Oct 7, 2021
@k8s-ci-robot
Contributor

Welcome @amorey!

It looks like this is your first PR to kubernetes-sigs/aws-load-balancer-controller 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/aws-load-balancer-controller has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Oct 7, 2021
@k8s-ci-robot
Contributor

Hi @amorey. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Oct 7, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: amorey
To complete the pull request process, please assign kishorj after the PR has been reviewed.
You can assign the PR to them by writing /assign @kishorj in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Yasumoto
Contributor

Yasumoto commented Oct 7, 2021

@amorey Thanks for taking this on! I'm going to try to test it out 🎉

In the meantime any chance you could sign the CLA? 🙏🏼

@amorey
Author

amorey commented Oct 7, 2021

@Yasumoto Just signed the CLA. Let me know if you have any feedback!

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Oct 7, 2021
@Yasumoto
Contributor

Yasumoto commented Oct 8, 2021

Hm, it looks like the NLB was created successfully, but I think I'm running into aws/containers-roadmap#512

I'm getting this error (slightly edited for brevity) when listening on port 514 on TCP+UDP (for syslog)

Error: UPGRADE FAILED: failed to create resource: Service "vector01" is invalid: spec.ports: Invalid value: []core.ServicePort{Name:"syslog-tcp", Protocol:"TCP", AppProtocol:(*string)(nil), Port:514, TargetPort:intstr.IntOrString{Type:0, IntVal:514, StrVal:""}, NodePort:0}, core.ServicePort{Name:"syslog-udp", Protocol:"UDP", AppProtocol:(*string)(nil), Port:514, TargetPort:intstr.IntOrString{Type:0, IntVal:514, StrVal:""}, NodePort:0}}: may not contain more than 1 protocol when type is 'LoadBalancer'

Which I think is from a gated validation check. I'm running on EKS, so not entirely sure if I can tweak the right knobs to turn that on, so suggestions appreciated! 🙇🏼

[edit]

And to be clear, this does look like it worked correctly! Curious how the Service object made it far enough to get processed by the Controller tho despite the validation failing, since I don't see a corresponding Service object in the cluster 🤔

❯ aws elbv2 --region=us-west-2 describe-listeners --load-balancer-arn arn:aws:elasticloadbalancing:us-west-2:891116894530:loadbalancer/net/vector01/e5af38562e88392a | jq .Listeners[1]
{
  "ListenerArn": "arn:aws:elasticloadbalancing:us-west-2:891116894530:listener/net/vector01/e5af38562e88392a/2be85133dc56b9c2",
  "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:891116894530:loadbalancer/net/vector01/e5af38562e88392a",
  "Port": 514,
  "Protocol": "TCP_UDP",
  "DefaultActions": [
    {
      "Type": "forward",
      "TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:891116894530:targetgroup/k8s-sw56prod-vector01-3f6a6cb913/7420c8dac5b4f50c",
      "Order": 1,
      "ForwardConfig": {
        "TargetGroups": [
          {
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:891116894530:targetgroup/k8s-sw56prod-vector01-3f6a6cb913/7420c8dac5b4f50c"
          }
        ]
      }
    }
  ]
}
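
For reference, a rough Go sketch (illustrative only; the real Service was apparently applied via a Helm chart, and the names below simply mirror the error message) of the kind of spec that hits this validation unless the MixedProtocolLBService feature gate is enabled:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A type=LoadBalancer Service exposing port 514 twice, once per protocol. Without the
	// MixedProtocolLBService feature gate, the apiserver rejects this spec with
	// "may not contain more than 1 protocol when type is 'LoadBalancer'".
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "vector01"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeLoadBalancer,
			Ports: []corev1.ServicePort{
				{Name: "syslog-tcp", Protocol: corev1.ProtocolTCP, Port: 514, TargetPort: intstr.FromInt(514)},
				{Name: "syslog-udp", Protocol: corev1.ProtocolUDP, Port: 514, TargetPort: intstr.FromInt(514)},
			},
		},
	}
	fmt.Printf("%s exposes %d ports on the load balancer\n", svc.Name, len(svc.Spec.Ports))
}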

@Yasumoto
Contributor

Yasumoto commented Oct 8, 2021

Yep, looks like it's very new, and disabled by default in upstream 1.20/1.21. Just to confirm I'm trying to see what my EKS apiserver was started with, but seems reasonable this wouldn't be enabled until after this lands.

@amorey
Author

amorey commented Oct 8, 2021

Ahh sorry, I forgot to mention that you have to enable the MixedProtocolLBService feature gate in order for everything to work. I'm using a self-managed cluster set up with kops and I have it enabled there but I haven't tried with EKS and don't know how to enable feature flags there.

What's the next step?

@TBBle
Contributor

TBBle commented Oct 8, 2021

The failure log came from an update. I assume you added the port during the update operation?

I'm wondering if the AWS Load Balancer Controller somehow saw the updated structure before the update passed validation, acted on it, and did not see the rollback, so did not remove/revert the NLB.

If the feature gate is disabled, we should never see such TCP_UDP NLBs, but unless something else happened, it looks like an underlying issue (not related to this code specifically) if AWS LB Controller sees and acts on updates that won't or didn't pass validation, and hence were never persisted.

A similar validation path is used for example if you remove all the ports from a Service, so it might be useful to confirm that if you do that in an update, AWS LB Controller sees that change, in which case it's a lower-level issue which should be pursued separately.


Apart from that weirdness, I'd say this PR needs to be merged before EKS enables that feature gate (if they agree to do so while it's Alpha), as the feature gate's job is to reject such Services before the supporting infrastructure is ready. It won't go Beta (i.e. enabled by default) until

  • All of the major clouds support this or indicate non-support properly
  • Kube-proxy does not proxy on ports that are in an error state

and without this PR, I don't see any code here (or in the Cloud Provider AWS version) that would reject a mixed-protocol LB; instead, I think it would end up generating a bad request for an LB with both TCP and UDP listeners on the same port, which would be "improper" non-support.

That suggests the other thing that should be done before that feature gate goes Beta is adding code to the Cloud Provider AWS implementation of LB support to handle and reject such Services, ideally directing people to use this Controller instead for that and other neat features.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 2, 2021
@linux-foundation-easycla

linux-foundation-easycla bot commented Dec 2, 2021

CLA Signed

The committers are authorized under a signed CLA.

  • ✅ M00nF1sh (6669aa8237e5e1e19f73071257f3dd9d78f10d7b)
  • ✅ Andres Morey (c906136b1795649389266b6c257fc9d7b9e4049f)

@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 2, 2021
M00nF1sh and others added 2 commits February 1, 2022 16:21
if _, ok := mergeableProtocols[port.Protocol]; !ok {
	continue
}
if port.NodePort == port0.NodePort && port.Protocol != port0.Protocol {
Contributor

I think that if this is an nlb-ip LB, NodePort is the wrong test, as the code in model_build_target_group.go will be using the TargetPort as its destination, e.g. https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/pkg/ingress/model_build_target_group.go#L59-L62

The code that processes the annotations and determines the target type happens much later (the call to buildTargetGroup via buildListener immediately after this method is called), so addressing this might require pulling the port-merging to later, or the annotation-reading earlier.

Contributor

@TBBle TBBle Feb 1, 2022

IP-target mode is going to introduce a difficult edge-case:

  • The target port can be named
  • The port names are unique in the service, i.e. a TCP and UDP port will have different names, even if they end up at the same port number on the target Pod.
  • Different pods can have different numeric values for the same target port

hence, it's possible to produce the situation that some of the pods have their TCP and UDP listeners on the same port, and some of the pods have their TCP and UDP listeners on different ports.

At this level, a target port equality test would never pass (Edit: I had the logic backwards earlier), as it'd be seeing the names; numeric lookup for named ports is done later ((m *defaultNetworkingManager) computeNumericalPorts for bindings, and implicitly by the Endpoints and consumed by (m *defaultResourceManager) registerPodEndpoints, I think).

So I don't see a way to identify that any pair of named target ports in ip-target mode can be merged without some further hint from the user that, e.g. dns-udp and dns-tcp targetPorts will be the same port when resolved on all Pods in the Service; the same service port might be a good hint, but maybe not reliable?

Contributor

@TBBle TBBle Feb 1, 2022

It should be possible for the logic in (m *defaultResourceManager) registerPodEndpoints, and the defaultEndpointResolver APIs it calls, to validate that when given a numeric port, for TCP_UDP two ports exist on the service, and for non-TCP_UDP the right one is seen (which means adding the protocol to the ServiceRef in TargetGroupBinding, as right now it's not specified whether the numeric port is TCP or UDP).

When given a named port for TCP_UDP (if a way can be found to reliably match/merge them), it really should be a pair of names (as the TCP and UDP ports will have different names) and validate that both exist on the service.

That suggests that ServiceRef in TargetGroupBinding would need to be able to list multiple ports.

And then for TCP_UDP, exclude any resulting endpoints where the resulting port for TCP and UDP are different. Right now that won't be checked: for a numeric port, it'll take the first port name that has the right port number (irrespective of TCP or UDP), and for a named port, only one name is currently preserved through the pipeline.

Contributor

@TBBle TBBle Feb 1, 2022

The simplest way forward, of course, is to initially limit TCP_UDP support to instance-target NLBs pending further design work. That still suffers from my first comment here, that we don't know at this point that it's an instance-target NLB.

Contributor

@TBBle TBBle Feb 1, 2022

Just to throw another small spanner in the works, spec.allocateLoadBalancerNodePorts may allow the user to make this value (port.NodePort and port0.NodePort) be 0.

That only makes sense for ip-target NLBs though, as instance-target NLBs rely on a NodePort. I'm not sure if the AWS Load Balancer Controller is going to be able to override that setting for instance-target NLBs, but a NodePort equality test should probably never consider two NodePort==0 ports as equal, to avoid making an invalid situation worse.
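
A tiny sketch of that suggested guard (the helper name is made up for illustration; this is not the patch's code): two ports are only treated as mergeable when both have an allocated NodePort.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// mergeableByNodePort treats two ServicePorts as candidates for a TCP_UDP merge only
// when both have an allocated (non-zero) NodePort, so that
// spec.allocateLoadBalancerNodePorts=false can never make two ports look "equal".
func mergeableByNodePort(a, b corev1.ServicePort) bool {
	if a.NodePort == 0 || b.NodePort == 0 {
		return false
	}
	return a.NodePort == b.NodePort && a.Protocol != b.Protocol
}

func main() {
	tcp := corev1.ServicePort{Protocol: corev1.ProtocolTCP, Port: 53}
	udp := corev1.ServicePort{Protocol: corev1.ProtocolUDP, Port: 53}
	fmt.Println(mergeableByNodePort(tcp, udp)) // false: unallocated NodePorts never match
}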

	continue
}
if port.NodePort == port0.NodePort && port.Protocol != port0.Protocol {
	port0.Protocol = corev1.Protocol("TCP_UDP")
Contributor

@TBBle TBBle Feb 1, 2022

I can't add a comment there, but will this require flipping the test tgProtocol != elbv2model.ProtocolUDP in buildListenerSpec for TLS support to be tgProtocol == elbv2model.ProtocolTCP? (Not sure why the existing test would allow SCTP to be turned into TLS, but perhaps !=UDP && != TCP_UDP is more consistent with current behaviour.) Otherwise a merged TCP_UDP service would be transformed destructively (silently losing the UDP port) into elbv2model.ProtocolTLS if provided certificate ARNs and it matched the SSL ports.

In fact, it might be better to throw an error if that case happens, as the merge is between a TCP service that should be converted to a TLS service, and a UDP service that cannot be so-converted, and I don't see anything in the docs that suggests a TLS_UDP protocol option, or a TLS_DTLS protocol option which would be even nicer.
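
A small illustrative sketch of that flipped condition (local stub types standing in for the elbv2model constants; the real check lives in buildListenerSpec): only a plain TCP target group protocol is ever eligible for a TLS upgrade.

package main

import "fmt"

// Protocol is a local stand-in for the controller's elbv2model protocol type.
type Protocol string

const (
	ProtocolTCP    Protocol = "TCP"
	ProtocolUDP    Protocol = "UDP"
	ProtocolTCPUDP Protocol = "TCP_UDP"
)

// eligibleForTLSUpgrade reflects the suggestion above: only a plain TCP target group
// protocol may be upgraded to TLS, so a merged TCP_UDP listener is never silently
// converted (and its UDP half never silently dropped).
func eligibleForTLSUpgrade(tgProtocol Protocol) bool {
	return tgProtocol == ProtocolTCP
}

func main() {
	for _, p := range []Protocol{ProtocolTCP, ProtocolUDP, ProtocolTCPUDP} {
		fmt.Printf("%-8s -> TLS upgrade allowed: %v\n", p, eligibleForTLSUpgrade(p))
	}
}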


TLS and UDP are on different levels. So that seems like a non-problem.

@TBBle
Contributor

TBBle commented Feb 1, 2022

Also, quick note, it looks like your rebase has ended up with an empty-commit duplicate of 9b3880f (see f78cf1e)

Comment on lines +1058 to +1067
Ports: []elbv2api.NetworkingPort{
	{
		Protocol: &networkingProtocolTCP,
		Port:     &port80,
	},
	{
		Protocol: &networkingProtocolUDP,
		Port:     &port80,
	},
},
Contributor

@TBBle TBBle Feb 1, 2022

Looking at this example, I notice that NetworkingProtocolTCP_UDP has been added to the TargetGroupBindings type-list, but we haven't actually supported it, instead immediately breaking it up into a pair of ports, TCP and UDP, when creating a TargetGroupBinding resource for the LB in buildTargetGroupBindingNetworking.

This might catch out users who're directly creating their own TargetGroupBinding resources, as the spec points back to that type, although the docs don't actually list the possible values (but probably should, as they do for TargetType, for example).

It would be more consistent to either not add NetworkingProtocolTCP_UDP (as it's only used briefly inside buildTargetGroupBindingNetworking), or fully support it in computePermissionsForPeerPort in pkg/targetgroupbinding/networking_manager.go, which (on brief glance) appears to be the only place that cares about this ports list.

The former is certainly easier, and requiring such users, if using a TCP_UDP target group, to specify port lists for both TCP and UDP separately doesn't seem like a great hardship. That would also simplify buildTargetGroupBindingNetworking by merging the two switch statements.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 9, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 11, 2022
@z0rc

z0rc commented Dec 12, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 12, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 12, 2023
@z0rc

z0rc commented Mar 12, 2023

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 12, 2023
@celliso1

celliso1 commented May 3, 2023

Some public attention to TCP_UDP support was directed to #1608 instead of here. This feature is still important, especially for the SIP protocol. It is helpful if SIP/UDP can switch to SIP/TCP when a message exceeds MTU, at the same load balancer IP address.

@joegraviton

+1 for this feature. Some info for our use case:

Before this feature lands, has anyone found an easy workaround?

@TBBle
Contributor

TBBle commented May 8, 2023

As a workaround, you might be able to use an externally-managed TCP_UDP LB and bind it to a non-LB Service, see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/use_cases/self_managed_lb/ and https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/targetgroupbinding/targetgroupbinding/.

The same suggestion was made in #2759 (comment) by someone who knows more about the AWS LB Controller, so it should work.

However, based on this PR's implementation, you might need to explicitly create two NetworkingPort entries in your TargetGroupBinding, since you need one for TCP and one for UDP.

I think in terms of this PR, having NetworkingPort support TCP_UDP (and being the default) would be better since it would make the obvious case work by default, but that's probably a separate discussion (and either a separate PR, or a replacement/rework of this PR).
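
As a rough sketch of that workaround (local stub types that only mirror the elbv2api shapes seen in the diff hunk above; a real TargetGroupBinding would normally be written as YAML), the networking rules would simply list the same port twice, once per protocol:

package main

import "fmt"

// Simplified stand-ins for the elbv2api types referenced in this PR's diff; the real
// types live in the controller's elbv2 API package.
type NetworkingProtocol string

const (
	NetworkingProtocolTCP NetworkingProtocol = "TCP"
	NetworkingProtocolUDP NetworkingProtocol = "UDP"
)

type NetworkingPort struct {
	Protocol *NetworkingProtocol
	Port     *int32
}

func main() {
	tcp, udp := NetworkingProtocolTCP, NetworkingProtocolUDP
	port := int32(514)

	// For a TCP_UDP target group managed outside the controller, list the port twice,
	// once per protocol, rather than relying on a TCP_UDP networking protocol value.
	ports := []NetworkingPort{
		{Protocol: &tcp, Port: &port},
		{Protocol: &udp, Port: &port},
	}
	for _, p := range ports {
		fmt.Printf("allow %s on port %d\n", *p.Protocol, *p.Port)
	}
}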

@artyom-p

Any progress?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 19, 2024
@TBBle
Contributor

TBBle commented Jan 20, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2024
@rakesh-von

any progress?

@tylern91

@amorey Any progress on this feature? I believe many people are waiting for this feature to be released.

@amorey
Author

amorey commented Mar 25, 2024

@tylern91 Sorry, I haven't looked at this in a while. It would have taken a while to implement the reviewer's suggestions, so I kept using my custom version, and since then it's fallen behind master. If someone else wants to pick up the baton I'd be happy to help. Otherwise, I'm not sure I'll have the time to work on this for a while.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 23, 2024
@z0rc

z0rc commented Jun 23, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 23, 2024
@taliesins

@oliviassss - is this something you could help us with? Not sure why this issue appears to be abandoned. Being able to support HTTP3 is probably a good reason to prioritise getting this ticket done.

@lyda

lyda commented Aug 12, 2024

Any progress on this? Is help needed?

@lyda

lyda commented Oct 18, 2024

I've rebased this in #3807. Will review @TBBle's feedback.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 16, 2025