Allow to prefix provisioningClassName to filter provisioning requests #7676

Merged
27 changes: 24 additions & 3 deletions cluster-autoscaler/FAQ.md
@@ -629,6 +629,19 @@ When using this class, Cluster Autoscaler performs following actions:
Adds a Provisioned=True condition to the ProvReq if capacity is available.
Adds a BookingExpired=True condition when the 10-minute reservation period expires.

Since Cluster Autoscaler version 1.33, it is possible to configure the autoscaler
to process only a subset of check capacity ProvisioningRequests and ignore the rest.
This should be done with caution, by specifying the `--check-capacity-processor-instance=<name>` flag.
The ProvReq's Parameters map should then contain the key "processorInstance" with a value equal to the configured instance name.

This makes it possible to run two Cluster Autoscalers in the cluster, but the second instance (likely the one with the configured instance name)
**should only** handle check capacity ProvisioningRequests and must not overlap node groups with the main instance.
It is the user's responsibility to ensure that the capacity checks do not overlap.
Best-effort atomic ProvisioningRequests processing is disabled in the instance that has this flag set.

For backwards compatibility, it is possible to differentiate ProvReqs by prefixing provisioningClassName with the instance name,
but this is **not recommended** and will be removed in CA 1.35.
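
For illustration (not part of this diff), a check capacity ProvReq targeting a CA instance started with
`--check-capacity-processor-instance=capacity-checker` might look like the sketch below; the metadata name,
pod count, and PodTemplate reference are hypothetical:

```yaml
apiVersion: autoscaling.x-k8s.io/v1
kind: ProvisioningRequest
metadata:
  name: capacity-check-example        # hypothetical name
spec:
  provisioningClassName: check-capacity.autoscaling.x-k8s.io
  parameters:
    # Must match the --check-capacity-processor-instance value of the
    # CA instance that should handle this request.
    processorInstance: "capacity-checker"
  podSets:
  - count: 4
    podTemplateRef:
      name: capacity-check-template   # hypothetical PodTemplate
```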

* `best-effort-atomic-scale-up.autoscaling.x-k8s.io` (supported from Cluster Autoscaler version 1.30.2 or later).
When using this class, Cluster Autoscaler performs the following actions:

@@ -735,12 +748,12 @@ setting the following flag in your Cluster Autoscaler configuration:
3. **Batch Size**: Set the maximum number of CheckCapacity ProvisioningRequests
to process in a single iteration by setting the following flag in your Cluster
Autoscaler configuration:
`--max-batch-size=<batch-size>`. The default value is 10.
`--check-capacity-provisioning-request-max-batch-size=<batch-size>`. The default value is 10.

4. **Batch Timebox**: Set the maximum time in seconds that Cluster Autoscaler will
spend processing CheckCapacity ProvisioningRequests in a single iteration by
setting the following flag in your Cluster Autoscaler configuration:
`--batch-timebox=<timebox>`. The default value is 10s.
`--check-capacity-provisioning-request-batch-timebox=<timebox>`. The default value is 10s.
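
For example (not part of this diff), a batched check capacity configuration combining the two flags above with the
batch-processing toggle from the flag table below might use a fragment like this; the values are illustrative, not recommendations:

```sh
--check-capacity-batch-processing=true \
--check-capacity-provisioning-request-max-batch-size=20 \
--check-capacity-provisioning-request-batch-timebox=15s
```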

****************

@@ -973,13 +986,15 @@ The following startup parameters are supported for cluster autoscaler:
| `bulk-mig-instances-listing-enabled` | Fetch GCE mig instances in bulk instead of per mig | |
| `bypassed-scheduler-names` | Names of schedulers to bypass. If set to non-empty value, CA will not wait for pods to reach a certain age before triggering a scale-up. | |
| `check-capacity-batch-processing` | Whether to enable batch processing for check capacity requests. | |
| `check-capacity-processor-instance` | Name of the processor instance. Only ProvisioningRequests that define this name in their parameters with the key "processorInstance" will be processed by this CA instance. It only refers to check capacity ProvisioningRequests, but if not empty, best-effort atomic ProvisioningRequests processing is disabled in this instance. Not recommended: Until CA 1.35, ProvisioningRequests with this name as prefix in their class will also be processed. | |
| `check-capacity-provisioning-request-batch-timebox` | Maximum time to process a batch of provisioning requests. | 10s |
| `check-capacity-provisioning-request-max-batch-size` | Maximum number of provisioning requests to process in a single batch. | 10 |
| `cloud-config` | The path to the cloud provider configuration file. Empty string for no configuration file. | |
| `cloud-provider` | Cloud provider type. Available values: [aws,azure,gce,alicloud,cherryservers,cloudstack,baiducloud,magnum,digitalocean,exoscale,externalgrpc,huaweicloud,hetzner,oci,ovhcloud,clusterapi,ionoscloud,kamatera,kwok,linode,bizflycloud,brightbox,equinixmetal,vultr,tencentcloud,civo,scaleway,rancher,volcengine] | "gce" |
| `cloud-provider-gce-l7lb-src-cidrs` | CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks | 130.211.0.0/22,35.191.0.0/16 |
| `cloud-provider-gce-lb-src-cidrs` | CIDRs opened in GCE firewall for L4 LB traffic proxy & health checks | 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 |
| `cluster-name` | Autoscaled cluster name, if available | |
| `cluster-snapshot-parallelism` | Maximum parallelism of cluster snapshot creation. | 16 |
| `clusterapi-cloud-config-authoritative` | Treat the cloud-config flag authoritatively (do not fallback to using kubeconfig flag). ClusterAPI only | |
| `cordon-node-before-terminating` | Should CA cordon nodes before terminating during downscale process | |
| `cores-total` | Minimum and maximum number of cores in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. | "0:320000" |
@@ -1015,7 +1030,13 @@ The following startup parameters are supported for cluster autoscaler:
| `kube-client-qps` | QPS value for kubernetes client. | 5 |
| `kubeconfig` | Path to kubeconfig file with authorization and master location information. | |
| `kubernetes` | Kubernetes master location. Leave blank for default | |
| `lease-resource-name` | The lease resource to use in leader election. | "cluster-autoscaler" |
| `leader-elect` | Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability. | true |
| `leader-elect-lease-duration` | The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. | 15s |
| `leader-elect-renew-deadline` | The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than the lease duration. This is only applicable if leader election is enabled. | 10s |
| `leader-elect-resource-lock` | The type of resource object that is used for locking during leader election. Supported options are 'leases'. | "leases" |
| `leader-elect-resource-name` | The name of resource object that is used for locking during leader election. | "cluster-autoscaler" |
| `leader-elect-resource-namespace` | The namespace of resource object that is used for locking during leader election. | |
| `leader-elect-retry-period` | The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. | 2s |
| `log-backtrace-at` | when logging hits line file:N, emit a stack trace | :0 |
| `log-dir` | If non-empty, write log files in this directory (no effect when -logtostderr=true) | |
| `log-file` | If non-empty, use this log file (no effect when -logtostderr=true) | |
5 changes: 5 additions & 0 deletions cluster-autoscaler/config/autoscaling_options.go
@@ -313,6 +313,11 @@ type AutoscalingOptions struct {
DynamicResourceAllocationEnabled bool
// ClusterSnapshotParallelism is the maximum parallelism of cluster snapshot creation.
ClusterSnapshotParallelism int
// CheckCapacityProcessorInstance is the name of the processor instance.
// Only ProvisioningRequests that define this name in their parameters with the key "processorInstance" will be processed by this CA instance.
// It only refers to check capacity ProvisioningRequests, but if not empty, best-effort atomic ProvisioningRequests processing is disabled in this instance.
// Not recommended: Until CA 1.35, ProvisioningRequests with this name as prefix in their class will also be processed.
CheckCapacityProcessorInstance string
}

// KubeClientOptions specify options for kube client
6 changes: 4 additions & 2 deletions cluster-autoscaler/main.go
@@ -283,6 +283,7 @@ var (
forceDeleteLongUnregisteredNodes = flag.Bool("force-delete-unregistered-nodes", false, "Whether to enable force deletion of long unregistered nodes, regardless of the min size of the node group they belong to.")
enableDynamicResourceAllocation = flag.Bool("enable-dynamic-resource-allocation", false, "Whether logic for handling DRA (Dynamic Resource Allocation) objects is enabled.")
clusterSnapshotParallelism = flag.Int("cluster-snapshot-parallelism", 16, "Maximum parallelism of cluster snapshot creation.")
checkCapacityProcessorInstance = flag.String("check-capacity-processor-instance", "", "Name of the processor instance. Only ProvisioningRequests that define this name in their parameters with the key \"processorInstance\" will be processed by this CA instance. It only refers to check capacity ProvisioningRequests, but if not empty, best-effort atomic ProvisioningRequests processing is disabled in this instance. Not recommended: Until CA 1.35, ProvisioningRequests with this name as prefix in their class will also be processed.")
)

func isFlagPassed(name string) bool {
@@ -464,6 +465,7 @@ func createAutoscalingOptions() config.AutoscalingOptions {
ForceDeleteLongUnregisteredNodes: *forceDeleteLongUnregisteredNodes,
DynamicResourceAllocationEnabled: *enableDynamicResourceAllocation,
ClusterSnapshotParallelism: *clusterSnapshotParallelism,
CheckCapacityProcessorInstance: *checkCapacityProcessorInstance,
}
}

@@ -539,7 +541,7 @@ func buildAutoscaler(context ctx.Context, debuggingSnapshotter debuggingsnapshot
return nil, nil, err
}

ProvisioningRequestInjector, err = provreq.NewProvisioningRequestPodsInjector(restConfig, opts.ProvisioningRequestInitialBackoffTime, opts.ProvisioningRequestMaxBackoffTime, opts.ProvisioningRequestMaxBackoffCacheSize, opts.CheckCapacityBatchProcessing)
ProvisioningRequestInjector, err = provreq.NewProvisioningRequestPodsInjector(restConfig, opts.ProvisioningRequestInitialBackoffTime, opts.ProvisioningRequestMaxBackoffTime, opts.ProvisioningRequestMaxBackoffCacheSize, opts.CheckCapacityBatchProcessing, opts.CheckCapacityProcessorInstance)
if err != nil {
return nil, nil, err
}
@@ -558,7 +560,7 @@ func buildAutoscaler(context ctx.Context, debuggingSnapshotter debuggingsnapshot

scaleUpOrchestrator := provreqorchestrator.NewWrapperOrchestrator(provreqOrchestrator)
opts.ScaleUpOrchestrator = scaleUpOrchestrator
provreqProcesor := provreq.NewProvReqProcessor(client)
provreqProcesor := provreq.NewProvReqProcessor(client, opts.CheckCapacityProcessorInstance)
opts.LoopStartNotifier = loopstart.NewObserversList([]loopstart.Observer{provreqProcesor})

podListProcessor.AddProcessor(provreqProcesor)
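
Given this wiring, a second CA instance dedicated to check capacity ProvReqs could be launched with something like the
fragment below (illustrative only; node-group configuration and the rest of the deployment are omitted, and the lease
name is a hypothetical choice to avoid colliding with the main instance's leader election):

```sh
./cluster-autoscaler \
  --check-capacity-processor-instance=capacity-checker \
  --check-capacity-batch-processing=true \
  --leader-elect-resource-name=cluster-autoscaler-capacity-checker
```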
49 changes: 26 additions & 23 deletions cluster-autoscaler/processors/provreq/injector.go
@@ -44,6 +44,7 @@ type ProvisioningRequestPodsInjector struct {
client *provreqclient.ProvisioningRequestClient
lastProvisioningRequestProcessTime time.Time
checkCapacityBatchProcessing bool
checkCapacityProcessorInstance string
}

// IsAvailableForProvisioning checks if the provisioning request is in the correct state for processing and provisioning has not been attempted recently.
@@ -93,16 +94,28 @@ func (p *ProvisioningRequestPodsInjector) MarkAsFailed(pr *provreqwrapper.Provis
p.UpdateLastProcessTime()
}

func (p *ProvisioningRequestPodsInjector) isSupportedClass(pr *provreqwrapper.ProvisioningRequest) bool {
return provisioningrequest.SupportedProvisioningClass(pr.ProvisioningRequest, p.checkCapacityProcessorInstance)
}

func (p *ProvisioningRequestPodsInjector) isSupportedCheckCapacityClass(pr *provreqwrapper.ProvisioningRequest) bool {
return provisioningrequest.SupportedCheckCapacityClass(pr.ProvisioningRequest, p.checkCapacityProcessorInstance)
}

func (p *ProvisioningRequestPodsInjector) shouldMarkAsAccepted(pr *provreqwrapper.ProvisioningRequest) bool {
// Don't mark as accepted the check capacity ProvReq when batch processing is enabled.
// It will be marked later, in parallel, during processing the requests.
return !p.checkCapacityBatchProcessing || !p.isSupportedCheckCapacityClass(pr)
}

// GetPodsFromNextRequest picks one ProvisioningRequest meeting the condition checked by the isSupportedClass method, marks it as accepted, and returns pods from it.
func (p *ProvisioningRequestPodsInjector) GetPodsFromNextRequest(
isSupportedClass func(*provreqwrapper.ProvisioningRequest) bool,
) ([]*apiv1.Pod, error) {
func (p *ProvisioningRequestPodsInjector) GetPodsFromNextRequest() ([]*apiv1.Pod, error) {
provReqs, err := p.client.ProvisioningRequests()
if err != nil {
return nil, err
}
for _, pr := range provReqs {
if !isSupportedClass(pr) {
if !p.isSupportedClass(pr) {
continue
}

@@ -117,16 +130,13 @@ func (p *ProvisioningRequestPodsInjector) GetPodsFromNextRequest(
p.MarkAsFailed(pr, provreqconditions.FailedToCreatePodsReason, err.Error())
continue
}
// Don't mark as accepted the check capacity ProvReq when batch processing is enabled.
// It will be marked later, in parallel, during processing the requests.
if pr.Spec.ProvisioningClassName == v1.ProvisioningClassCheckCapacity && p.checkCapacityBatchProcessing {
p.UpdateLastProcessTime()
if p.shouldMarkAsAccepted(pr) {
if err := p.MarkAsAccepted(pr); err != nil {
continue
}
return podsFromProvReq, nil
}
if err := p.MarkAsAccepted(pr); err != nil {
continue
}

p.UpdateLastProcessTime()
return podsFromProvReq, nil
}
return nil, nil
@@ -152,7 +162,7 @@ func (p *ProvisioningRequestPodsInjector) GetCheckCapacityBatch(maxPrs int) ([]P
if len(prsWithPods) >= maxPrs {
break
}
if pr.Spec.ProvisioningClassName != v1.ProvisioningClassCheckCapacity {
if !p.isSupportedCheckCapacityClass(pr) {
continue
}
if !p.IsAvailableForProvisioning(pr) {
@@ -175,15 +185,7 @@ func (p *ProvisioningRequestPodsInjector) Process(
_ *context.AutoscalingContext,
unschedulablePods []*apiv1.Pod,
) ([]*apiv1.Pod, error) {
podsFromProvReq, err := p.GetPodsFromNextRequest(
func(pr *provreqwrapper.ProvisioningRequest) bool {
_, found := provisioningrequest.SupportedProvisioningClasses[pr.Spec.ProvisioningClassName]
if !found {
klog.Warningf("Provisioning Class %s is not supported for ProvReq %s/%s", pr.Spec.ProvisioningClassName, pr.Namespace, pr.Name)
}
return found
})

podsFromProvReq, err := p.GetPodsFromNextRequest()
if err != nil {
return unschedulablePods, err
}
@@ -195,7 +197,7 @@ func (p *ProvisioningRequestPodsInjector) CleanUp() {}
func (p *ProvisioningRequestPodsInjector) CleanUp() {}

// NewProvisioningRequestPodsInjector creates a ProvisioningRequest filter processor.
func NewProvisioningRequestPodsInjector(kubeConfig *rest.Config, initialBackoffTime, maxBackoffTime time.Duration, maxCacheSize int, checkCapacityBatchProcessing bool) (*ProvisioningRequestPodsInjector, error) {
func NewProvisioningRequestPodsInjector(kubeConfig *rest.Config, initialBackoffTime, maxBackoffTime time.Duration, maxCacheSize int, checkCapacityBatchProcessing bool, checkCapacityProcessorInstance string) (*ProvisioningRequestPodsInjector, error) {
client, err := provreqclient.NewProvisioningRequestClient(kubeConfig)
if err != nil {
return nil, err
@@ -208,6 +210,7 @@ func NewProvisioningRequestPodsInjector(kubeConfig *rest.Config, initialBackoffT
clock: clock.RealClock{},
lastProvisioningRequestProcessTime: time.Now(),
checkCapacityBatchProcessing: checkCapacityBatchProcessing,
checkCapacityProcessorInstance: checkCapacityProcessorInstance,
}, nil
}
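
The `provisioningrequest.SupportedProvisioningClass` and `SupportedCheckCapacityClass` helpers used above are not part
of this diff. A minimal sketch of the check capacity matching they would need, based on the FAQ description
(parameter-key match, plus the deprecated class-name prefix match until CA 1.35), might look like the following; the
import path and the `Parameters` field type are assumptions:

```go
package provisioningrequest

import (
	v1 "k8s.io/autoscaler/cluster-autoscaler/apis/provisioningrequest/autoscaling.x-k8s.io/v1"
)

// supportedCheckCapacityClass is a sketch, not the real helper. It assumes
// v1.Parameter is a string-like type and that v1.ProvisioningClassCheckCapacity
// holds "check-capacity.autoscaling.x-k8s.io".
func supportedCheckCapacityClass(pr *v1.ProvisioningRequest, instance string) bool {
	class := pr.Spec.ProvisioningClassName
	if instance == "" {
		// No instance configured: accept every check capacity ProvReq.
		return class == v1.ProvisioningClassCheckCapacity
	}
	if class == v1.ProvisioningClassCheckCapacity {
		// Preferred matching: the "processorInstance" parameter names this instance.
		return string(pr.Spec.Parameters["processorInstance"]) == instance
	}
	// Deprecated matching, to be removed in CA 1.35: the class name is
	// prefixed with the configured instance name.
	return class == instance+v1.ProvisioningClassCheckCapacity
}
```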
