feat: capacity block support #8011

Open · wants to merge 5 commits into main
@@ -721,6 +721,22 @@ spec:
description: The ID of the AWS account that owns the capacity reservation.
pattern: ^[0-9]{12}$
type: string
reservationType:
default: default
description: The type of capacity reservation.
enum:
- default
- capacity-block
type: string
state:
default: active
description: |-
The state of the capacity reservation. A capacity reservation is considered to be expiring if it is within the EC2
reclamation window. Only capacity-block reservations may be in this state.
enum:
- active
- expiring
type: string
required:
- availabilityZone
- id
@@ -141,7 +141,7 @@ spec:
- message: label "kubernetes.io/hostname" is restricted
rule: self != "kubernetes.io/hostname"
- message: label domain "karpenter.k8s.aws" is restricted
rule: self in ["karpenter.k8s.aws/capacity-reservation-id", "karpenter.k8s.aws/ec2nodeclass", "karpenter.k8s.aws/instance-encryption-in-transit-supported", "karpenter.k8s.aws/instance-category", "karpenter.k8s.aws/instance-hypervisor", "karpenter.k8s.aws/instance-family", "karpenter.k8s.aws/instance-generation", "karpenter.k8s.aws/instance-local-nvme", "karpenter.k8s.aws/instance-size", "karpenter.k8s.aws/instance-cpu", "karpenter.k8s.aws/instance-cpu-manufacturer", "karpenter.k8s.aws/instance-cpu-sustained-clock-speed-mhz", "karpenter.k8s.aws/instance-memory", "karpenter.k8s.aws/instance-ebs-bandwidth", "karpenter.k8s.aws/instance-network-bandwidth", "karpenter.k8s.aws/instance-gpu-name", "karpenter.k8s.aws/instance-gpu-manufacturer", "karpenter.k8s.aws/instance-gpu-count", "karpenter.k8s.aws/instance-gpu-memory", "karpenter.k8s.aws/instance-accelerator-name", "karpenter.k8s.aws/instance-accelerator-manufacturer", "karpenter.k8s.aws/instance-accelerator-count"] || !self.find("^([^/]+)").endsWith("karpenter.k8s.aws")
rule: self in ["karpenter.k8s.aws/capacity-reservation-type", "karpenter.k8s.aws/capacity-reservation-id", "karpenter.k8s.aws/ec2nodeclass", "karpenter.k8s.aws/instance-encryption-in-transit-supported", "karpenter.k8s.aws/instance-category", "karpenter.k8s.aws/instance-hypervisor", "karpenter.k8s.aws/instance-family", "karpenter.k8s.aws/instance-generation", "karpenter.k8s.aws/instance-local-nvme", "karpenter.k8s.aws/instance-size", "karpenter.k8s.aws/instance-cpu", "karpenter.k8s.aws/instance-cpu-manufacturer", "karpenter.k8s.aws/instance-cpu-sustained-clock-speed-mhz", "karpenter.k8s.aws/instance-memory", "karpenter.k8s.aws/instance-ebs-bandwidth", "karpenter.k8s.aws/instance-network-bandwidth", "karpenter.k8s.aws/instance-gpu-name", "karpenter.k8s.aws/instance-gpu-manufacturer", "karpenter.k8s.aws/instance-gpu-count", "karpenter.k8s.aws/instance-gpu-memory", "karpenter.k8s.aws/instance-accelerator-name", "karpenter.k8s.aws/instance-accelerator-manufacturer", "karpenter.k8s.aws/instance-accelerator-count"] || !self.find("^([^/]+)").endsWith("karpenter.k8s.aws")
minValues:
description: |-
This field is ALPHA and can be dropped or replaced at any time
4 changes: 2 additions & 2 deletions charts/karpenter-crd/templates/karpenter.sh_nodepools.yaml
@@ -210,7 +210,7 @@ spec:
- message: label "kubernetes.io/hostname" is restricted
rule: self.all(x, x != "kubernetes.io/hostname")
- message: label domain "karpenter.k8s.aws" is restricted
rule: self.all(x, x in ["karpenter.k8s.aws/capacity-reservation-id", "karpenter.k8s.aws/ec2nodeclass", "karpenter.k8s.aws/instance-encryption-in-transit-supported", "karpenter.k8s.aws/instance-category", "karpenter.k8s.aws/instance-hypervisor", "karpenter.k8s.aws/instance-family", "karpenter.k8s.aws/instance-generation", "karpenter.k8s.aws/instance-local-nvme", "karpenter.k8s.aws/instance-size", "karpenter.k8s.aws/instance-cpu", "karpenter.k8s.aws/instance-cpu-manufacturer", "karpenter.k8s.aws/instance-cpu-sustained-clock-speed-mhz", "karpenter.k8s.aws/instance-memory", "karpenter.k8s.aws/instance-ebs-bandwidth", "karpenter.k8s.aws/instance-network-bandwidth", "karpenter.k8s.aws/instance-gpu-name", "karpenter.k8s.aws/instance-gpu-manufacturer", "karpenter.k8s.aws/instance-gpu-count", "karpenter.k8s.aws/instance-gpu-memory", "karpenter.k8s.aws/instance-accelerator-name", "karpenter.k8s.aws/instance-accelerator-manufacturer", "karpenter.k8s.aws/instance-accelerator-count"] || !x.find("^([^/]+)").endsWith("karpenter.k8s.aws"))
rule: self.all(x, x in ["karpenter.k8s.aws/capacity-reservation-type", "karpenter.k8s.aws/capacity-reservation-id", "karpenter.k8s.aws/ec2nodeclass", "karpenter.k8s.aws/instance-encryption-in-transit-supported", "karpenter.k8s.aws/instance-category", "karpenter.k8s.aws/instance-hypervisor", "karpenter.k8s.aws/instance-family", "karpenter.k8s.aws/instance-generation", "karpenter.k8s.aws/instance-local-nvme", "karpenter.k8s.aws/instance-size", "karpenter.k8s.aws/instance-cpu", "karpenter.k8s.aws/instance-cpu-manufacturer", "karpenter.k8s.aws/instance-cpu-sustained-clock-speed-mhz", "karpenter.k8s.aws/instance-memory", "karpenter.k8s.aws/instance-ebs-bandwidth", "karpenter.k8s.aws/instance-network-bandwidth", "karpenter.k8s.aws/instance-gpu-name", "karpenter.k8s.aws/instance-gpu-manufacturer", "karpenter.k8s.aws/instance-gpu-count", "karpenter.k8s.aws/instance-gpu-memory", "karpenter.k8s.aws/instance-accelerator-name", "karpenter.k8s.aws/instance-accelerator-manufacturer", "karpenter.k8s.aws/instance-accelerator-count"] || !x.find("^([^/]+)").endsWith("karpenter.k8s.aws"))
type: object
spec:
description: |-
@@ -283,7 +283,7 @@ spec:
- message: label "kubernetes.io/hostname" is restricted
rule: self != "kubernetes.io/hostname"
- message: label domain "karpenter.k8s.aws" is restricted
rule: self in ["karpenter.k8s.aws/capacity-reservation-id", "karpenter.k8s.aws/ec2nodeclass", "karpenter.k8s.aws/instance-encryption-in-transit-supported", "karpenter.k8s.aws/instance-category", "karpenter.k8s.aws/instance-hypervisor", "karpenter.k8s.aws/instance-family", "karpenter.k8s.aws/instance-generation", "karpenter.k8s.aws/instance-local-nvme", "karpenter.k8s.aws/instance-size", "karpenter.k8s.aws/instance-cpu", "karpenter.k8s.aws/instance-cpu-manufacturer", "karpenter.k8s.aws/instance-cpu-sustained-clock-speed-mhz", "karpenter.k8s.aws/instance-memory", "karpenter.k8s.aws/instance-ebs-bandwidth", "karpenter.k8s.aws/instance-network-bandwidth", "karpenter.k8s.aws/instance-gpu-name", "karpenter.k8s.aws/instance-gpu-manufacturer", "karpenter.k8s.aws/instance-gpu-count", "karpenter.k8s.aws/instance-gpu-memory", "karpenter.k8s.aws/instance-accelerator-name", "karpenter.k8s.aws/instance-accelerator-manufacturer", "karpenter.k8s.aws/instance-accelerator-count"] || !self.find("^([^/]+)").endsWith("karpenter.k8s.aws")
rule: self in ["karpenter.k8s.aws/capacity-reservation-type", "karpenter.k8s.aws/capacity-reservation-id", "karpenter.k8s.aws/ec2nodeclass", "karpenter.k8s.aws/instance-encryption-in-transit-supported", "karpenter.k8s.aws/instance-category", "karpenter.k8s.aws/instance-hypervisor", "karpenter.k8s.aws/instance-family", "karpenter.k8s.aws/instance-generation", "karpenter.k8s.aws/instance-local-nvme", "karpenter.k8s.aws/instance-size", "karpenter.k8s.aws/instance-cpu", "karpenter.k8s.aws/instance-cpu-manufacturer", "karpenter.k8s.aws/instance-cpu-sustained-clock-speed-mhz", "karpenter.k8s.aws/instance-memory", "karpenter.k8s.aws/instance-ebs-bandwidth", "karpenter.k8s.aws/instance-network-bandwidth", "karpenter.k8s.aws/instance-gpu-name", "karpenter.k8s.aws/instance-gpu-manufacturer", "karpenter.k8s.aws/instance-gpu-count", "karpenter.k8s.aws/instance-gpu-memory", "karpenter.k8s.aws/instance-accelerator-name", "karpenter.k8s.aws/instance-accelerator-manufacturer", "karpenter.k8s.aws/instance-accelerator-count"] || !self.find("^([^/]+)").endsWith("karpenter.k8s.aws")
minValues:
description: |-
This field is ALPHA and can be dropped or replaced at any time
103 changes: 103 additions & 0 deletions designs/capacity-block-support.md
@@ -0,0 +1,103 @@
# Capacity Block Support

## Overview

In v1.3.0, Karpenter introduced formal support for on-demand capacity reservations (ODCRs).
However, this support did not extend to one subset of ODCRs: Capacity Blocks.
Capacity Blocks enable users to “reserve highly sought-after GPU instances on a future date to support short duration ML workloads”.
This doc focuses on extending Karpenter’s existing ODCR feature to support Capacity Blocks.

## Goals

- Karpenter should enable users to select against Capacity Blocks when scheduling workloads
- Karpenter should discover Capacity Blocks through `ec2nodeclass.spec.capacityReservationSelectorTerms`
- Karpenter should gracefully handle Capacity Block expiration

## API Updates

We will add the `karpenter.k8s.aws/capacity-reservation-type` label, which can take on the values `default` and `capacity-block`.
This mirrors the `reservationType` field in the `ec2:DescribeCapacityReservations` response and will enable users to select capacity-block nodes via NodePool requirements or node selector terms.

```yaml
# Configure a NodePool to only be compatible with instance types with active
# capacity block reservations
kind: NodePool
apiVersion: karpenter.sh/v1
spec:
  template:
    spec:
      requirements:
      - key: karpenter.k8s.aws/capacity-reservation-type
        operator: In
        values: ['capacity-block']
---
# Configure a pod to only schedule against nodes backed by capacity blocks
kind: Pod
apiVersion: v1
spec:
  nodeSelector:
    karpenter.k8s.aws/capacity-reservation-type: capacity-block
```

Additionally, we will update the NodeClass status to reflect the reservation type and state for a given capacity reservation:

```yaml
kind: EC2NodeClass
apiVersion: karpenter.k8s.aws/v1
status:
  capacityReservations:
  - # ...
    reservationType: Enum (default | capacity-block)
    state: Enum (active | expiring)
```

No changes are required for `ec2nodeclass.spec.capacityReservationSelectorTerms`.
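
For reference, capacity blocks are discovered with the same selector semantics as other ODCRs; the reservation ID and tag values in this sketch are hypothetical:

```yaml
kind: EC2NodeClass
apiVersion: karpenter.k8s.aws/v1
spec:
  capacityReservationSelectorTerms:
  # Select a specific capacity block by reservation ID (hypothetical ID)
  - id: cr-0123456789abcdef0
  # Or discover reservations (including capacity blocks) by tag
  - tags:
      team: ml-research
```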

## Launch Behavior

Today, when Karpenter creates a NodeClaim targeting reserved capacity, it ensures the instance launches into one of the matching reservations by injecting a `karpenter.k8s.aws/capacity-reservation-id` requirement into the NodeClaim.
This requirement lets Karpenter maximize the flexibility of the CreateFleet request (minimizing the risk of ReservedCapacityExceeded errors) while ensuring it doesn’t overlaunch into any given reservation.

```yaml
kind: NodeClaim
apiVersion: karpenter.sh/v1
spec:
  requirements:
  - key: karpenter.k8s.aws/capacity-reservation-id
    operator: In
    values: ['cr-foo', 'cr-bar']
  # ...
```

Given the NodeClaim spec above, Karpenter will create launch templates for both `cr-foo` and `cr-bar`, providing both in the CreateFleet request.
However, this breaks down when we begin to mix default and capacity-block ODCRs (e.g. `cr-foo` is a default capacity reservation, and `cr-bar` is a capacity-block).
This is because the `TargetCapacitySpecificationRequest.DefaultTargetCapacityType` field in the CreateFleet request needs to be set to on-demand or capacity-block, preventing us from mixing them in a single request.
Instead, if a NodeClaim is compatible with both types of ODCRs, we must choose a subset of those ODCRs to include in the CreateFleet request.
We have the following options for prioritization when making this selection:

- Prioritize price (the subset with the “cheapest” offering)
- Prioritize flexibility (the subset with the greatest number of offerings)

Although prioritizing flexibility is desirable to reduce the risk of ReservedCapacityExceeded errors, it won’t interact well with consolidation and will result in additional node churn.
For that reason, we should prioritize the set of ODCRs with the “cheapest” offering when generating the CreateFleet request.
If there is a tie between a default and capacity-block offering, we will prioritize the capacity-block offering.
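
To make the selection concrete, here is a minimal sketch of this prioritization, assuming hypothetical `Offering` and `selectReservationSubset` names (not the actual Karpenter implementation):

```go
package main

import (
	"fmt"
	"math"
)

// Offering is a hypothetical stand-in for a reserved-capacity offering; the
// actual types in the Karpenter codebase differ.
type Offering struct {
	ReservationID   string
	ReservationType string  // "default" or "capacity-block"
	Price           float64 // effective price of the offering
}

// selectReservationSubset returns all offerings of the reservation type that
// contains the cheapest offering overall, tie-breaking in favor of
// capacity-block. Keeping every offering of the winning type preserves as much
// flexibility as possible within the single CreateFleet request, whose
// DefaultTargetCapacityType permits only one reservation type at a time.
func selectReservationSubset(offerings []Offering) []Offering {
	byType := map[string][]Offering{}
	minPrice := map[string]float64{"capacity-block": math.Inf(1), "default": math.Inf(1)}
	for _, o := range offerings {
		byType[o.ReservationType] = append(byType[o.ReservationType], o)
		if o.Price < minPrice[o.ReservationType] {
			minPrice[o.ReservationType] = o.Price
		}
	}
	// Iterating capacity-block first means a price tie keeps the capacity-block subset.
	winner, best := "", math.Inf(1)
	for _, t := range []string{"capacity-block", "default"} {
		if minPrice[t] < best {
			winner, best = t, minPrice[t]
		}
	}
	return byType[winner]
}

func main() {
	subset := selectReservationSubset([]Offering{
		{ReservationID: "cr-foo", ReservationType: "default", Price: 4.10},
		{ReservationID: "cr-bar", ReservationType: "capacity-block", Price: 4.10},
	})
	fmt.Println(subset) // tie on price: the capacity-block subset wins
}
```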

## Interruption

Although capacity blocks are modeled as ODCRs, their expiration behavior differs.
When a default ODCR expires, any instances still in use simply continue running as standard on-demand instances.
Instances launched into a capacity block reservation, on the other hand, are terminated ahead of the reservation’s end date.

From the [EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-blocks.html):

> You can use all the instances you reserved until 30 minutes before the end time of the Capacity Block.
> With 30 minutes left in your Capacity Block reservation, we begin terminating any instances that are running in the Capacity Block.
> We use this time to clean up your instances before delivering the Capacity Block to the next customer.
> We emit an event through EventBridge 10 minutes before the termination process begins.
> For more information, see Monitor Capacity Blocks using EventBridge.

Karpenter should gracefully handle this interruption by draining the nodes ahead of termination.
While we could integrate with the EventBridge event referenced above, this introduces complications when rehydrating state after a controller restart.
Instead, we will rely on the fact that interruption occurs at a fixed time relative to the end date of the capacity reservation, which is already discovered via `ec2:DescribeCapacityReservations`.
Matching the time the expiration warning event is emitted, Karpenter will begin to drain the node 10 minutes before EC2 begins reclaiming the capacity (40 minutes before the end date).
Once the reclamation period begins, Karpenter will mark the capacity reservation as expiring in the EC2NodeClass' status.
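
A minimal sketch of the timing math, assuming the reservation's end date has already been discovered via `ec2:DescribeCapacityReservations` (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

// Offsets relative to a capacity block's end date, derived from the EC2
// documentation quoted above.
const (
	reclamationWindow = 30 * time.Minute // EC2 terminates instances in this window
	drainLeadTime     = 10 * time.Minute // Karpenter drains ahead of reclamation
)

// drainAndExpiryTimes derives when Karpenter should begin draining nodes and
// when the reservation should be marked "expiring" in the NodeClass status.
// A sketch only; the controller's actual bookkeeping differs.
func drainAndExpiryTimes(endDate time.Time) (drainAt, expiringAt time.Time) {
	expiringAt = endDate.Add(-reclamationWindow)              // 30 minutes before the end date
	drainAt = endDate.Add(-reclamationWindow - drainLeadTime) // 40 minutes before the end date
	return drainAt, expiringAt
}

func main() {
	end := time.Date(2025, 7, 1, 12, 0, 0, 0, time.UTC)
	drainAt, expiringAt := drainAndExpiryTimes(end)
	fmt.Println(drainAt.UTC(), expiringAt.UTC()) // 11:20 UTC and 11:30 UTC
}
```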
2 changes: 1 addition & 1 deletion hack/validation/labels.sh
@@ -2,7 +2,7 @@

function injectDomainLabelRestrictions() {
domain=$1
rule="self.all(x, x in [\"${domain}/capacity-reservation-id\", \"${domain}/ec2nodeclass\", \"${domain}/instance-encryption-in-transit-supported\", \"${domain}/instance-category\", \"${domain}/instance-hypervisor\", \"${domain}/instance-family\", \"${domain}/instance-generation\", \"${domain}/instance-local-nvme\", \"${domain}/instance-size\", \"${domain}/instance-cpu\", \"${domain}/instance-cpu-manufacturer\", \"${domain}/instance-cpu-sustained-clock-speed-mhz\", \"${domain}/instance-memory\", \"${domain}/instance-ebs-bandwidth\", \"${domain}/instance-network-bandwidth\", \"${domain}/instance-gpu-name\", \"${domain}/instance-gpu-manufacturer\", \"${domain}/instance-gpu-count\", \"${domain}/instance-gpu-memory\", \"${domain}/instance-accelerator-name\", \"${domain}/instance-accelerator-manufacturer\", \"${domain}/instance-accelerator-count\"] || !x.find(\"^([^/]+)\").endsWith(\"${domain}\"))"
rule="self.all(x, x in [\"${domain}/capacity-reservation-type\", \"${domain}/capacity-reservation-id\", \"${domain}/ec2nodeclass\", \"${domain}/instance-encryption-in-transit-supported\", \"${domain}/instance-category\", \"${domain}/instance-hypervisor\", \"${domain}/instance-family\", \"${domain}/instance-generation\", \"${domain}/instance-local-nvme\", \"${domain}/instance-size\", \"${domain}/instance-cpu\", \"${domain}/instance-cpu-manufacturer\", \"${domain}/instance-cpu-sustained-clock-speed-mhz\", \"${domain}/instance-memory\", \"${domain}/instance-ebs-bandwidth\", \"${domain}/instance-network-bandwidth\", \"${domain}/instance-gpu-name\", \"${domain}/instance-gpu-manufacturer\", \"${domain}/instance-gpu-count\", \"${domain}/instance-gpu-memory\", \"${domain}/instance-accelerator-name\", \"${domain}/instance-accelerator-manufacturer\", \"${domain}/instance-accelerator-count\"] || !x.find(\"^([^/]+)\").endsWith(\"${domain}\"))"
message="label domain \"${domain}\" is restricted"
MSG="${message}" RULE="${rule}" yq eval '.spec.versions[0].schema.openAPIV3Schema.properties.spec.properties.template.properties.metadata.properties.labels.x-kubernetes-validations += [{"message": strenv(MSG), "rule": strenv(RULE)}]' -i pkg/apis/crds/karpenter.sh_nodepools.yaml
}
2 changes: 1 addition & 1 deletion hack/validation/requirements.sh
@@ -2,7 +2,7 @@

function injectDomainRequirementRestrictions() {
domain=$1
rule="self in [\"${domain}/capacity-reservation-id\", \"${domain}/ec2nodeclass\", \"${domain}/instance-encryption-in-transit-supported\", \"${domain}/instance-category\", \"${domain}/instance-hypervisor\", \"${domain}/instance-family\", \"${domain}/instance-generation\", \"${domain}/instance-local-nvme\", \"${domain}/instance-size\", \"${domain}/instance-cpu\", \"${domain}/instance-cpu-manufacturer\", \"${domain}/instance-cpu-sustained-clock-speed-mhz\", \"${domain}/instance-memory\", \"${domain}/instance-ebs-bandwidth\", \"${domain}/instance-network-bandwidth\", \"${domain}/instance-gpu-name\", \"${domain}/instance-gpu-manufacturer\", \"${domain}/instance-gpu-count\", \"${domain}/instance-gpu-memory\", \"${domain}/instance-accelerator-name\", \"${domain}/instance-accelerator-manufacturer\", \"${domain}/instance-accelerator-count\"] || !self.find(\"^([^/]+)\").endsWith(\"${domain}\")"
rule="self in [\"${domain}/capacity-reservation-type\", \"${domain}/capacity-reservation-id\", \"${domain}/ec2nodeclass\", \"${domain}/instance-encryption-in-transit-supported\", \"${domain}/instance-category\", \"${domain}/instance-hypervisor\", \"${domain}/instance-family\", \"${domain}/instance-generation\", \"${domain}/instance-local-nvme\", \"${domain}/instance-size\", \"${domain}/instance-cpu\", \"${domain}/instance-cpu-manufacturer\", \"${domain}/instance-cpu-sustained-clock-speed-mhz\", \"${domain}/instance-memory\", \"${domain}/instance-ebs-bandwidth\", \"${domain}/instance-network-bandwidth\", \"${domain}/instance-gpu-name\", \"${domain}/instance-gpu-manufacturer\", \"${domain}/instance-gpu-count\", \"${domain}/instance-gpu-memory\", \"${domain}/instance-accelerator-name\", \"${domain}/instance-accelerator-manufacturer\", \"${domain}/instance-accelerator-count\"] || !self.find(\"^([^/]+)\").endsWith(\"${domain}\")"
message="label domain \"${domain}\" is restricted"
MSG="${message}" RULE="${rule}" yq eval '.spec.versions[0].schema.openAPIV3Schema.properties.spec.properties.requirements.items.properties.key.x-kubernetes-validations += [{"message": strenv(MSG), "rule": strenv(RULE)}]' -i pkg/apis/crds/karpenter.sh_nodeclaims.yaml
MSG="${message}" RULE="${rule}" yq eval '.spec.versions[0].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.requirements.items.properties.key.x-kubernetes-validations += [{"message": strenv(MSG), "rule": strenv(RULE)}]' -i pkg/apis/crds/karpenter.sh_nodepools.yaml