**v1.33/giantswarm/PRODUCT.yaml** (new file, 97 additions)

# Kubernetes AI Conformance Checklist
# Notes: This checklist is based on the Kubernetes AI Conformance document.
# Participants should fill in the 'status', 'evidence', and 'notes' fields for each requirement.

metadata:
  kubernetesVersion: v1.33
  platformName: Giant Swarm Platform
  platformVersion: 1.33.0
  vendorName: Giant Swarm
  websiteUrl: https://www.giantswarm.io/
  documentationUrl: https://docs.giantswarm.io/
  productLogoUrl: https://www.giantswarm.io/assets/img/logo.svg
  description: "Giant Swarm Platform is an enterprise-grade managed Kubernetes platform for containerized applications, including stateful and stateless, AI and ML, Linux and Windows, complex and simple web apps, APIs, and backend services."

spec:
  accelerators:
    - id: dra_support
description: "Support Dynamic Resource Allocation (DRA) APIs to enable more flexible and fine-grained resource requests beyond simple counts."
level: SHOULD
status: "Implemented"
evidence:
- "https://docs.giantswarm.io/tutorials/fleet-management/cluster-management/dynamic-resource-allocation/"
notes: ""
  networking:
    - id: ai_inference
      description: "Support the Kubernetes Gateway API with an implementation for advanced traffic management for inference services, which enables capabilities like weighted traffic splitting, header-based routing (for OpenAI protocol headers), and optional integration with service meshes."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.giantswarm.io/tutorials/connectivity/gateway-api/"
      notes: ""
  schedulingOrchestration:
    - id: gang_scheduling
      description: "The platform must allow for the installation and successful operation of at least one gang scheduling solution that ensures all-or-nothing scheduling for distributed AI workloads (e.g. Kueue, Volcano, etc.) To be conformant, the vendor must demonstrate that their platform can successfully run at least one such solution."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.giantswarm.io/tutorials/fleet-management/job-management/kueue/"
      notes: ""
    - id: cluster_autoscaling
      description: "If the platform provides a cluster autoscaler or an equivalent mechanism, it must be able to scale up/down node groups containing specific accelerator types based on pending pods requesting those accelerators."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.giantswarm.io/tutorials/fleet-management/cluster-management/aws-cluster-scaling/"
        - "https://docs.giantswarm.io/tutorials/fleet-management/cluster-management/cluster-autoscaler/"
        - "https://karpenter.sh/docs/concepts/scheduling/#acceleratorsgpu-resources"
      notes: ""
    - id: pod_autoscaling
      description: "If the platform supports the HorizontalPodAutoscaler, it must function correctly for pods utilizing accelerators. This includes the ability to scale these Pods based on custom metrics relevant to AI/ML workloads."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.giantswarm.io/tutorials/fleet-management/scaling-workloads/scaling-based-on-custom-metrics"
      notes: ""
  observability:
    - id: accelerator_metrics
      description: "For supported accelerator types, the platform must allow for the installation and successful operation of at least one accelerator metrics solution that exposes fine-grained performance metrics via a standardized, machine-readable metrics endpoint. This must include a core set of metrics for per-accelerator utilization and memory usage. Additionally, other relevant metrics such as temperature, power draw, and interconnect bandwidth should be exposed if the underlying hardware or virtualization layer makes them available. The list of metrics should align with emerging standards, such as OpenTelemetry metrics, to ensure interoperability. The platform may provide a managed solution, but this is not required for conformance."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.giantswarm.io/tutorials/fleet-management/cluster-management/gpu/#monitoring"
        - "https://docs.giantswarm.io/overview/observability/configuration/"
      notes: ""
    - id: ai_service_metrics
      description: "Provide a monitoring system capable of discovering and collecting metrics from workloads that expose them in a standard format (e.g. Prometheus exposition format). This ensures easy integration for collecting key metrics from common AI frameworks and servers."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.giantswarm.io/getting-started/observe-your-clusters-and-apps/"
        - "https://docs.giantswarm.io/overview/observability/data-management/data-ingestion/"
      notes: ""
  security:
    - id: secure_accelerator_access
      description: "Ensure that access to accelerators from within containers is properly isolated and mediated by the Kubernetes resource management framework (device plugin or DRA) and container runtime, preventing unauthorized access or interference between workloads."
      level: MUST
      status: "Implemented"
      evidence:
        - "secure_accelerator_access_tests.md"
      notes: ""
  operator:
    - id: robust_controller
      description: "The platform must prove that at least one complex AI operator with a CRD (e.g., Ray, Kubeflow) can be installed and functions reliably. This includes verifying that the operator's pods run correctly, its webhooks are operational, and its custom resources can be reconciled."
      level: MUST
      status: "Implemented"
      evidence:
        - "https://docs.giantswarm.io/tutorials/fleet-management/job-management/kuberay"
      notes: ""
**v1.33/giantswarm/README.md** (new file, 275 additions)

### Giant Swarm Platform

Giant Swarm Platform is a managed Kubernetes platform developed by [Giant Swarm](https://www.giantswarm.io).

### How to Reproduce

#### Create Cluster

First, access the [Giant Swarm Platform](https://docs.giantswarm.io/getting-started/) and log in to the platform API.
After a successful login, [create a cluster](https://docs.giantswarm.io/getting-started/provision-your-first-workload-cluster/) with the DRA-specific values below, which enable the `DynamicResourceAllocation` feature gate on all control-plane components and the kubelet.

```yaml
global:
  connectivity:
    availabilityZoneUsageLimit: 3
    network: {}
    topology: {}
  controlPlane: {}
  metadata:
    name: $CLUSTER
    organization: $ORGANIZATION
    preventDeletion: false
  nodePools:
    nodepool0:
      instanceType: m5.xlarge
      maxSize: 2
      minSize: 1
      rootVolumeSizeGB: 8
    nodepool1:
      instanceType: p4d.24xlarge
      maxSize: 2
      minSize: 1
      rootVolumeSizeGB: 15
      instanceWarmup: 600
      minHealthyPercentage: 90
      customNodeTaints:
        - key: "nvidia.com/gpu"
          value: "Exists"
          effect: "NoSchedule"
  providerSpecific: {}
  release:
    version: 33.0.0
cluster:
  internal:
    advancedConfiguration:
      controlPlane:
        apiServer:
          featureGates:
            - name: DynamicResourceAllocation
              enabled: true
        controllerManager:
          featureGates:
            - name: DynamicResourceAllocation
              enabled: true
        scheduler:
          featureGates:
            - name: DynamicResourceAllocation
              enabled: true
      kubelet:
        featureGates:
          - name: DynamicResourceAllocation
            enabled: true
```
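
After the cluster is up, you can verify that the DRA feature gates took effect by checking that the API server serves the `resource.k8s.io` group. A quick sketch; `DeviceClass` objects only appear once a DRA driver (such as the NVIDIA one below) is installed:

```sh
# Confirm the DRA API group is served
kubectl api-resources --api-group=resource.k8s.io

# List DeviceClasses (empty until a DRA driver is installed)
kubectl get deviceclasses
```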

#### AI platform components

The following components should be installed to complete the AI setup:

##### 1. NVIDIA GPU Operator

**Purpose**: Manages NVIDIA GPU resources in Kubernetes clusters.

**Installation via Giant Swarm App Platform**:

```sh
kubectl gs template app \
  --catalog giantswarm \
  --name gpu-operator \
  --cluster-name $CLUSTER \
  --target-namespace kube-system \
  --version 1.0.1 \
  --organization $ORGANIZATION | kubectl apply -f -
```
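
To confirm the operator is advertising GPUs, a minimal smoke-test pod can request one. The pod name and CUDA image tag below are illustrative assumptions; the toleration matches the taint defined on `nodepool1` above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test   # hypothetical name
spec:
  restartPolicy: Never
  tolerations:
    - key: "nvidia.com/gpu"   # matches the nodepool1 taint above
      operator: "Exists"
      effect: "NoSchedule"
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04   # assumed image tag
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
```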

##### 2. NVIDIA DRA Driver GPU

**Purpose**: Provides Dynamic Resource Allocation (DRA) support for NVIDIA GPUs.

**Installation via Flux HelmRelease**:

```sh
# First create the NVIDIA Helm Repository
kubectl apply -f - <<EOF
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: nvidia
  namespace: org-$ORGANIZATION
spec:
  interval: 1h
  url: https://helm.ngc.nvidia.com/nvidia
EOF

# Then create the HelmRelease
kubectl apply -f - <<EOF
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: $CLUSTER-nvidia-dra-driver-gpu
  namespace: org-$ORGANIZATION
spec:
  interval: 5m
  chart:
    spec:
      chart: nvidia-dra-driver-gpu
      version: "25.3.0"
      sourceRef:
        kind: HelmRepository
        name: nvidia
  targetNamespace: kube-system
  kubeConfig:
    secretRef:
      name: $CLUSTER-kubeconfig
      key: value
  values:
    nvidiaDriverRoot: "/"
    resources:
      gpus:
        enabled: false
EOF
```
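
With the driver in place, a GPU can be requested through DRA rather than the device plugin. The sketch below assumes the driver publishes its default `gpu.nvidia.com` DeviceClass; all object names and the image tag are illustrative:

```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
        - name: gpu
          deviceClassName: gpu.nvidia.com   # DeviceClass assumed to be installed by the driver
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-smoke-test
spec:
  restartPolicy: Never
  tolerations:
    - key: "nvidia.com/gpu"
      operator: "Exists"
      effect: "NoSchedule"
  resourceClaims:
    - name: gpu
      resourceClaimTemplateName: single-gpu
  containers:
    - name: ctr
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04   # assumed image tag
      command: ["nvidia-smi"]
      resources:
        claims:
          - name: gpu   # references the pod-level resource claim
```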

##### 3. KubeRay Operator

**Purpose**: Manages Ray clusters for distributed AI/ML workloads.

**Installation via Giant Swarm App Platform**:

```sh
kubectl gs template app \
  --catalog giantswarm \
  --name kuberay-operator \
  --cluster-name $CLUSTER \
  --target-namespace kube-system \
  --version 1.0.0 \
  --organization $ORGANIZATION | kubectl apply -f -
```
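
A minimal RayCluster is enough to check that the operator's pods, webhooks, and reconciliation work; the Ray version, image tag, and sizes below are illustrative:

```yaml
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: raycluster-smoke-test
spec:
  rayVersion: "2.9.0"   # assumed version, keep aligned with the image tag
  headGroupSpec:
    rayStartParams: {}
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.9.0
  workerGroupSpecs:
    - groupName: workers
      replicas: 1
      minReplicas: 1
      maxReplicas: 2
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.9.0
```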

##### 4. Kueue

**Purpose**: Provides job queueing and resource management for batch workloads.

**Installation via Giant Swarm App Platform**:

```sh
kubectl gs template app \
  --catalog=giantswarm \
  --cluster-name=$CLUSTER \
  --organization=$ORGANIZATION \
  --name=kueue \
  --target-namespace=kueue-system \
  --version=0.1.0 | kubectl apply -f -
```
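
A minimal queue hierarchy is enough to exercise Kueue's all-or-nothing admission; the quota numbers below are illustrative:

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  namespaceSelector: {}   # admit workloads from all namespaces
  resourceGroups:
    - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
      flavors:
        - name: default-flavor
          resources:
            - name: "cpu"
              nominalQuota: 8
            - name: "memory"
              nominalQuota: 32Gi
            - name: "nvidia.com/gpu"
              nominalQuota: 8
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: user-queue
  namespace: default
spec:
  clusterQueue: cluster-queue
```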

##### 5. Gateway API

**Purpose**: Provides advanced traffic management capabilities for inference services.

**Installation via Giant Swarm App Platform**:

```sh
kubectl gs template app \
  --catalog giantswarm \
  --name gateway-api-bundle \
  --cluster-name $CLUSTER \
  --target-namespace kube-system \
  --version 0.5.1 \
  --organization $ORGANIZATION | kubectl apply -f -
```
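
Once a Gateway implementation from the bundle is running, inference traffic can be split by weight and routed on headers with a standard HTTPRoute; the gateway, header, and backend service names below are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: inference-route
  namespace: default
spec:
  parentRefs:
    - name: inference-gateway   # hypothetical Gateway
  rules:
    - matches:
        - headers:
            - name: x-model-name   # e.g. an OpenAI-protocol routing header
              value: llama-3
      backendRefs:
        - name: llama-vllm          # hypothetical inference Service
          port: 8000
          weight: 90
        - name: llama-vllm-canary   # canary receives 10% of traffic
          port: 8000
          weight: 10
```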

##### 6. AWS EFS CSI Driver

**Purpose**: Enables persistent storage using AWS Elastic File System for shared AI model storage.

**Installation via Giant Swarm App Platform**:

```sh
kubectl gs template app \
  --catalog giantswarm \
  --name aws-efs-csi-driver \
  --cluster-name $CLUSTER \
  --target-namespace kube-system \
  --version 2.1.5 \
  --organization $ORGANIZATION | kubectl apply -f -
```
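
A typical use is a ReadWriteMany volume for shared model weights. The sketch below follows the driver's dynamic provisioning mode; the file system ID is a placeholder that must point at a real EFS file system:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-models
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap            # one EFS access point per volume
  fileSystemId: fs-0123456789abcdef0  # placeholder EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-models
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-models
  resources:
    requests:
      storage: 100Gi
```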

##### 7. JobSet

**Purpose**: Manages sets of Jobs for distributed training workloads.

**Installation via Flux HelmRelease**:

```sh
kubectl apply -f - <<EOF
# registry.k8s.io serves the JobSet chart as an OCI artifact,
# so it needs an OCI-type HelmRepository to reference
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: jobset
  namespace: org-$ORGANIZATION
spec:
  type: oci
  interval: 1h
  url: oci://registry.k8s.io/jobset/charts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: $CLUSTER-jobset
  namespace: org-$ORGANIZATION
spec:
  interval: 5m
  chart:
    spec:
      chart: jobset
      version: "0.10.1"
      sourceRef:
        kind: HelmRepository
        name: jobset
  targetNamespace: kube-system
  kubeConfig:
    secretRef:
      name: $CLUSTER-kubeconfig
      key: value
EOF
```
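
A small JobSet confirms the controller reconciles replicated Jobs; the names and image are illustrative:

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: training-smoke-test
spec:
  replicatedJobs:
    - name: workers
      replicas: 1
      template:
        spec:
          completions: 2
          parallelism: 2
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: trainer
                  image: busybox:1.36
                  command: ["sh", "-c", "echo training step && sleep 5"]
```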

##### 8. KEDA

**Purpose**: Provides custom-metrics-based autoscaling for Horizontal Pod Autoscaler workloads, including AI/ML-specific metrics.

**Installation via Giant Swarm App Platform**:

```sh
kubectl gs template app \
  --catalog=giantswarm \
  --cluster-name=$CLUSTER \
  --organization=$ORGANIZATION \
  --name=keda \
  --target-namespace=keda-system \
  --version=3.1.0 | kubectl apply -f -
```
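
KEDA feeds external metrics into the HPA. The sketch below scales a hypothetical inference Deployment on a Prometheus query; the server address and metric name are assumptions to adapt to your setup:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: inference-scaler
  namespace: default
spec:
  scaleTargetRef:
    name: llama-vllm   # hypothetical inference Deployment
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # assumed in-cluster Prometheus
        query: sum(rate(vllm:request_success_total[2m]))   # assumed metric
        threshold: "10"
```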

##### 9. Sonobuoy Configuration

**Purpose**: Applies PolicyExceptions and configurations needed for AI conformance testing.

**Installation**: Applied directly to the workload cluster using the kubeconfig:

```sh
# Apply the configuration directly from the gist
kubectl --kubeconfig=/path/to/workload-cluster-kubeconfig apply -f https://gist.githubusercontent.com/pipo02mix/80415c1182a5920af46a85c7adf90a8a/raw/d75d7593194fb2a3beba0549f946cb6f8a5a5f46/sonobuoy-rews.yaml
```

All these components work together to provide a complete AI/ML platform on Kubernetes with GPU support, workload management, monitoring, and conformance testing capabilities.

#### Run the conformance tests with Sonobuoy

Log in to the control plane of the cluster created by the Giant Swarm Platform.

Start the conformance tests:

```sh
sonobuoy run --plugin https://raw.githubusercontent.com/pipo02mix/ai-conformance/c0f5f45e131445e1cf833276ca66e251b1b200e9/sonobuoy-plugin.yaml
```

Monitor the conformance tests by tailing the Sonobuoy logs, and wait for the line: "no-exit was specified, sonobuoy is now blocking"

```sh
stern -n sonobuoy sonobuoy
```

Retrieve the results:

```sh
outfile=$(sonobuoy retrieve)
sonobuoy results $outfile
```