
Commit 9ee01b9

Merge branch 'main' into feat/vm-brownfield-overrides
2 parents 2255c7b + 974efa8

20 files changed

Lines changed: 943 additions & 153 deletions


docs/architecture.md

Lines changed: 21 additions & 1 deletion
```diff
@@ -107,6 +107,20 @@ This phase uses four modules, typically applied together:
 
 **Provider dependencies**: `rancher/rancher2 ~> 3.0`
 
+#### 2e. namespace-credential-provisioner (`modules/management/namespace-credential-provisioner`)
+
+- Deploys a long-running reconciler on the Harvester cluster that watches for tenant namespaces.
+- For each namespace, automatically creates a scoped ServiceAccount, RoleBindings, and a
+  `harvester-vm-kubeconfig` Secret that consumer teams use to authenticate the `harvester`
+  Terraform provider — no admin involvement, no file handover.
+- Backfills existing namespaces on startup (safe to deploy to running clusters).
+- Cleans up cross-namespace RoleBindings when a namespace is deleted.
+
+**Must be deployed before `tenant-space` creates namespaces** so that credentials are
+ready by the time consumer teams run `terraform apply`.
+
+**Provider dependencies**: `hashicorp/kubernetes >= 2.0`
+
 ---
 
 ### Phase 3 — Identity & Monitoring
@@ -203,7 +217,13 @@ This phase configures the `asgardeo` provider (or equivalent OIDC configuration
 
          │ projects/namespaces ready
 
-┌───────────────────┐
+┌─────────────────────────┐
+│ namespace-credential-   │
+│ provisioner (Phase 2e)  │
+└──────────┬──────────────┘
+           │ harvesterconfig + harvester-vm-kubeconfig per namespace
+
+┌───────────────────┐
 │ identity          │
 │ (Phase 3a)        │
 └────────┬──────────┘
```
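
For context on the consumer side of the flow added above: once the provisioner has written `harvester-vm-kubeconfig` into a tenant namespace, the consumer team extracts the kubeconfig from that Secret and points the `harvester` Terraform provider at it. A minimal sketch, assuming the handed-over kubeconfig is saved next to the consumer's configuration as `harvester-vm.kubeconfig` (the filename and path are illustrative):

```hcl
# Consumer-side sketch, not part of this commit. The kubeconfig argument takes
# a path to the file extracted from the harvester-vm-kubeconfig Secret; the
# local filename here is an assumption.
provider "harvester" {
  kubeconfig = "${path.module}/harvester-vm.kubeconfig"
}
```

The generated kubeconfig already sets `current-context` to the tenant namespace, so consumers need no additional context or namespace configuration.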
Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@

# Module: management/namespace-credential-provisioner

Deploys a long-running reconciler on the Harvester cluster that automatically provisions
credentials in every tenant namespace. This is a required part of the management phase —
deploy it after `harvester-integration` and before creating tenant workloads.

## What it does

For every namespace labelled as a tenant namespace, the provisioner creates:

1. `harvester-vm-access-<ns>` ServiceAccount with scoped RoleBindings:
   - `harvesterhci.io:edit` in the tenant namespace (VM lifecycle)
   - `edit` in the tenant namespace (generic Kubernetes resources)
   - `harvesterhci.io:view` in `harvester-public` (read shared OS images)
2. A long-lived SA token Secret
3. `harvester-vm-kubeconfig` Secret in the namespace — a namespace-scoped kubeconfig
   consumers use to authenticate the `harvester` Terraform provider

On startup the provisioner backfills any existing namespaces that are missing the
`harvester-vm-kubeconfig` Secret (upgrade path).

On namespace deletion it cleans up the cross-namespace `harvester-public` RoleBinding.

## Why this matters

Without the provisioner, consumer teams cannot authenticate to Harvester to create VMs.
The alternative — handing out admin kubeconfigs or running per-team credential setup
manually — does not scale and creates security exposure. This provisioner eliminates
both problems: credentials are created automatically, scoped per namespace, and revoked
automatically when the namespace is deleted.

## Deployment sequence

```text
Phase 2a  harvester-integration              — registers Harvester with Rancher
Phase 2e  namespace-credential-provisioner   ← deploy here
          tenant-space                       — creates namespaces; provisioner reacts immediately
```

The provisioner must be running before `tenant-space` creates namespaces so that
`harvester-vm-kubeconfig` is ready by the time consumer teams run `terraform apply`.
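
One way to encode this ordering in the management root module is an explicit `depends_on` on the provisioner, sketched below. The `tenant_space` module name and source path are illustrative and not taken from this repository:

```hcl
# Illustrative ordering only: tenant-space waits for the provisioner so every
# namespace it creates gets credentials before consumers run terraform apply.
module "tenant_space" {
  source = "github.com/wso2/open-cloud-datacenter//modules/management/tenant-space?ref=v0.8.0" # hypothetical path

  # ... tenant-space inputs elided ...

  depends_on = [module.provisioner]
}
```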
## Usage

```hcl
module "provisioner" {
  source = "github.com/wso2/open-cloud-datacenter//modules/management/namespace-credential-provisioner?ref=v0.8.0"

  providers = {
    kubernetes = kubernetes.harvester
  }

  harvester_api_server = "https://192.168.10.6:6443"
  rancher_kubeconfig   = file(var.rancher_kubeconfig_path)

  depends_on = [module.harvester_integration]
}
```
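
The `kubernetes.harvester` alias referenced in the `providers` block must point at the Harvester cluster itself. A minimal sketch, assuming the Harvester admin kubeconfig path is supplied through a hypothetical `harvester_kubeconfig_path` variable:

```hcl
# Illustrative provider alias; the variable name is an assumption, not a module input.
provider "kubernetes" {
  alias       = "harvester"
  config_path = var.harvester_kubeconfig_path
}
```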
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| `harvester_api_server` | Harvester Kubernetes API server URL (e.g. `https://192.168.10.6:6443`) | `string` | | yes |
| `rancher_kubeconfig` | Kubeconfig for the Rancher cluster. Used to write `harvesterconfig` secrets into `fleet-default`. | `string` | | yes |
| `namespace` | Namespace to deploy the provisioner into | `string` | `"kube-system"` | no |
| `image` | Container image for the provisioner (needs `kubectl`, `bash`, `jq`) | `string` | `"alpine/k8s:1.32.3"` | no |

## Outputs

| Name | Description |
|------|-------------|
| `deployment_name` | Name of the provisioner Deployment |
| `service_account_name` | ServiceAccount used by the provisioner pod |
## Security

The provisioner SA has cluster-wide namespace watch and cross-namespace write access for
ServiceAccounts, Secrets, and RoleBindings — this is the minimum required to manage
credentials across all tenant namespaces. The credentials it creates are namespace-scoped:
each `harvester-vm-access-<ns>` SA can only act within its own namespace (plus read-only
access to `harvester-public` for shared images).

One project per team is strongly recommended. Within a shared project, namespace isolation
is enforced by the SA RoleBindings — not Rancher project RBAC — so consumers cannot
cross namespace boundaries even if they share a project.

## Relation to `harvester-cloud-credential`

`workloads/harvester-cloud-credential` is deprecated. It served the same purpose
(creating per-cluster Harvester credentials) but required manual invocation per cluster.
The provisioner replaces it for all greenfield deployments. Retain the module only for
brownfield clusters that have existing credentials that cannot be migrated.

modules/management/namespace-credential-provisioner/scripts/reconcile.sh

Lines changed: 156 additions & 7 deletions
```diff
@@ -6,12 +6,26 @@
 # 1. Namespace watch (Harvester)
 #    Watches tenant namespaces on the Harvester cluster. For each new namespace
 #    that belongs to a Rancher project it creates:
+#
+#    Cloud-provider credentials (for RKE2 Harvester cloud provider):
 #    - ServiceAccount harvester-cloud-provider-<ns>
 #    - RoleBinding to harvesterhci.io:cloudprovider in the tenant namespace
 #    - Optional RoleBinding to view in the project's network namespace
 #    - Long-lived SA token secret
+#
+#    Consumer VM-access credentials (for tenant teams provisioning VMs/clusters):
+#    - ServiceAccount harvester-vm-access-<ns>
+#    - RoleBinding to harvesterhci.io:edit in the tenant namespace
+#    - RoleBinding to edit (k8s built-in) in the tenant namespace
+#    - RoleBinding to view in harvester-public (for shared OS images)
+#    - Long-lived SA token secret
+#    - Secret "harvester-vm-kubeconfig" in the tenant namespace containing a
+#      namespace-scoped kubeconfig. The platform team retrieves this once at
+#      onboarding and hands it to the tenant team.
+#
 #    On namespace deletion it deletes any harvesterconfig-* secrets on Rancher
-#    whose kubeconfig was built from that namespace's SA token.
+#    whose kubeconfig was built from that namespace's SA token, and cleans up
+#    the harvester-public RoleBinding for the VM-access SA.
 #
 # 2. Cluster watch (Rancher)
 #    Watches clusters.provisioning.cattle.io on the Rancher cluster. For each
@@ -22,9 +36,6 @@
 #      v2prov-secret-authorized-for-cluster already set at creation time
 #    On cluster deletion it removes harvesterconfig-<cluster-name>.
 #
-#    Consumers (tenant teams) only need the rancher2 provider. No Harvester or
-#    Rancher kubeconfig required on their side.
-#
 # Environment variables (injected by the Deployment):
 #   HARVESTER_API_SERVER — Harvester Kubernetes API server URL
 #   RANCHER_KUBECONFIG   — Path to kubeconfig for Rancher's local cluster
```
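
For orientation, the two environment variables named above are supplied by the module's Deployment. The following is an illustrative sketch of that wiring with the hashicorp/kubernetes provider, not the module's actual Terraform source; the resource name, command, and mount path are hypothetical:

```hcl
# Sketch only: inject HARVESTER_API_SERVER and RANCHER_KUBECONFIG into the
# reconciler container. Names and paths are illustrative.
resource "kubernetes_deployment" "provisioner" {
  metadata {
    name      = "namespace-credential-provisioner"
    namespace = var.namespace
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "namespace-credential-provisioner" }
    }

    template {
      metadata {
        labels = { app = "namespace-credential-provisioner" }
      }

      spec {
        container {
          name    = "reconciler"
          image   = var.image
          command = ["bash", "/scripts/reconcile.sh"] # hypothetical script mount

          env {
            name  = "HARVESTER_API_SERVER"
            value = var.harvester_api_server
          }
          env {
            name  = "RANCHER_KUBECONFIG"
            value = "/etc/rancher/kubeconfig" # hypothetical Secret mount path
          }
        }
      }
    }
  }
}
```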
```diff
@@ -131,6 +142,114 @@ $(echo "$kubeconfig" | sed 's/^/ /')
 EOF
 }
 
+# Build a namespace-scoped VM-access kubeconfig for tenant teams and write it
+# as the well-known "harvester-vm-kubeconfig" Secret in the tenant namespace.
+# Consumers retrieve this secret once at onboarding to provision VMs and RKE2
+# clusters using the workloads/vm and workloads/k8s-cluster OCD modules.
+# Args: ns
+write_vm_kubeconfig() {
+  local ns="$1"
+  local sa_name="harvester-vm-access-${ns}"
+  local secret_name="harvester-vm-kubeconfig"
+
+  # ServiceAccount in tenant namespace.
+  kubectl create serviceaccount "$sa_name" -n "$ns" \
+    --dry-run=client -o yaml | kubectl apply -f -
+
+  # RoleBinding — Harvester VM lifecycle (VMs, keypairs, images, NADs, backups).
+  kubectl create rolebinding "${sa_name}" \
+    --clusterrole=harvesterhci.io:edit \
+    --serviceaccount="${ns}:${sa_name}" \
+    -n "$ns" --dry-run=client -o yaml | kubectl apply -f -
+
+  # RoleBinding — Kubernetes resource edit (Secrets, PVCs, ConfigMaps).
+  kubectl create rolebinding "${sa_name}-k8s-edit" \
+    --clusterrole=edit \
+    --serviceaccount="${ns}:${sa_name}" \
+    -n "$ns" --dry-run=client -o yaml | kubectl apply -f -
+
+  # RoleBinding — read shared OS images in default namespace.
+  kubectl create rolebinding "${ns}-${sa_name}-default-view" \
+    --clusterrole=view \
+    --serviceaccount="${ns}:${sa_name}" \
+    -n "default" --dry-run=client -o yaml | kubectl apply -f -
+
+  # RoleBinding — read shared OS images in harvester-public.
+  kubectl create rolebinding "${ns}-${sa_name}-public-view" \
+    --clusterrole=view \
+    --serviceaccount="${ns}:${sa_name}" \
+    -n "harvester-public" --dry-run=client -o yaml | kubectl apply -f -
+
+  # Long-lived token secret.
+  kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: ${sa_name}-token
+  namespace: ${ns}
+  annotations:
+    kubernetes.io/service-account.name: ${sa_name}
+type: kubernetes.io/service-account-token
+EOF
+
+  # Wait for the token to be populated by the token controller.
+  local token=""
+  for _ in $(seq 1 20); do
+    token=$(kubectl get secret "${sa_name}-token" -n "$ns" \
+      -o jsonpath='{.data.token}' 2>/dev/null || true)
+    [[ -n "$token" ]] && break
+    sleep 1
+  done
+  if [[ -z "$token" ]]; then
+    log "  ERROR: VM access token not populated for ${sa_name} in ${ns}"
+    return 1
+  fi
+
+  local token_decoded ca_cert_b64
+  token_decoded=$(echo "$token" | base64 -d)
+  ca_cert_b64=$(kubectl get configmap kube-root-ca.crt -n kube-system \
+    -o jsonpath='{.data.ca\.crt}' | base64 | tr -d '\n')
+
+  local kubeconfig
+  kubeconfig=$(cat <<EOF
+apiVersion: v1
+kind: Config
+clusters:
+- name: harvester
+  cluster:
+    certificate-authority-data: ${ca_cert_b64}
+    server: ${HARVESTER_API_SERVER}
+users:
+- name: ${ns}
+  user:
+    token: ${token_decoded}
+contexts:
+- name: ${ns}@harvester
+  context:
+    cluster: harvester
+    namespace: ${ns}
+    user: ${ns}
+current-context: ${ns}@harvester
+EOF
+)
+
+  kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: ${secret_name}
+  namespace: ${ns}
+  annotations:
+    platform.wso2.com/vm-access-sa: "${sa_name}"
+type: Opaque
+stringData:
+  kubeconfig: |
+$(echo "$kubeconfig" | sed 's/^/ /')
+EOF
+
+  log "  [ns] VM access kubeconfig ready: ${secret_name} in ${ns}"
+}
+
 # ── Namespace watch handlers ───────────────────────────────────────────────────
 
 on_added_namespace() {
@@ -179,6 +298,11 @@ type: kubernetes.io/service-account-token
 EOF
 
   log "  [ns] SA ready: ${sa_name} in ${ns}"
+
+  # Consumer VM-access kubeconfig — separate SA with broader permissions.
+  # Explicit return propagates failure to the caller so the namespace is NOT
+  # marked processed; the watch loop will retry on the next event.
+  write_vm_kubeconfig "$ns" || return 1
 }
 
 on_deleted_namespace() {
@@ -216,6 +340,14 @@ on_deleted_namespace() {
     kubectl delete rolebinding "$rb_name_found" -n "$rb_ns" 2>/dev/null \
       && log "  [ns] deleted rolebinding ${rb_name_found} from ${rb_ns}"
   done || true
+
+  # Delete the VM-access SA's cross-namespace RoleBindings.
+  # (Resources inside the deleted namespace are cleaned up by Kubernetes.)
+  local vm_sa_name="harvester-vm-access-${ns}"
+  kubectl delete rolebinding "${ns}-${vm_sa_name}-default-view" -n "default" \
+    2>/dev/null && log "  [ns] deleted default RoleBinding for ${vm_sa_name}" || true
+  kubectl delete rolebinding "${ns}-${vm_sa_name}-public-view" -n "harvester-public" \
+    2>/dev/null && log "  [ns] deleted harvester-public RoleBinding for ${vm_sa_name}" || true
 }
 
 # ── Cluster watch handlers ─────────────────────────────────────────────────────
@@ -408,9 +540,26 @@ kubectl get namespaces -o json | jq -r '
   [[ -z "$project_id" ]] && continue
   is_system_namespace "$ns" && continue
   [[ "$role" == "network-namespace" ]] && continue
-  log "INIT namespace: ${ns} (project: ${project_id})"
-  if on_added_namespace "$ns" "$project_id"; then
-    echo "$ns" >> "$PROCESSED_NS_FILE"
+  sa_name="harvester-cloud-provider-${ns}"
+  if kubectl get secret "${sa_name}-token" -n "$ns" &>/dev/null; then
+    # Cloud-provider credentials already exist. Check for the VM-access kubeconfig
+    # separately — may be absent on pods that ran before this feature was added.
+    if kubectl get secret "harvester-vm-kubeconfig" -n "$ns" &>/dev/null; then
+      log "INIT namespace: ${ns} — already provisioned, skipping"
+      echo "$ns" >> "$PROCESSED_NS_FILE"
+    else
+      log "INIT namespace: ${ns} — backfilling VM access kubeconfig"
+      if write_vm_kubeconfig "$ns"; then
+        echo "$ns" >> "$PROCESSED_NS_FILE"
+      else
+        log "  WARN: VM access kubeconfig backfill failed for ${ns} — will retry on next watch event"
+      fi
+    fi
+  else
+    log "INIT namespace: ${ns} (project: ${project_id})"
+    if on_added_namespace "$ns" "$project_id"; then
+      echo "$ns" >> "$PROCESSED_NS_FILE"
+    fi
   fi
 done
```
