2 changes: 1 addition & 1 deletion chart/crds/network.harvesterhci.io_ippools.yaml
@@ -150,7 +150,7 @@ spec:
type: object
status:
properties:
-            agentPodRef:
+            agentDeploymentRef:
properties:
image:
type: string
18 changes: 9 additions & 9 deletions chart/templates/rbac.yaml
@@ -18,9 +18,9 @@ rules:
- apiGroups: [ "" ]
resources: [ "namespaces" ]
verbs: [ "get", "watch", "list" ]
- apiGroups: [ "" ]
resources: [ "pods" ]
verbs: [ "watch", "list" ]
- apiGroups: [ "apps" ]
resources: [ "deployments" ]
verbs: [ "get", "watch", "list" ]
- apiGroups: [ "kubevirt.io" ]
resources: [ "virtualmachines" ]
verbs: [ "get", "watch", "list" ]
@@ -122,11 +122,11 @@ rules:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
-  name: {{ include "harvester-vm-dhcp-controller.name" . }}-pod-manager
+  name: {{ include "harvester-vm-dhcp-controller.name" . }}-deployment-manager
rules:
- apiGroups: [ "" ]
resources: [ "pods" ]
verbs: [ "get", "create", "delete" ]
- apiGroups: [ "apps" ]
resources: [ "deployments" ]
verbs: [ "get", "create", "update", "delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
@@ -157,13 +157,13 @@ subjects:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
-  name: {{ include "harvester-vm-dhcp-controller.name" . }}-manage-pods
+  name: {{ include "harvester-vm-dhcp-controller.name" . }}-manage-deployments
labels:
{{- include "harvester-vm-dhcp-controller.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
-  name: {{ include "harvester-vm-dhcp-controller.name" . }}-pod-manager
+  name: {{ include "harvester-vm-dhcp-controller.name" . }}-deployment-manager
subjects:
- kind: ServiceAccount
name: {{ include "harvester-vm-dhcp-controller.serviceAccountName" . }}
129 changes: 129 additions & 0 deletions docs/ippool-agent-deployment.md
@@ -0,0 +1,129 @@
---
name: ippool-agent-deployment-reconcile
description: Plan to move IPPool agents from Pods to Deployments and reconcile via operator pattern.
---

# Plan

Goal: move the per-IPPool agent from a bare Pod to a Deployment and reconcile it with the operator pattern, updating the API, controller, RBAC, and tests.

## Requirements
- Each IPPool creates and manages an agent Deployment (replicas=1) with the same network/affinity/args logic as today.
- Idempotent reconcile: create/update/monitor/cleanup of the Deployment.
- Image upgrades honor `hold-ippool-agent-upgrade`.

## Scope
- In: IPPool controller, API/CRD status, codegen, RBAC, tests.
- Out: unrelated DHCP/IPAM/VMNetCfg logic.

## Files and entry points
- `pkg/controller/ippool/controller.go`
- `pkg/controller/ippool/common.go`
- `pkg/apis/network.harvesterhci.io/v1alpha1/ippool.go`
- `pkg/codegen/main.go`, `pkg/config/context.go`
- `pkg/util/fakeclient`, `pkg/controller/ippool/controller_test.go`
- `chart/templates/rbac.yaml`, `chart/crds/network.harvesterhci.io_ippools.yaml`
- `pkg/data/data.go`

## Data model / API changes
- Replace `status.agentPodRef` with `status.agentDeploymentRef` (a new `DeploymentReference` type).

## Action items
[ ] Add the apps/v1 Deployment client/controller (codegen + Management) and a fake client.
[ ] Implement `prepareAgentDeployment` and update Deploy/Monitor/Cleanup for Deployments.
[ ] Update the `relatedresource` watch to Deployments carrying the ippool labels.
[ ] Update the builder/status helpers and tests for Deployments.
[ ] Update RBAC for `deployments` (get/list/watch/create/update/delete).
[ ] Regenerate codegen/CRD/bindata.

## Testing and validation
- `go test ./...` (or `go test ./pkg/controller/ippool -run TestHandler_`).
- `go generate` to regenerate the CRD/clients/bindata.

## Risks and edge cases
- Breaking change to the CRD status.
- The Deployment strategy (Recreate vs. RollingUpdate) affects DHCP continuity.
- Agent namespace != IPPool namespace: the relation is expressed via labels, not an ownerRef.

## Open questions
- Can `agentDeploymentRef` replace `agentPodRef` even though that is breaking?
- Is `Recreate` or `RollingUpdate` preferred for the agent Deployments?

---

# IPPool agent on a Deployment

This document describes the logic and the changes introduced to move the IPPool
agent from a single Pod to a Deployment, with operator-style reconciliation and
a RollingUpdate strategy.

## Goal
- Each IPPool creates and manages a dedicated Deployment (replicas=1).
- Idempotent reconciliation that creates or updates the Deployment whenever the configuration changes.
- Image upgrades gated by the `network.harvesterhci.io/hold-ippool-agent-upgrade` annotation (see the sketch below).
- Migration centered on the new `agentDeploymentRef` status reference.
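
A minimal sketch of the upgrade gate, assuming the hold semantics are "keep whatever image the status already records"; the helper name `desiredAgentImage` is hypothetical:

```go
package ippool

import (
	networkv1 "github.com/harvester/vm-dhcp-controller/pkg/apis/network.harvesterhci.io/v1alpha1"
)

const holdAgentUpgradeAnnotation = "network.harvesterhci.io/hold-ippool-agent-upgrade"

// desiredAgentImage returns the image the agent Deployment should run:
// with the hold annotation set, stick to the image recorded in status;
// otherwise roll forward to the controller's current agent image.
func desiredAgentImage(pool *networkv1.IPPool, controllerImage string) string {
	if _, held := pool.Annotations[holdAgentUpgradeAnnotation]; held {
		if ref := pool.Status.AgentDeploymentRef; ref != nil && ref.Image != "" {
			return ref.Image
		}
	}
	return controllerImage
}
```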

## Reconciliation flow
### DeployAgent
- Fetches the ClusterNetwork from the NetworkAttachmentDefinition (via the `network.harvesterhci.io/clusternetwork` label).
- Computes the desired image (honoring the hold annotation).
- Builds the desired Deployment via `prepareAgentDeployment`.
- If `status.agentDeploymentRef` is set:
  - Verifies the UID and that the Deployment is not being deleted.
  - Requires the selector to be immutable (errors out if it diverges).
  - Updates labels, strategy, replicas, template, and containers when they differ.
- Updates `agentDeploymentRef` with the namespace/name/UID.
- If the Deployment does not exist or the status is empty, creates the Deployment and records the status (a condensed sketch of this branch follows the list).
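
The branch logic above, condensed into a sketch. The `Handler` field names, error messages, and the assumption that `DeploymentReference` carries a `UID` are illustrative, not the exact code in `controller.go`; the client/cache are the generated wrangler interfaces:

```go
package ippool

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
	apierrors "k8s.io/apimachinery/pkg/api/errors"

	networkv1 "github.com/harvester/vm-dhcp-controller/pkg/apis/network.harvesterhci.io/v1alpha1"
	ctlappsv1 "github.com/harvester/vm-dhcp-controller/pkg/generated/controllers/apps/v1"
)

// Handler fields are illustrative; the real handler wires the generated
// Deployment client/cache from the AppsFactory.
type Handler struct {
	deploymentClient ctlappsv1.DeploymentClient
	deploymentCache  ctlappsv1.DeploymentCache
}

func (h *Handler) reconcileAgentDeployment(pool *networkv1.IPPool, desired *appsv1.Deployment) (*appsv1.Deployment, error) {
	ref := pool.Status.AgentDeploymentRef
	if ref != nil && ref.Name != "" {
		current, err := h.deploymentCache.Get(ref.Namespace, ref.Name)
		if err == nil {
			if current.UID != ref.UID || current.DeletionTimestamp != nil {
				return nil, fmt.Errorf("agent deployment %s/%s is stale or terminating", ref.Namespace, ref.Name)
			}
			// The selector is immutable on Deployments; bail out instead of adopting.
			if !apiequality.Semantic.DeepEqual(current.Spec.Selector, desired.Spec.Selector) {
				return nil, fmt.Errorf("agent deployment %s/%s has an unexpected selector", ref.Namespace, ref.Name)
			}
			if apiequality.Semantic.DeepEqual(current.Spec.Template, desired.Spec.Template) &&
				apiequality.Semantic.DeepEqual(current.Labels, desired.Labels) {
				return current, nil // already converged
			}
			updated := current.DeepCopy()
			updated.Labels = desired.Labels
			updated.Spec = desired.Spec
			return h.deploymentClient.Update(updated)
		}
		if !apierrors.IsNotFound(err) {
			return nil, err
		}
	}
	// No status reference (or the Deployment vanished): create from scratch.
	return h.deploymentClient.Create(desired)
}
```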

### MonitorAgent
- If `noAgent` is true, does nothing; if the IPPool is paused, returns an error.
- Verifies existence, UID, and the container image against the status.
- Checks readiness using `ObservedGeneration` and the `Updated`/`Available` replica counts (see the sketch after this list).
- On a mismatch or non-readiness, returns an error (it does not delete the Deployment).
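
A sketch of the readiness check using only apps/v1 status fields; the one-replica default mirrors the spec above:

```go
import appsv1 "k8s.io/api/apps/v1"

// agentReady reports whether the agent Deployment has converged: the
// Deployment controller has observed the latest generation and all the
// desired replicas (one, here) are both updated and available.
func agentReady(d *appsv1.Deployment) bool {
	if d.Status.ObservedGeneration < d.Generation {
		return false // spec change not yet processed
	}
	want := int32(1)
	if d.Spec.Replicas != nil {
		want = *d.Spec.Replicas
	}
	return d.Status.UpdatedReplicas == want && d.Status.AvailableReplicas == want
}
```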

### Cleanup
- Deletes the associated Deployment and cleans up IPAM/MAC records and metrics.
- Used when the IPPool is paused or removed.

## Implementation details
- Deployment name derived from `util.SafeAgentConcatName`.
- Labels: `network.harvesterhci.io/vm-dhcp-controller=agent` plus the IPPool namespace/name labels.
- Pod template carries the Multus annotation `k8s.v1.cni.cncf.io/networks`.
- Init container `ip-setter` configures `eth1`; the `agent` container exposes `/healthz` and `/readyz` probes.
- Container defaults (ImagePullPolicy and TerminationMessage*) are set explicitly to avoid reconcile loops.
- RollingUpdate strategy: `maxUnavailable=0`, `maxSurge=1` (sketched below).
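
A trimmed sketch of what `prepareAgentDeployment` plausibly assembles. The agent label, Multus annotation, probe paths, and strategy values come from this change; the probe port, the exact `ippool-*` label keys, the init container omission, and the helper name are assumptions:

```go
import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"

	networkv1 "github.com/harvester/vm-dhcp-controller/pkg/apis/network.harvesterhci.io/v1alpha1"
)

func agentDeploymentSkeleton(name, namespace, nadRef, image string, pool *networkv1.IPPool) *appsv1.Deployment {
	labels := map[string]string{
		"network.harvesterhci.io/vm-dhcp-controller": "agent",
		// Exact keys for the back-reference labels are assumed here.
		"network.harvesterhci.io/ippool-namespace": pool.Namespace,
		"network.harvesterhci.io/ippool-name":      pool.Name,
	}
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0) // never drop the only DHCP agent
	maxSurge := intstr.FromInt(1)       // allow one extra agent during rollout
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace, Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels:      labels,
					Annotations: map[string]string{"k8s.v1.cni.cncf.io/networks": nadRef},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:            "agent",
						Image:           image,
						ImagePullPolicy: corev1.PullIfNotPresent, // set explicitly to keep diffs stable
						LivenessProbe: &corev1.Probe{ProbeHandler: corev1.ProbeHandler{
							HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
						}},
						ReadinessProbe: &corev1.Probe{ProbeHandler: corev1.ProbeHandler{
							HTTPGet: &corev1.HTTPGetAction{Path: "/readyz", Port: intstr.FromInt(8080)},
						}},
					}},
				},
			},
		},
	}
}
```

Keeping the selector limited to stable, per-IPPool labels matters because the selector is immutable; that is exactly the divergence the DeployAgent guard errors out on.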

## Status/CRD changes
- `status.agentPodRef` replaced by `status.agentDeploymentRef`.
- New `DeploymentReference` struct carrying namespace, name, image, and UID.
- CRD updated and bindata regenerated.

## Watches and resource relations
- Watch on Deployments labeled `vm-dhcp-controller=agent`.
- Mapping back to the IPPool via the `ippool-namespace` and `ippool-name` labels (see the resolver sketch below).
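
A sketch of the corresponding wrangler `relatedresource.Resolver`, with the same assumed label keys as above:

```go
import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/runtime"

	"github.com/rancher/wrangler/v3/pkg/relatedresource"
)

// ipPoolFromAgentDeployment maps an agent Deployment event back to the
// IPPool that owns it, so the IPPool gets re-enqueued on agent changes.
func ipPoolFromAgentDeployment(_ string, _ string, obj runtime.Object) ([]relatedresource.Key, error) {
	d, ok := obj.(*appsv1.Deployment)
	if !ok || d == nil {
		return nil, nil
	}
	if d.Labels["network.harvesterhci.io/vm-dhcp-controller"] != "agent" {
		return nil, nil
	}
	ns, name := d.Labels["network.harvesterhci.io/ippool-namespace"], d.Labels["network.harvesterhci.io/ippool-name"]
	if ns == "" || name == "" {
		return nil, nil
	}
	return []relatedresource.Key{{Namespace: ns, Name: name}}, nil
}
```

A resolver like this is typically registered with `relatedresource.Watch` against the IPPool controller.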

## RBAC
- Added `deployments` permissions for the controller (get/list/watch).
- Dedicated `*-deployment-manager` Role with get/create/update/delete.
- RoleBinding renamed from `manage-pods` to `manage-deployments`.

## Codegen and generated code
- Added the apps/v1 group to codegen for the Deployment controller/cache.
- New generated controllers under `pkg/generated/controllers/apps`.
- New Deployment fake client in `pkg/util/fakeclient/deployment.go`.

## Code changes (main files)
- `pkg/controller/ippool/controller.go`: DeployAgent/MonitorAgent/cleanup updated for Deployments.
- `pkg/controller/ippool/common.go`: `prepareAgentDeployment` and supporting builders.
- `pkg/apis/network.harvesterhci.io/v1alpha1/ippool.go`: new status field and reference type.
- `pkg/config/context.go`: AppsFactory added to the bootstrap.
- `pkg/codegen/main.go`: apps/v1 codegen and controller regeneration.
- `chart/templates/rbac.yaml` and `chart/crds/network.harvesterhci.io_ippools.yaml`: RBAC/CRD updated.

## Compatibility and migration
- `status.agentPodRef` is no longer read; any legacy agent Pods are no longer managed.
- RollingUpdate with `maxSurge=1` can briefly run a second agent.
- If a Deployment with a different selector already exists, reconciliation fails to avoid adopting the wrong object.

## Tests
- Ran `go test ./...`.
4 changes: 2 additions & 2 deletions pkg/apis/network.harvesterhci.io/v1alpha1/ippool.go
@@ -119,7 +119,7 @@ type IPPoolStatus struct {

// +optional
// +kubebuilder:validation:Optional
-	AgentPodRef *PodReference `json:"agentPodRef,omitempty"`
+	AgentDeploymentRef *DeploymentReference `json:"agentDeploymentRef,omitempty"`

// +optional
// +kubebuilder:validation:Optional
@@ -132,7 +132,7 @@ type IPv4Status struct {
Available int `json:"available"`
}

-type PodReference struct {
+type DeploymentReference struct {
Namespace string `json:"namespace,omitempty"`
Name string `json:"name,omitempty"`
Image string `json:"image,omitempty"`
38 changes: 19 additions & 19 deletions pkg/apis/network.harvesterhci.io/v1alpha1/zz_generated_deepcopy.go

Some generated files are not rendered by default.

6 changes: 6 additions & 0 deletions pkg/codegen/main.go
@@ -9,6 +9,7 @@ import (
controllergen "github.com/rancher/wrangler/v3/pkg/controller-gen"
"github.com/rancher/wrangler/v3/pkg/controller-gen/args"
"github.com/sirupsen/logrus"
+	appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
kubevirtv1 "kubevirt.io/api/core/v1"
)
@@ -48,6 +49,11 @@ func main() {
corev1.Pod{},
},
},
+		appsv1.SchemeGroupVersion.Group: {
+			Types: []interface{}{
+				appsv1.Deployment{},
+			},
+		},
cniv1.SchemeGroupVersion.Group: {
Types: []interface{}{
cniv1.NetworkAttachmentDefinition{},
9 changes: 9 additions & 0 deletions pkg/config/context.go
@@ -24,6 +24,7 @@ import (
"github.com/harvester/vm-dhcp-controller/pkg/cache"
"github.com/harvester/vm-dhcp-controller/pkg/crd"
"github.com/harvester/vm-dhcp-controller/pkg/dhcp"
+	ctlapps "github.com/harvester/vm-dhcp-controller/pkg/generated/controllers/apps"
ctlcore "github.com/harvester/vm-dhcp-controller/pkg/generated/controllers/core"
ctlcni "github.com/harvester/vm-dhcp-controller/pkg/generated/controllers/k8s.cni.cncf.io"
ctlkubevirt "github.com/harvester/vm-dhcp-controller/pkg/generated/controllers/kubevirt.io"
@@ -96,6 +97,7 @@ type Management struct {

HarvesterNetworkFactory *ctlnetwork.Factory

+	AppsFactory *ctlapps.Factory
CniFactory *ctlcni.Factory
CoreFactory *ctlcore.Factory
KubeVirtFactory *ctlkubevirt.Factory
@@ -169,6 +171,13 @@ func SetupManagement(ctx context.Context, restConfig *rest.Config, options *Cont
management.CoreFactory = core
management.starters = append(management.starters, core)

+	apps, err := ctlapps.NewFactoryFromConfigWithOptions(restConfig, opts)
+	if err != nil {
+		return nil, err
+	}
+	management.AppsFactory = apps
+	management.starters = append(management.starters, apps)

cni, err := ctlcni.NewFactoryFromConfigWithOptions(restConfig, opts)
if err != nil {
return nil, err