Description
What steps did you take:
We install several CronJobs in remote clusters via kapp-controller.
Our setup: OCI bundle -> App CR -> kapp-controller -> remote cluster
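For context, the App CR looks roughly like this (name, bundle image, and kubeconfig secret are placeholders, not our real values):

apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: etcd-backup                # placeholder
  namespace: kapp-controller
spec:
  # access to the remote cluster via a kubeconfig stored in a Secret (placeholder name)
  cluster:
    kubeconfigSecretRef:
      name: remote-cluster-kubeconfig
  fetch:
  - imgpkgBundle:
      image: registry.example.com/bundles/etcd-backup:latest  # placeholder
  template:
  - ytt:
      paths:
      - config/
  - kbld: {}
  deploy:
  - kapp: {}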
What happened:
We updated spec.jobTemplate.spec.template.spec.initContainers[0] by adding .securityContext.runAsUser.
But kapp-controller did not detect the change and reported no diffs.
For testing purposes we made the same change to spec.jobTemplate.spec.template.spec.containers[0], which worked fine.
What did you expect:
The CronJob gets updated.
Anything else you would like to add:
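One way to narrow this down may be to render the bundle and ask the kapp CLI for a diff directly, outside of kapp-controller (app name and file path are placeholders):

# --diff-run calculates and prints the diff without applying it
kapp deploy -a etcd-backup-test -f rendered-cronjob.yaml --diff-run --diff-changes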
Environment:
Kubernetes: v1.31.1
kapp-controller: v0.53.1

kbld.k14s.io/images:
- origins:
  - local:
      path: /home/runner/work/kapp-controller/kapp-controller
  - git:
      dirty: true
      remoteURL: https://github.com/carvel-dev/kapp-controller
      sha: 00aa728d6823620c03e3f4917cd565119b17c7d2
      tags:
      - v0.53.1
  url: ghcr.io/carvel-dev/kapp-controller@sha256:da1ac76b07c0961ec0a1573615cb8c121fd0a4c443a0bb7f73780242d05161f0
This is the template used in the bundle:
#@ load("@ytt:data", "data")
#@ if data.values.k8s_version.startswith("v1.31."):
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup-restic
  namespace: kube-system
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  schedule: '0,30 * * * *'
  successfulJobsHistoryLimit: 0
  suspend: false
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet
          nodeSelector:
            node-role.kubernetes.io/control-plane: ""
          tolerations:
          - key: node-role.kubernetes.io/control-plane
            effect: NoSchedule
            operator: Exists
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
            operator: Exists
          restartPolicy: OnFailure
          volumes:
          - name: etcd-backup
            emptyDir: {}
          - name: host-pki
            hostPath:
              path: /etc/kubernetes/pki
          initContainers:
          - name: snapshoter
            image: #@ data.values.oci_registry_1 + "/bitnami/etcd:3.5.16"
            securityContext:
              runAsUser: 0
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - |-
              set -euf
              mkdir -p /backup/pki/kubernetes
              mkdir -p /backup/pki/etcd
              cp -a /etc/kubernetes/pki/etcd/ca.crt /backup/pki/etcd/
              cp -a /etc/kubernetes/pki/etcd/ca.key /backup/pki/etcd/
              cp -a /etc/kubernetes/pki/ca.crt /backup/pki/kubernetes
              cp -a /etc/kubernetes/pki/ca.key /backup/pki/kubernetes
              cp -a /etc/kubernetes/pki/front-proxy-ca.crt /backup/pki/kubernetes
              cp -a /etc/kubernetes/pki/front-proxy-ca.key /backup/pki/kubernetes
              cp -a /etc/kubernetes/pki/sa.key /backup/pki/kubernetes
              cp -a /etc/kubernetes/pki/sa.pub /backup/pki/kubernetes
              etcdctl snapshot save /backup/etcd-snapshot.db
            env:
            - name: ETCDCTL_API
              value: "3"
            - name: ETCDCTL_DIAL_TIMEOUT
              value: 3s
            - name: ETCDCTL_CACERT
              value: /etc/kubernetes/pki/etcd/ca.crt
            - name: ETCDCTL_CERT
              value: /etc/kubernetes/pki/etcd/healthcheck-client.crt
            - name: ETCDCTL_KEY
              value: /etc/kubernetes/pki/etcd/healthcheck-client.key
            - name: ETCD_HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            volumeMounts:
            - mountPath: /backup
              name: etcd-backup
            - mountPath: /etc/kubernetes/pki
              name: host-pki
              readOnly: true
          containers:
          - name: uploader
            image: #@ data.values.oci_registry_2 + "/restic/restic:0.17.1"
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - |-
              set -euf
              restic snapshots -q || restic init -q
              restic backup --tag=etcd --host=${ETCD_HOSTNAME} /backup
              restic forget --prune --group-by tag --keep-daily 3 --keep-last 48
            env:
            - name: ETCD_HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: RESTIC_REPOSITORY
              value: #@ "s3:" + str(data.values.s3_endpoint) + "/" + str(data.values.bucket_name)
            - name: RESTIC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: s3-restic-credentials
                  key: restic_password
            - name: AWS_DEFAULT_REGION
              value: #@ str(data.values.default_region)
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  key: AWS_ACCESS_KEY_ID
                  name: s3-restic-credentials
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: AWS_SECRET_ACCESS_KEY
                  name: s3-restic-credentials
            volumeMounts:
            - mountPath: /backup
              name: etcd-backup
#@ end
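For completeness, the template expects data values along these lines (example values only, not our real configuration):

#@data/values
---
k8s_version: v1.31.1
oci_registry_1: registry.example.com
oci_registry_2: registry.example.com
s3_endpoint: s3.example.com
bucket_name: etcd-backups
default_region: eu-central-1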