kvm2: CRD timing issues with helm #20515

What Happened?

Helm releases of charts that ship their own CRDs frequently fail for me with the kvm2 driver (I cannot reproduce the issue with the docker driver, but I need the kvm2 driver for the disks).

For example:

minikube --cpus=4 --memory=8g --driver=kvm2 --network=default --profile demo start --wait=all
# Adding the following line pretty much works around the issue
# sleep 60
helm upgrade -i olm oci://ghcr.io/cloudtooling/helm-charts/olm --debug --version 0.30.0

The issue is not specific to cluster creation; it is just very easy for me to reproduce this way.

It fails because the CRDs are not yet available at resource creation time, even though the CRD creation itself succeeds. I understand Helm 3 may not be perfect with regard to CRDs, and there have been somewhat similar issues in the past, such as Helm v3 beta -- Wait for CRDs to be created #6316.
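
A less blunt workaround than a fixed sleep seems to be waiting for the chart's CRDs to become Established and then retrying the install. A minimal sketch, untested; the CRD names are my guesses at the plural forms of the kinds in the error output below:

# Untested sketch: after a failed first attempt (which does create the CRDs),
# wait for them to report Established, then retry the install.
# CRD names are assumptions derived from the kinds in the error output.
kubectl wait --for=condition=Established --timeout=60s \
  crd/olmconfigs.operators.coreos.com \
  crd/operatorgroups.operators.coreos.com \
  crd/catalogsources.operators.coreos.com \
  crd/clusterserviceversions.operators.coreos.com
helm upgrade -i olm oci://ghcr.io/cloudtooling/helm-charts/olm --debug --version 0.30.0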

Below is a small Makefile to reproduce conveniently:

MINIKUBE_START_ARGS=--cpus=4 --memory=8g --driver=kvm2 --network=default
# MINIKUBE_START_ARGS=--cpus=2 --driver=docker

.DEFAULT_GOAL := help

.PHONY: help
help:  ## Display this help
        @awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n  make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf "  \033[36m%-20s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)


.PHONY: apply-demo
apply-demo: ## Apply demo 
        minikube $(MINIKUBE_START_ARGS) --profile demo start --wait=all
        kubectl get pod -A
        minikube --profile demo ssh "ps -axww"
        # The following line can be used to work around the issue with the kvm2 driver
        # sleep 60
        helm upgrade -i olm oci://ghcr.io/cloudtooling/helm-charts/olm --debug --version 0.30.0 || kubectl explain OLMConfig || true
        sleep 10
        kubectl explain OLMConfig

.PHONY: destroy-demo
destroy-demo: ## Destroy Demo
        minikube --profile demo delete
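
Instead of the commented-out sleep 60, a simple retry loop around the helm command would presumably also paper over the race. Sketch only; I have not verified it is robust, and it assumes the failed first attempt leaves no release record behind:

# Hypothetical alternative to the fixed sleep: retry helm a few times.
for i in 1 2 3; do
  helm upgrade -i olm oci://ghcr.io/cloudtooling/helm-charts/olm --version 0.30.0 && break
  sleep 10
done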

Below is the output of make apply-demo:

minikube --cpus=4 --memory=8g --driver=kvm2 --network=default --profile demo start --wait=all
* [demo] minikube v1.35.0 on Ubuntu 24.04
* Using the kvm2 driver based on user configuration
* Starting "demo" primary control-plane node in "demo" cluster
* Creating kvm2 VM (CPUs=4, Memory=8192MB, Disk=20000MB) ...
* Preparing Kubernetes v1.32.0 on Docker 27.4.0 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "demo" cluster and "default" namespace by default
kubectl get pod -A
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-668d6bf9bc-dnmrw       1/1     Running   0          9s
kube-system   etcd-demo                      1/1     Running   0          15s
kube-system   kube-apiserver-demo            1/1     Running   0          15s
kube-system   kube-controller-manager-demo   1/1     Running   0          15s
kube-system   kube-proxy-wggrm               1/1     Running   0          10s
kube-system   kube-scheduler-demo            1/1     Running   0          15s
kube-system   storage-provisioner            1/1     Running   0          9s
minikube --profile demo ssh "ps -axww"
    PID TTY      STAT   TIME COMMAND
      1 ?        Ss     0:04 /sbin/init noembed norestore
      2 ?        S      0:00 [kthreadd]
      3 ?        I<     0:00 [rcu_gp]
      4 ?        I<     0:00 [rcu_par_gp]
      5 ?        I      0:00 [kworker/0:0-rcu_gp]
      6 ?        I<     0:00 [kworker/0:0H-events_highpri]
      7 ?        I      0:00 [kworker/u8:0-ext4-rsv-conversion]
      8 ?        I<     0:00 [mm_percpu_wq]
      9 ?        S      0:00 [rcu_tasks_rude_]
     10 ?        S      0:00 [rcu_tasks_trace]
     11 ?        S      0:00 [ksoftirqd/0]
     12 ?        I      0:00 [rcu_sched]
     13 ?        S      0:00 [migration/0]
     14 ?        S      0:00 [cpuhp/0]
     15 ?        S      0:00 [cpuhp/1]
     16 ?        S      0:00 [migration/1]
     17 ?        S      0:00 [ksoftirqd/1]
     18 ?        I      0:00 [kworker/1:0-events]
     19 ?        I<     0:00 [kworker/1:0H-events_highpri]
     20 ?        S      0:00 [cpuhp/2]
     21 ?        S      0:00 [migration/2]
     22 ?        S      0:00 [ksoftirqd/2]
     23 ?        I      0:00 [kworker/2:0-ipv6_addrconf]
     24 ?        I<     0:00 [kworker/2:0H-events_highpri]
     25 ?        S      0:00 [cpuhp/3]
     26 ?        S      0:00 [migration/3]
     27 ?        S      0:00 [ksoftirqd/3]
     28 ?        I      0:00 [kworker/3:0-mm_percpu_wq]
     29 ?        I<     0:00 [kworker/3:0H-events_highpri]
     30 ?        S      0:00 [kdevtmpfs]
     31 ?        I<     0:00 [netns]
     32 ?        S      0:00 [kauditd]
     33 ?        I      0:00 [kworker/0:1-events]
     34 ?        S      0:00 [oom_reaper]
     35 ?        I<     0:00 [writeback]
     36 ?        S      0:00 [kcompactd0]
     37 ?        SN     0:00 [khugepaged]
     43 ?        I      0:00 [kworker/1:1-cgroup_destroy]
     51 ?        I<     0:00 [cryptd]
     63 ?        I<     0:00 [kblockd]
     64 ?        I<     0:00 [blkcg_punt_bio]
     65 ?        I<     0:00 [ata_sff]
     66 ?        I<     0:00 [md]
     67 ?        I      0:00 [kworker/2:1-events]
     68 ?        I<     0:00 [kworker/0:1H-kblockd]
     69 ?        I<     0:00 [rpciod]
     70 ?        I<     0:00 [kworker/u9:0-xprtiod]
     71 ?        I<     0:00 [xprtiod]
     72 ?        I<     0:00 [cfg80211]
     74 ?        I      0:00 [kworker/3:1-events]
     75 ?        S      0:00 [kswapd0]
     76 ?        I<     0:00 [nfsiod]
     77 ?        I<     0:00 [cifsiod]
     78 ?        I<     0:00 [smb3decryptd]
     79 ?        I<     0:00 [cifsfileinfoput]
     80 ?        I<     0:00 [cifsoplockd]
     81 ?        I<     0:00 [xfsalloc]
     82 ?        I<     0:00 [xfs_mru_cache]
     84 ?        I<     0:00 [acpi_thermal_pm]
     85 ?        I      0:00 [kworker/u8:1-flush-253:0]
     86 ?        S      0:00 [hwrng]
     87 ?        S      0:00 [scsi_eh_0]
     88 ?        I<     0:00 [scsi_tmf_0]
     89 ?        S      0:00 [scsi_eh_1]
     90 ?        I<     0:00 [scsi_tmf_1]
     91 ?        I      0:00 [kworker/u8:2-events_unbound]
     92 ?        I      0:00 [kworker/3:2-events]
     93 ?        I<     0:00 [dm_bufio_cache]
     94 ?        I<     0:00 [kmpathd]
     95 ?        I<     0:00 [kmpath_handlerd]
     96 ?        I<     0:00 [kworker/3:1H-kblockd]
     97 ?        I<     0:00 [ipv6_addrconf]
     98 ?        I<     0:00 [ceph-msgr]
    112 ?        I      0:00 [kworker/u8:3]
    142 ?        I<     0:00 [kworker/1:1H-kblockd]
    152 ?        Ss     0:00 /usr/sbin/rpcbind -w -f
    153 ?        Ss     0:00 /usr/lib/systemd/systemd-journald
    163 ?        I      0:00 [kworker/2:2-events]
    171 ?        Ss     0:00 /usr/lib/systemd/systemd-udevd
    173 ?        I<     0:00 [kworker/2:1H-kblockd]
    179 ?        Ss     0:00 /usr/lib/systemd/systemd-networkd
    203 ?        Ss     0:00 /usr/lib/systemd/systemd-resolved
    204 ?        Ssl    0:00 /usr/lib/systemd/systemd-timesyncd
    220 ?        Ss     0:00 /usr/sbin/acpid --foreground --netlink
    222 ?        Ss     0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
    223 tty1     Ss+    0:00 /sbin/agetty -o -p -- \u --noclear - linux
    235 ttyS0    Ss+    0:00 /sbin/agetty -o -p -- \u --keep-baud 115200,57600,38400,9600 - vt220
    241 ?        Ss     0:00 /usr/lib/systemd/systemd-logind
    257 ?        I      0:00 [kworker/0:2-events_long]
    292 ?        S      0:00 [jbd2/vda1-8]
    293 ?        I<     0:00 [ext4-rsv-conver]
    360 ?        Ss     0:00 sshd: /usr/sbin/sshd -D -e [listener] 0 of 10-100 startups
    373 ?        Ss     0:00 /usr/sbin/rpc.statd
    382 ?        Ss     0:00 /usr/sbin/rpc.mountd
    385 ?        I<     0:00 [kworker/u9:1-xprtiod]
    386 ?        S      0:00 [lockd]
    423 ?        S      0:00 [nfsd]
    424 ?        S      0:00 [nfsd]
    425 ?        S      0:00 [nfsd]
    426 ?        S      0:00 [nfsd]
    427 ?        S      0:00 [nfsd]
    428 ?        S      0:00 [nfsd]
    429 ?        S      0:00 [nfsd]
    430 ?        S      0:00 [nfsd]
    683 ?        I      0:00 [kworker/1:2-events]
    841 ?        I      0:00 [kworker/u8:4-flush-253:0]
   1160 ?        Ssl    0:00 /usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.k8s.io/pause:3.10 --network-plugin=cni --hairpin-mode=hairpin-veth
   1273 ?        Ssl    0:00 /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
   1281 ?        Ssl    0:00 containerd --config /var/run/docker/containerd/containerd.toml
   1704 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6aed4d1e10252815e5d2c12c37dae273fe2a5108d12e77ac92b451c295faa0b1 -address /var/run/docker/containerd/containerd.sock
   1729 ?        Ss     0:00 /pause
   1763 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id d429ddf01390d83184b7193e9d59f62230256713745e16f3940b62342c021ee6 -address /var/run/docker/containerd/containerd.sock
   1768 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id e7ef8f31bdf62d457240451eeee8dcfd31fe7ed9050e6fbe1d29877a24ed0b98 -address /var/run/docker/containerd/containerd.sock
   1775 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6677a598ee8137cd4618f6f295d2d8298bda4df74fe40a8b5066490da9235d83 -address /var/run/docker/containerd/containerd.sock
   1820 ?        Ss     0:00 /pause
   1838 ?        Ss     0:00 /pause
   1845 ?        Ss     0:00 /pause
   1880 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6002067b3d1d678a401d0ae21c29393eec760490b30e1614e4415b806a4bd75a -address /var/run/docker/containerd/containerd.sock
   1902 ?        Ssl    0:00 etcd --advertise-client-urls=https://192.168.122.44:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.122.44:2380 --initial-cluster=demo=https://192.168.122.44:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.122.44:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.122.44:2380 --name=demo --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
   1923 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id d40d6eca18d74384c15eb6f808d6ffacedf21ddfaeed7576f3cec64cedcfdff7 -address /var/run/docker/containerd/containerd.sock
   1943 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id fb572cdaa9aacac0e3e386c1c6dc821ce4d060a61e14df77af52d56af65069de -address /var/run/docker/containerd/containerd.sock
   1960 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 57fbf170900a2d1ccaf30508027aa4da84228e6e80fec2b02ea6c90f0501777e -address /var/run/docker/containerd/containerd.sock
   1965 ?        Ssl    0:00 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=false
   1996 ?        Ssl    0:00 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true
   2028 ?        Ssl    0:02 kube-apiserver --advertise-address=192.168.122.44 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-account-signing-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
   2089 ?        Ssl    0:00 /var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=demo --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.122.44
   2269 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6c02dd1f5937819f0e741da8d4f7b3ab4ce19818af12164d703e250058b7ed1e -address /var/run/docker/containerd/containerd.sock
   2297 ?        Ss     0:00 /pause
   2324 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 87e369841fb0557c539087bf51b0d5e80c9084d8f97460ee3910f4f528bd7f2f -address /var/run/docker/containerd/containerd.sock
   2346 ?        Ssl    0:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=demo
   2465 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 620cb1a03a74a69fb4d55fdfd35025f69d5b38737afc8c713111c4896bd7f632 -address /var/run/docker/containerd/containerd.sock
   2523 ?        Ss     0:00 /pause
   2745 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 49c7043a5bd6d4db4780137aea46a0d1c4bbae08563ef8ba81b8e49724db6b47 -address /var/run/docker/containerd/containerd.sock
   2766 ?        Ssl    0:00 /coredns -conf /etc/coredns/Corefile
   2795 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id d97f4d33f2751fea1e6863cbccc164137a2f80526238783ae8273ddcd625df51 -address /var/run/docker/containerd/containerd.sock
   2816 ?        Ss     0:00 /pause
   2837 ?        Sl     0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 103f8fa456af7d15c4c694cb52ca433f2fb8566262e415a8ce70a4e702224204 -address /var/run/docker/containerd/containerd.sock
   2858 ?        Ssl    0:00 /storage-provisioner
   2958 ?        I      0:00 [kworker/0:3]
   2965 ?        S      0:00 /usr/lib/systemd/systemd-udevd
   2966 ?        S      0:00 /usr/lib/systemd/systemd-udevd
   3003 ?        I      0:00 [kworker/1:3]
   3024 ?        Ss     0:00 sshd: docker [priv]
   3026 ?        R      0:00 sshd: docker@pts/0
   3027 pts/0    Rs+    0:00 ps -axww
# The following line can be used to work around the issue with the kvm2 driver
# sleep 60
helm upgrade -i olm oci://ghcr.io/cloudtooling/helm-charts/olm --debug --version 0.30.0 || kubectl explain OLMConfig || true
history.go:56: 2025-03-10 16:08:14.65507293 +0100 CET m=+0.030929007 [debug] getting history for release olm
Release "olm" does not exist. Installing it now.
install.go:225: 2025-03-10 16:08:14.657653044 +0100 CET m=+0.033509115 [debug] Original chart version: "0.30.0"
time="2025-03-10T16:08:14+01:00" level=debug msg=resolving host=ghcr.io
time="2025-03-10T16:08:14+01:00" level=debug msg="do request" host=ghcr.io request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=Helm/3.17.1 request.method=HEAD url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/manifests/0.30.0"
time="2025-03-10T16:08:14+01:00" level=debug msg="fetch response received" host=ghcr.io response.header.content-length=73 response.header.content-type=application/json response.header.date="Mon, 10 Mar 2025 15:08:27 GMT" response.header.www-authenticate="Bearer realm=\"https://ghcr.io/token\",service=\"ghcr.io\",scope=\"repository:cloudtooling/helm-charts/olm:pull\"" response.header.x-github-request-id="E88A:165BE5:1E601E:1EB0A2:67CF006B" response.status="401 Unauthorized" url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/manifests/0.30.0"
time="2025-03-10T16:08:14+01:00" level=debug msg=Unauthorized header="Bearer realm=\"https://ghcr.io/token\",service=\"ghcr.io\",scope=\"repository:cloudtooling/helm-charts/olm:pull\"" host=ghcr.io
time="2025-03-10T16:08:14+01:00" level=debug msg="do request" host=ghcr.io request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=Helm/3.17.1 request.method=HEAD url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/manifests/0.30.0"
time="2025-03-10T16:08:15+01:00" level=debug msg="fetch response received" host=ghcr.io response.header.content-length=758 response.header.content-type=application/vnd.oci.image.manifest.v1+json response.header.date="Mon, 10 Mar 2025 15:08:28 GMT" response.header.docker-content-digest="sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d" response.header.docker-distribution-api-version=registry/2.0 response.header.etag="\"sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d\"" response.header.x-github-request-id="E88A:165BE5:1E6096:1EB10E:67CF006C" response.status="200 OK" url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/manifests/0.30.0"
time="2025-03-10T16:08:15+01:00" level=debug msg=resolved desc.digest="sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d" host=ghcr.io
time="2025-03-10T16:08:15+01:00" level=debug msg="do request" digest="sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d" request.header.accept="application/vnd.oci.image.manifest.v1+json, */*" request.header.user-agent=Helm/3.17.1 request.method=GET url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/manifests/sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d"
time="2025-03-10T16:08:15+01:00" level=debug msg="fetch response received" digest="sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d" response.header.content-length=758 response.header.content-type=application/vnd.oci.image.manifest.v1+json response.header.date="Mon, 10 Mar 2025 15:08:28 GMT" response.header.docker-content-digest="sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d" response.header.docker-distribution-api-version=registry/2.0 response.header.etag="\"sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d\"" response.header.x-github-request-id="E88A:165BE5:1E60F2:1EB17D:67CF006C" response.status="200 OK" url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/manifests/sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d"
time="2025-03-10T16:08:15+01:00" level=debug msg="do request" digest="sha256:fc4a636ecf00953778ce054e5d2efcbf38474085e038c2240d8ecd5bfe1dab4e" request.header.accept="application/vnd.cncf.helm.chart.content.v1.tar+gzip, */*" request.header.user-agent=Helm/3.17.1 request.method=GET url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/blobs/sha256:fc4a636ecf00953778ce054e5d2efcbf38474085e038c2240d8ecd5bfe1dab4e"
time="2025-03-10T16:08:15+01:00" level=debug msg="do request" digest="sha256:bfbf7b9214a8d57271c49335af7e72034a466ed244312fc26fff013834daa42a" request.header.accept="application/vnd.cncf.helm.config.v1+json, */*" request.header.user-agent=Helm/3.17.1 request.method=GET url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/blobs/sha256:bfbf7b9214a8d57271c49335af7e72034a466ed244312fc26fff013834daa42a"
time="2025-03-10T16:08:16+01:00" level=debug msg="fetch response received" digest="sha256:bfbf7b9214a8d57271c49335af7e72034a466ed244312fc26fff013834daa42a" response.header.accept-ranges=bytes response.header.age=0 response.header.content-disposition= response.header.content-length=288 response.header.content-type=application/octet-stream response.header.date="Mon, 10 Mar 2025 15:08:29 GMT" response.header.etag="\"0x8DD03050838C3DE\"" response.header.last-modified="Tue, 12 Nov 2024 10:30:30 GMT" response.header.server="Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0" response.header.strict-transport-security="max-age=31536000" response.header.via="1.1 varnish, 1.1 varnish" response.header.x-cache="MISS, MISS" response.header.x-cache-hits="0, 0" response.header.x-fastly-request-id=e0207ef983a38b0ffa2cd2cedaada80af1fa142d response.header.x-ms-blob-type=BlockBlob response.header.x-ms-copy-completion-time="Tue, 12 Nov 2024 10:30:30 GMT" response.header.x-ms-copy-id=1a39b1c1-94fa-469f-80b7-e99e72dc3275 response.header.x-ms-copy-progress=288/288 response.header.x-ms-copy-status=success response.header.x-ms-creation-time="Tue, 12 Nov 2024 10:30:30 GMT" response.header.x-ms-lease-state=available response.header.x-ms-lease-status=unlocked response.header.x-ms-request-id=c05e7f10-301e-0007-2dce-91f537000000 response.header.x-ms-server-encrypted=true response.header.x-ms-version=2019-12-12 response.header.x-served-by="cache-iad-kjyo7100168-IAD, cache-fra-etou8220149-FRA" response.status="200 OK" url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/blobs/sha256:bfbf7b9214a8d57271c49335af7e72034a466ed244312fc26fff013834daa42a"
time="2025-03-10T16:08:16+01:00" level=debug msg="encountered unknown type application/vnd.cncf.helm.config.v1+json; children may not be fetched"
time="2025-03-10T16:08:16+01:00" level=debug msg="fetch response received" digest="sha256:fc4a636ecf00953778ce054e5d2efcbf38474085e038c2240d8ecd5bfe1dab4e" response.header.accept-ranges=bytes response.header.age=0 response.header.content-disposition= response.header.content-length=119252 response.header.content-type=application/octet-stream response.header.date="Mon, 10 Mar 2025 15:08:29 GMT" response.header.etag="\"0x8DD03050830065D\"" response.header.last-modified="Tue, 12 Nov 2024 10:30:30 GMT" response.header.server="Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0" response.header.strict-transport-security="max-age=31536000" response.header.via="1.1 varnish, 1.1 varnish" response.header.x-cache="MISS, MISS" response.header.x-cache-hits="0, 0" response.header.x-fastly-request-id=3608202b784906ca5b8306dda39bf84a2bda0cf3 response.header.x-ms-blob-type=BlockBlob response.header.x-ms-copy-completion-time="Tue, 12 Nov 2024 10:30:30 GMT" response.header.x-ms-copy-id=7696dfc3-b3f2-4ecd-8e16-ca8235b14358 response.header.x-ms-copy-progress=119252/119252 response.header.x-ms-copy-status=success response.header.x-ms-creation-time="Tue, 12 Nov 2024 10:30:30 GMT" response.header.x-ms-lease-state=available response.header.x-ms-lease-status=unlocked response.header.x-ms-request-id=24e8099a-201e-00d0-0ace-91a402000000 response.header.x-ms-server-encrypted=true response.header.x-ms-version=2019-12-12 response.header.x-served-by="cache-iad-kjyo7100058-IAD, cache-fra-etou8220149-FRA" response.status="200 OK" url="https://ghcr.io/v2/cloudtooling/helm-charts/olm/blobs/sha256:fc4a636ecf00953778ce054e5d2efcbf38474085e038c2240d8ecd5bfe1dab4e"
time="2025-03-10T16:08:16+01:00" level=debug msg="encountered unknown type application/vnd.cncf.helm.chart.content.v1.tar+gzip; children may not be fetched"
Pulled: ghcr.io/cloudtooling/helm-charts/olm:0.30.0
Digest: sha256:9f075d8cd4417b02f34abb288bf3a4a25b8ac45314be8350dff6dc47fe61bf3d
install.go:242: 2025-03-10 16:08:16.074124694 +0100 CET m=+1.449980763 [debug] CHART PATH: /home/deas/.cache/helm/repository/olm-0.30.0.tgz

client.go:142: 2025-03-10 16:08:16.087046684 +0100 CET m=+1.462902757 [debug] creating 1 resource(s)
client.go:142: 2025-03-10 16:08:16.131155511 +0100 CET m=+1.507011583 [debug] creating 1 resource(s)
client.go:142: 2025-03-10 16:08:16.210403992 +0100 CET m=+1.586260064 [debug] creating 1 resource(s)
client.go:142: 2025-03-10 16:08:16.233155085 +0100 CET m=+1.609011169 [debug] creating 1 resource(s)
client.go:142: 2025-03-10 16:08:16.258343385 +0100 CET m=+1.634199458 [debug] creating 1 resource(s)
client.go:142: 2025-03-10 16:08:16.271596101 +0100 CET m=+1.647452173 [debug] creating 1 resource(s)
client.go:142: 2025-03-10 16:08:16.281493788 +0100 CET m=+1.657349866 [debug] creating 1 resource(s)
client.go:142: 2025-03-10 16:08:16.298108051 +0100 CET m=+1.673964124 [debug] creating 1 resource(s)
wait.go:50: 2025-03-10 16:08:16.32747589 +0100 CET m=+1.703331962 [debug] beginning wait for 8 resources with timeout of 1m0s
install.go:212: 2025-03-10 16:08:18.395390865 +0100 CET m=+3.771246937 [debug] Clearing REST mapper cache
Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "operatorhubio-catalog" namespace: "operator-lifecycle-manager" from "": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
ensure CRDs are installed first, resource mapping not found for name: "packageserver" namespace: "operator-lifecycle-manager" from "": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
ensure CRDs are installed first, resource mapping not found for name: "cluster" namespace: "" from "": no matches for kind "OLMConfig" in version "operators.coreos.com/v1"
ensure CRDs are installed first, resource mapping not found for name: "global-operators" namespace: "operators" from "": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
ensure CRDs are installed first, resource mapping not found for name: "olm-operators" namespace: "operator-lifecycle-manager" from "": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
ensure CRDs are installed first]
helm.go:86: 2025-03-10 16:08:18.600965899 +0100 CET m=+3.976821971 [debug] [resource mapping not found for name: "operatorhubio-catalog" namespace: "operator-lifecycle-manager" from "": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
ensure CRDs are installed first, resource mapping not found for name: "packageserver" namespace: "operator-lifecycle-manager" from "": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
ensure CRDs are installed first, resource mapping not found for name: "cluster" namespace: "" from "": no matches for kind "OLMConfig" in version "operators.coreos.com/v1"
ensure CRDs are installed first, resource mapping not found for name: "global-operators" namespace: "operators" from "": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
ensure CRDs are installed first, resource mapping not found for name: "olm-operators" namespace: "operator-lifecycle-manager" from "": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
ensure CRDs are installed first]
unable to build kubernetes objects from release manifest
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
	helm.sh/helm/v3/pkg/action/install.go:334
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:317
main.newUpgradeCmd.func2
	helm.sh/helm/v3/cmd/helm/upgrade.go:160
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:1041
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:85
runtime.main
	runtime/proc.go:272
runtime.goexit
	runtime/asm_amd64.s:1700
the server doesn't have a resource type "OLMConfig"
sleep 10
kubectl explain OLMConfig
GROUP:      operators.coreos.com
KIND:       OLMConfig
VERSION:    v1

DESCRIPTION:
    OLMConfig is a resource responsible for configuring OLM.
    
FIELDS:
  apiVersion	<string>
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind	<string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata	<ObjectMeta> -required-
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec	<Object>
    OLMConfigSpec is the spec for an OLMConfig resource.

  status	<Object>
    OLMConfigStatus is the status for an OLMConfig resource.


Using

make MINIKUBE_START_ARGS=--driver=docker apply-demo

instead works for me.

Attach the log file

log.txt

Operating System

Ubuntu

Driver

KVM2
