You can use {the-lc} {project-first} to migrate virtual machines from the following source providers to {virt} destination providers:
- VMware vSphere
- {rhv-full} ({rhv-short})
- {osp}
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote {virt} clusters
The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.
This release has the following technical changes:
In earlier releases of {project-short}, users had to specify a fingerprint when creating a vSphere provider. This required users to retrieve the fingerprint from the server that vCenter runs on. {project-short} no longer requires this fingerprint as an input, but rather computes it from the specified certificate in the case of a secure connection or automatically retrieves it from the server that runs vCenter/ESXi in the case of an insecure connection.
The user interface console has improved the process of creating a migration plan. The new migration plan dialog enables faster creation of migration plans. It includes only the minimal settings that are required, while you can configure advanced settings separately. The new dialog also provides defaults for network and storage mappings, where applicable. The new dialog can also be opened from the Provider > Virtual Machines tab, after selecting the virtual machines to migrate. It also aligns better with the user experience in the OCP console.
Virtual machine preferences have replaced {ocp-name} templates. {project-short} currently falls back to using {ocp-name} templates when a relevant preference is not available. Custom mappings of guest operating system type to virtual machine preference can be configured by using config maps, either to use custom virtual machine preferences or to support additional guest operating system types.
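The following is a minimal sketch of such a mapping, assuming a config map whose keys are guest operating system identifiers from the provider inventory and whose values are virtual machine preference names. The config map name, namespace, keys, and values are illustrative assumptions, not confirmed by this document:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vm-preference-mapping   # hypothetical name
  namespace: openshift-mtv      # assumes the default {project-short} namespace
data:
  # guest operating system identifier -> virtual machine preference (illustrative)
  rhel9_64Guest: rhel.9
  windows2019srv_64Guest: windows.2k19
```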
Migration from OVA moves from being a Technical Preview feature to being a fully supported feature.
{project-short} creates the VM with its desired Running state on the target provider, instead of creating the VM and then running it as an additional operation. (MTV-794)
The {project-short} web console can no longer download must-gather logs. With this update, you must download must-gather logs by using CLI commands. For more information, see Must Gather Operator.
{project-short} no longer runs pvc-init pods during cold migration from a vSphere provider to the {ocp-name} cluster that {project-short} is deployed on. However, in other flows where data volumes are used, they are set with the cdi.kubevirt.io/storage.bind.immediate.requested annotation, and CDI runs first-consume pods for storage classes with the WaitForFirstConsumer volume binding mode.
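For illustration, a minimal data volume carrying that annotation might look as follows. The resource name, source, and storage class name are hypothetical; only the annotation is taken from this document:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv                      # hypothetical name
  annotations:
    # requests immediate binding, even for WaitForFirstConsumer storage classes
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
  source:
    blank: {}                           # placeholder source for illustration
  storage:
    storageClassName: example-wffc-sc   # hypothetical WaitForFirstConsumer class
    resources:
      requests:
        storage: 10Gi
```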
This section describes the features and enhancements introduced in {project-full} 2.6:
You can now specify a CA certificate that can be used to authenticate the server that runs vCenter or ESXi, depending on the specified SDK endpoint of the vSphere provider. (MTV-530)
You can now specify a CA certificate that can be used to authenticate the API server of a remote {ocp-name} cluster. (MTV-728)
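As a sketch of both certificate options, assuming the provider credentials are stored in a Secret and the CA certificate is supplied in a cacert field, which is how forklift-style provider secrets are commonly laid out; the secret name, namespace, and keys are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-provider-secret       # hypothetical name
  namespace: openshift-mtv            # assumes the default {project-short} namespace
type: Opaque
stringData:
  user: administrator@vsphere.local   # illustrative credentials
  password: example-password
  # assumed key: PEM-encoded CA certificate used to authenticate the server
  cacert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```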
{project-short} enables the configuration of vSphere providers with the SDK of ESXi. You need to select ESXi as the Endpoint type of the vSphere provider and specify the URL of the SDK of the ESXi server. (MTV-514)
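A hedged sketch of an ESXi-backed provider, assuming the endpoint type is expressed as an sdkEndpoint setting on the Provider resource, in the style of the forklift.konveyor.io API; field names and values are assumptions:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: esxi-provider                 # hypothetical name
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://esxi.example.com/sdk   # URL of the SDK of the ESXi server
  settings:
    sdkEndpoint: esxi                 # assumed setting that selects the ESXi SDK
  secret:
    name: esxi-provider-secret        # references the provider credentials
    namespace: openshift-mtv
```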
{project-short} supports the migration of VMs that were created from images in {osp}. (MTV-644)
{project-short} supports migrations of VMs that are set with Fibre Channel (FC) LUNs from {rhv-short}. As with other LUN disks, you need to ensure the {ocp-name} nodes have access to the FC LUNs. During the migrations, the FC LUNs are detached from the source VMs in {rhv-short} and attached to the migrated VMs in {ocp-name}. (MTV-659)
{project-short} sets the CPU type of migrated VMs in {ocp-name} to match their custom CPU type in {rhv-short}. In addition, a new option was added to migration plans that are set with {rhv-short} as a source provider to preserve the original CPU types of source VMs. When this option is selected, {project-short} identifies the CPU type based on the cluster configuration and sets this CPU type for migrated VMs whose source VMs are not set with a custom CPU type. (MTV-547)
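A minimal sketch of how this plan option might be expressed, assuming it is exposed as a preserveClusterCpuModel flag on the Plan resource; the field name is an assumption, and required fields such as mappings and the VM list are omitted:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: rhv-migration-plan      # hypothetical name
  namespace: openshift-mtv
spec:
  # assumed flag: preserve the source cluster CPU type for VMs
  # that are not set with a custom CPU type
  preserveClusterCpuModel: true
  provider:
    source:
      name: rhv-provider        # hypothetical {rhv-short} source provider
      namespace: openshift-mtv
    destination:
      name: host                # hypothetical local cluster provider
      namespace: openshift-mtv
```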
Red Hat Enterprise Linux (RHEL) 9 does not support RHEL 6 as a guest operating system. Therefore, RHEL 6 is not supported in {ocp-name} Virtualization. With this update, a validation for RHEL 6 as a guest operating system was added to {ocp-name} Virtualization. (MTV-413)
The ability to retrieve CA certificates, which was available in previous versions, has been restored. The vSphere Verify certificate option is in the add-provider dialog. This option was removed in the transition to the {ocp} console and has now been added to the console. This functionality is now also available for {rhv-short}, {osp}, and {ocp-name} providers. (MTV-737)
{project-short} validates the availability of a VDDK image that is specified for a vSphere provider on the target {ocp-name} cluster as part of the validation of a migration plan. {project-short} also checks whether the libvixDiskLib.so symbolic link (symlink) exists within the image. If the validation fails, the migration plan cannot be started. (MTV-618)
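For illustration, the VDDK image might be specified on the provider as follows, assuming a vddkInitImage setting in the style of the forklift.konveyor.io API; the setting name and image reference are assumptions:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider                    # hypothetical name
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk
  settings:
    # assumed setting: the VDDK image must be reachable from the target cluster
    # and must contain the libvixDiskLib.so symlink
    vddkInitImage: quay.io/example/vddk:8.0 # hypothetical image reference
  secret:
    name: vsphere-provider-secret
    namespace: openshift-mtv
```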
{project-short} presents a warning when attempting to migrate a VM that is set with a TPM device from {rhv-short} or vSphere. The migrated VM in {ocp-name} is set with a TPM device but without the content of the TPM device from the source environment. (MTV-378)
With this update, you can edit plans that have failed to migrate any VMs. Some plans fail or are canceled because of incorrect network and storage mappings. You can now edit these plans until they succeed. (MTV-779)
The validation service includes default validation rules for virtual machines from the Open Virtual Appliance (OVA). (MTV-669)
This release has the following known issues:
The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)
vSphere only: migrating a VM with encrypted disks fails. Migrations from {rhv-short} and {osp} do not fail, but the encryption key might be missing on the target {ocp} cluster.
Warm migration from {rhv-short} fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)
When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might fail to be scheduled. Workaround: Use shared storage on the target {ocp} cluster.
Warm migrations and migrations to remote {ocp} clusters from vSphere do not support the same guest operating systems that are supported in cold migrations and migrations to the local {ocp} cluster. This limitation is related to the guest operating systems that virt-v2v supports on RHEL 8 and RHEL 9.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.
When migrating VMs that are installed with RHEL 9 as a guest operating system from vSphere, the network interfaces of the VMs might be disabled when they start in {ocp-name} Virtualization. (MTV-491)
When migrating a virtual machine (VM) with NVMe disks from vSphere, the migration process fails, and the web console shows that the Convert image to kubevirt stage is running but has not finished successfully. (MTV-963)
Migrating an image-based VM without the virtual_size field can fail on a block mode storage class. (MTV-946)
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
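As a sketch, archiving can be expressed by setting an archived flag on the plan, assuming the Plan resource exposes spec.archived as in the forklift.konveyor.io API; the field name is an assumption, and other plan fields are omitted:

```yaml
# hedged sketch: archive the plan before deleting it so that temporary
# resources such as importer pods and data volumes are cleaned up
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: example-plan        # hypothetical plan name
  namespace: openshift-mtv
spec:
  archived: true            # assumed field that triggers the cleanup
```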
Migrating VMs with independent persistent disks from VMware to {virt} fails. (MTV-993)
When vSphere does not receive updates about the guest operating system from the VMware tools, it considers the information about the guest operating system to be outdated and ceases to report it. When this occurs, {project-short} is unaware of the guest operating system of the VM and is unable to associate it with the appropriate virtual machine preference or {ocp-name} template. (MTV-1046)
The migration process fails when migrating an image-based VM from {osp} to the default project. (MTV-964)
For a complete list of all known issues in this release, see the list of Known Issues in Jira.
This release has the following resolved issues:
In {project-short} 2.6.0, there was a problem in copying VMs with multiple disks from VMware vSphere and from OVA files. The migrations appeared to succeed, but all the disks were transferred to the same PV in the target environment, while the other PVs remained empty. In some cases, bootable disks were overwritten, so the VM could not boot. In other cases, data from the other disks was missing. The problem was resolved in {project-short} 2.6.1. (MTV-1067)
In {project-short} 2.6.0, migrations from one {ocp} cluster to another failed when the time to transfer the disks of a VM exceeded the time to live (TTL) of the Export API in {ocp-name}, which was set to 2 hours by default. The problem was resolved in {project-short} 2.6.1 by setting the default TTL of the Export API to 12 hours, which greatly reduces the possibility of an expiration of the Export API. Additionally, you can increase or decrease the TTL setting as needed. (MTV-1052)
Previously, if a VM was configured with a disk that was on a datastore that was no longer available in vSphere at the time a migration was attempted, the forklift-controller crashed, rendering {project-short} unusable. In {project-short} 2.6.1, {project-short} presents a critical validation for VMs with such disks, informing users of the problem, and the forklift-controller no longer crashes, although it cannot transfer the disk. (MTV-1029)
In earlier versions of {project-short}, the PV was not removed when the OVA provider was deleted. This has been resolved in {project-short} 2.6.0, and the PV is automatically deleted when the OVA provider is deleted. (MTV-848)
In earlier versions of {project-short}, when migrating a VM that has a snapshot from VMware, the VM that was created in {ocp-name} Virtualization contained the data in the snapshot but not the latest data of the VM. This has been resolved in {project-short} 2.6.0. (MTV-447)
In earlier releases of {project-short}, when you canceled and deleted a failed migration plan after a PVC had been created and populate pods had been spawned, the populate pods and PVC were not deleted. You had to delete the pods and PVC manually. This issue has been resolved in {project-short} 2.6.0. (MTV-678)
In earlier releases of {project-short}, when migrating from {ocp} to {ocp}, the version of the source provider cluster had to be {ocp} version 4.13 or later. This issue has been resolved in {project-short} 2.6.0, with validation being shown when migrating from versions of {ocp-name} before 4.13. (MTV-734)
In earlier releases of {project-short}, multiple disks from different storage domains were always mapped to a single storage class, regardless of the storage mapping that was configured. This issue has been resolved in {project-short} 2.6.0. (MTV-1008)
In earlier releases of {project-short}, a VM that was migrated from an OVA that did not include the firmware type in its OVF configuration was set with UEFI. This was incorrect for VMs that were configured with BIOS. This issue has been resolved in {project-short} 2.6.0, as {project-short} now consumes the firmware that is detected by virt-v2v during the conversion of the disks. (MTV-759)
In earlier releases of {project-short}, when configuring a transfer network for vSphere hosts, the console plugin created the Host CR before creating its secret. The secret should be specified first in order to validate it before the Host CR is posted. This issue has been resolved in {project-short} 2.6.0. (MTV-868)
In earlier releases of {project-short}, when adding an OVA provider, the error message ConnectionTestFailed instantly appeared, although the provider had been created successfully. This issue has been resolved in {project-short} 2.6.0. (MTV-671)
In earlier releases of {project-short}, the ConnectionTestSucceeded condition was set to True even when the URL was different from the API endpoint of the RHV Manager. This issue has been resolved in {project-short} 2.6.0. (MTV-740)
In earlier releases of {project-short}, migrating a VM that is placed in a Data Center that is stored directly under /vcenter in vSphere succeeded. However, the migration failed when the Data Center was stored inside a folder. This issue was resolved in {project-short} 2.6.0. (MTV-796)
The OVA inventory watcher detects file changes, including deleted files. Updates from the ova-provider-server pod are now sent every five minutes to the forklift-controller pod, which updates the inventory. (MTV-733)
In earlier releases of {project-short}, the error logs lacked clear information to identify the reason for a failure to create a PV on a destination storage class that does not have a configured storage profile. This issue was resolved in {project-short} 2.6.0. (MTV-928)
In earlier releases of {project-short}, an earlier failed migration could have left an outdated ovirtvolumepopulator. When starting a new plan for the same VM to the same project, the CreateDataVolumes phase did not create populator PVCs when transitioning to CopyDisks, causing the CopyDisks phase to stall indefinitely. This issue was resolved in {project-short} 2.6.0. (MTV-929)
For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.