[enterprise-4.19] [CNV-52364] IBM Z remove tp and update compatibility list #93490

45 changes: 16 additions & 29 deletions virt/install/preparing-cluster-for-virt.adoc
@@ -53,15 +53,8 @@ include::snippets/technology-preview.adoc[]
endif::[]
--

* {ibm-z-name} or {ibm-linuxone-name} (s390x architecture) systems where an {product-title} cluster is installed in a logical partition (LPAR). See xref:../../installing/installing_ibm_z/preparing-to-install-on-ibm-z.adoc#preparing-to-install-on-ibm-z_preparing-to-install-on-ibm-z[Preparing to install on {ibm-z-title} and {ibm-linuxone-title}].
+
--
ifdef::openshift-enterprise[]
:FeatureName: Using {VirtProductName} in a cluster deployed on s390x architecture
include::snippets/technology-preview.adoc[]
:!FeatureName:
endif::[]
--
* {ibm-z-name} or {ibm-linuxone-name} (s390x architecture) systems where an {product-title} cluster is installed in logical partitions (LPARs). See xref:../../installing/installing_ibm_z/preparing-to-install-on-ibm-z.adoc#preparing-to-install-on-ibm-z_preparing-to-install-on-ibm-z[Preparing to install on {ibm-z-title} and {ibm-linuxone-title}].

endif::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]

ifdef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
@@ -85,33 +78,32 @@ ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
[id="ibm-z-linuxone-compatibility_{context}"]
=== {ibm-z-title} and {ibm-linuxone-title} compatibility

You can use {VirtProductName} in an {product-title} cluster that is installed in a logical partition (LPAR) on an {ibm-z-name} or {ibm-linuxone-name} (s390x architecture) system.

ifdef::openshift-enterprise[]
:FeatureName: Using {VirtProductName} in a cluster deployed on s390x architecture
include::snippets/technology-preview.adoc[]
:!FeatureName:
endif::[]
You can use {VirtProductName} in an {product-title} cluster that is installed in logical partitions (LPARs) on an {ibm-z-name} or {ibm-linuxone-name} (`s390x` architecture) system.

Some features are not currently available on s390x architecture, while others require workarounds or procedural changes. These lists are subject to change.
Some features are not currently available on `s390x` architecture, while others require workarounds or procedural changes. These lists are subject to change.

[discrete]
[id="currently-unavailable-ibm-z_{context}"]
==== Currently unavailable features

The following features are not available or do not function on s390x architecture:
The following features are currently not supported on `s390x` architecture:

* Memory hot plugging and hot unplugging
* Watchdog devices
* Node Health Check Operator
* SR-IOV Operator
* PCI passthrough
* {VirtProductName} cluster checkup framework
* {VirtProductName} on a cluster installed in FIPS mode
* IPv6
* {ibm-name} Storage Scale
* {hcp-capital} for {VirtProductName}

The following features are not applicable on `s390x` architecture:

* virtual Trusted Platform Module (vTPM) devices
* {pipelines-title} tasks
* UEFI mode for VMs
* PCI passthrough
* USB host passthrough
* Configuring virtual GPUs
* {VirtProductName} cluster checkup framework
* Creating and managing Windows VMs

[discrete]
@@ -124,13 +116,6 @@ The following features are available for use on s390x architecture but function

* When xref:../../virt/managing_vms/advanced_vm_management/virt-configuring-default-cpu-model.adoc#virt-configuring-default-cpu-model_virt-configuring-default-cpu-model[configuring the default CPU model], the `spec.defaultCPUModel` value is `"gen15b"` for an {ibm-z-title} cluster.
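+
A minimal sketch of this setting, assuming it is applied on the `HyperConverged` custom resource with the default `kubevirt-hyperconverged` name in the `openshift-cnv` namespace:
+
[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  defaultCPUModel: "gen15b" # default CPU model for VMs on IBM Z
----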

* When xref:../../virt/vm_networking/virt-hot-plugging-network-interfaces.adoc#virt-hot-unplugging-bridge-network-interface_virt-hot-plugging-network-interfaces[hot unplugging a secondary network interface], the `virtctl migrate <vm_name>` command does not migrate the VM. As a workaround, restart the VM by running the following command:
+
[source,terminal]
----
$ virtctl restart <vm_name>
----

* When xref:../../virt/monitoring/virt-exposing-downward-metrics.adoc#virt-configuring-downward-metrics_virt-using-downward-metrics_virt-exposing-downward-metrics[configuring a downward metrics device], if you use a VM preference, the `spec.preference.name` value must be set to `rhel.9.s390x` or another available preference with the format `*.s390x`.
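+
A minimal sketch of the relevant `VirtualMachine` fields, assuming a hypothetical VM named `example-vm` and that the downward metrics device is enabled under the domain devices:
+
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  preference:
    name: rhel.9.s390x # must be an available preference with the *.s390x format
  template:
    spec:
      domain:
        devices:
          downwardMetrics: {} # assumed placement of the downward metrics device
----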

endif::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
@@ -249,6 +234,8 @@ You can configure one of the following high-availability (HA) options for your cluster:
[NOTE]
====
In {product-title} clusters installed using installer-provisioned infrastructure and with a properly configured `MachineHealthCheck` resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See xref:../../virt/nodes/virt-node-maintenance.adoc#run-strategies[Run strategies] for more detailed information about the potential outcomes and how run strategies affect those outcomes.

Currently, installer-provisioned infrastructure (IPI) is not supported on {ibm-z-name}.
====
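+
A minimal sketch of a `MachineHealthCheck` resource that triggers this recycling, assuming a hypothetical check that targets worker machines; the name, label selector, timeout, and `maxUnhealthy` values are illustrative only:
+
[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-health-check # hypothetical name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker # target worker machines
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s # machine is considered unhealthy after 5 minutes NotReady
  maxUnhealthy: 40% # pause remediation if too many machines are unhealthy
----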

* Automatic high availability for both IPI and non-IPI is available by using the *Node Health Check Operator* on the {product-title} cluster to deploy the `NodeHealthCheck` controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the link:https://access.redhat.com/documentation/en-us/workload_availability_for_red_hat_openshift[Workload Availability for Red Hat OpenShift] documentation.
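+
A minimal sketch of a `NodeHealthCheck` resource, assuming the Self Node Remediation Operator is installed; the resource name, selector, and remediation template reference are illustrative and depend on your installation:
+
[source,yaml]
----
apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: example-nodehealthcheck # hypothetical name
spec:
  minHealthy: 51% # remediate only while a majority of selected nodes stay healthy
  selector:
    matchExpressions:
    - key: node-role.kubernetes.io/worker
      operator: Exists
  unhealthyConditions:
  - type: Ready
    status: "False"
    duration: 300s
  remediationTemplate: # template provided by the remediation operator
    apiVersion: self-node-remediation.medik8s.io/v1alpha1
    kind: SelfNodeRemediationTemplate
    namespace: openshift-workload-availability # namespace varies by installation
    name: self-node-remediation-automatic-strategy-template # name varies by installation
----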