[OSDOCS-13478]: Stub topic for HCP 4.19 release notes #92043

Draft: wants to merge 1 commit into base branch enterprise-4.19.

hosted_control_planes/hosted-control-planes-release-notes.adoc: 76 changes (12 additions, 64 deletions)

@@ -8,57 +8,20 @@ toc::[]

Release notes contain information about new and deprecated features, changes, and known issues.

[id="hcp-4-18-release-notes_{context}"]
== {hcp-capital} release notes for {product-title} 4.18
[id="hcp-4-19-release-notes_{context}"]
== {hcp-capital} release notes for {product-title} 4.19

With this release, {hcp} for {product-title} 4.18 is available. {hcp-capital} for {product-title} 4.18 supports {mce} version 2.8.
With this release, {hcp} for {product-title} 4.19 is available. {hcp-capital} for {product-title} 4.19 supports {mce} version 2.9.

[id="hcp-4-18-new-features-and-enhancements_{context}"]
[id="hcp-4-10-new-features-and-enhancements_{context}"]
=== New features and enhancements

This release adds improvements related to the following concepts:

[id="hcp-4-18-hcp-ocp_{context}"]
==== Comparing {hcp} with {product-title}

The {product-title} documentation now highlights the differences between {hcp} and standalone {product-title}. For more information, see xref:../hosted_control_planes/index.adoc#hcp-ocp-differences_hcp-overview[Differences between {hcp} and {product-title}].

[id="hcp-4-18-proxy_{context}"]
==== Proxy configuration for {hcp}

Configuring proxy support for {hcp} has a few differences from configuring proxy support for standalone {product-title}. For more information, see xref:../hosted_control_planes/hcp-networking.adoc[Networking for {hcp}].
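
The following sketch is illustrative only and is not taken from the linked documentation. It shows where proxy settings live on a `HostedCluster` resource, assuming a hypothetical cluster named `example` in the `clusters` namespace and placeholder proxy endpoints:

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example # hypothetical cluster name
  namespace: clusters # hypothetical namespace
spec:
  configuration:
    proxy: # proxy settings for the hosted cluster
      httpProxy: http://proxy.example.com:3128 # placeholder endpoint
      httpsProxy: http://proxy.example.com:3128 # placeholder endpoint
      noProxy: .cluster.local,.svc,localhost,127.0.0.1 # placeholder exclusions
----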

[id="hcp-4-18-runc-crun_{context}"]
==== Default container runtime for worker nodes is crun

In {hcp} for {product-title} 4.18 or later, the default container runtime for worker nodes is changed from runC to crun.
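
If a workload still requires runC, the runtime can be pinned explicitly. The following is a minimal sketch of a `ContainerRuntimeConfig` object for that purpose, with a hypothetical name; note that in {hcp}, such objects are typically embedded in a config map that the `spec.config` field of a `NodePool` resource references, rather than applied directly:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: pin-runc # hypothetical name
spec:
  machineConfigPoolSelector: # selector as used in standalone clusters
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    defaultRuntime: runc # opt back in to runC instead of the crun default
----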

[id="bug-fixes-hcp-rn-4-18_{context}"]
[id="bug-fixes-hcp-rn-4-19_{context}"]
=== Bug fixes

* Previously, you could not create node pools that use ARM64 architecture on a {hcp} cluster with the `platform` parameter set to either `agentBareMetal` or `none`. This issue did not exist for a {hcp} cluster that runs on an {aws-short} or an {azure-full} platform. With this release, a fix ensures that you can create node pools that use ARM64 architecture in a {hcp} cluster that has `platform` set to either `agentBareMetal` or `none`, as shown in the first sketch after this list. (link:https://issues.redhat.com/browse/OCPBUGS-48579[*OCPBUGS-48579*])

* Previously, when you attempted to use the {hcp} CLI to create a cluster in a disconnected environment, the installation command failed. An issue existed with the registry that hosts the command. With this release, a fix to the command registry means that you can use the {hcp} CLI to create a cluster in a disconnected environment. (link:https://issues.redhat.com/browse/OCPBUGS-48170[*OCPBUGS-48170*])

* Previously, incorrect addresses were being passed to the Kubernetes EndpointSlice on a cluster, and this issue prevented the installation of the MetalLB Operator on an Agent-based cluster in an IPv6 disconnected environment. With this release, a fix modifies the address evaluation method. Red{nbsp}Hat Marketplace pods can now successfully connect to the cluster API server, so that the installation of MetalLB Operator and handling of ingress traffic in IPv6 disconnected environments can occur. (link:https://issues.redhat.com/browse/OCPBUGS-46665[*OCPBUGS-46665*])

* Previously, in {hcp} on the Agent platform, ARM64 architecture was not allowed in the `NodePool` API. As a consequence, heterogeneous clusters could not be deployed on the Agent platform. With this release, the API now allows ARM64 architecture node pools on the Agent platform. (link:https://issues.redhat.com/browse/OCPBUGS-46373[*OCPBUGS-46373*])

* Previously, the default `node-monitor-grace-period` value was 50 seconds. As a consequence, nodes did not stay in the `Ready` state long enough for Kubernetes components to reconnect, coordinate, and complete their requests. With this release, the default `node-monitor-grace-period` value is 55 seconds. As a result, the issue is resolved and deployments have enough time to complete. (link:https://issues.redhat.com/browse/OCPBUGS-46008[*OCPBUGS-46008*])

* Previously, the provider ID for the `IBMPowerVSMachine` object was not populated properly because the IBM Cloud Workspace ID was retrieved incorrectly. As a consequence, certificate signing requests (CSRs) remained pending in hosted clusters. With this release, the provider ID is properly populated for the `IBMPowerVSMachine` object. As a result, no CSRs remain pending, and all cluster Operators move to the available state. (link:https://issues.redhat.com/browse/OCPBUGS-44944[*OCPBUGS-44944*])

* Previously, when you created a hosted cluster by using a shared VPC where the private DNS hosted zones existed in the cluster creator account, the private link controller failed to create the Route 53 DNS records in the local zone. With this release, the private link controller assumes the shared VPC ingress role to create those records, in the same way that it assumes the shared VPC role to create the VPC endpoint in the VPC owner account. As a result, you can create a hosted cluster in a shared VPC configuration where the private hosted zones exist in the cluster creator account. (link:https://issues.redhat.com/browse/OCPBUGS-44630[*OCPBUGS-44630*])

* Previously, when the hosted cluster `controllerAvailabilityPolicy` parameter was set to `SingleReplica`, the `podAntiAffinity` rules on networking components blocked those components from becoming available. With this release, the issue is resolved; see the second sketch after this list for the relevant field. (link:https://issues.redhat.com/browse/OCPBUGS-39313[*OCPBUGS-39313*])

* Previously, periodic conformance jobs in {hcp} failed because of changes to the core operating system. These failed jobs caused the OpenShift API deployment to fail. With this release, an update recursively copies individual trusted certificate authority (CA) certificates instead of copying a single file, so that the periodic conformance jobs succeed and the OpenShift API runs as expected. (link:https://issues.redhat.com/browse/OCPBUGS-38943[*OCPBUGS-38943*])

* Previously, you could not create `NodePool` resources with ARM64 architecture on platforms other than {aws-short} or {azure-short}. This bug caused CEL validation errors that prevented the addition of bare-metal compute nodes when you created a `NodePool` resource. The fix adds `"self.platform.type == 'None'"` to the `NodePool` spec validation rules. As a result, you can now create `NodePool` resources that specify ARM64 architecture on bare-metal platforms with the platform type set to `None`. (link:https://issues.redhat.com/browse/OCPBUGS-46440[*OCPBUGS-46440*])

* Previously, when you created a hosted cluster in a shared VPC, the private link controller sometimes failed to assume the shared VPC role to manage the VPC endpoints in the shared VPC. With this release, a client is created for every reconciliation in the private link controller so that you can recover from invalid clients. As a result, the hosted cluster endpoints and the hosted cluster are created successfully. (link:https://issues.redhat.com/browse/OCPBUGS-45184[*OCPBUGS-45184*])
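
Several of the preceding fixes (*OCPBUGS-48579*, *OCPBUGS-46373*, and *OCPBUGS-46440*) relate to creating ARM64 node pools outside of {aws-short} and {azure-short}. The following minimal sketch shows a `NodePool` resource of that shape; all names and the release image are placeholders, not values taken from the bug reports:

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-arm64 # hypothetical node pool name
  namespace: clusters # hypothetical namespace
spec:
  clusterName: example # hypothetical hosted cluster name
  replicas: 2
  arch: arm64 # the architecture that the fixes unblock
  management:
    upgradeType: InPlace # a common choice for the Agent platform
  platform:
    type: Agent # 'None' is also covered by the CEL validation fix
  release:
    image: <release_image> # placeholder for a multi-architecture release image
----

The *OCPBUGS-39313* fix concerns the `controllerAvailabilityPolicy` field. As a minimal sketch with placeholder names, the field sits directly under the `HostedCluster` spec:

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example # hypothetical name
  namespace: clusters # hypothetical namespace
spec:
  controllerAvailabilityPolicy: SingleReplica # the value involved in OCPBUGS-39313
----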

[id="known-issues-hcp-rn-4-18_{context}"]
[id="known-issues-hcp-rn-4-19_{context}"]
=== Known issues

* If the annotation and the `ManagedCluster` resource name do not match, the {mce} console displays the cluster as `Pending import`. The cluster cannot be used by the {mce-short}. The same issue happens when there is no annotation and the `ManagedCluster` name does not match the `Infra-ID` value of the `HostedCluster` resource.
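
As an illustrative check for this known issue, you can compare the `Infra-ID` value of the `HostedCluster` resource with the `ManagedCluster` name. The resource names and namespace in the following commands are placeholders:

[source,terminal]
----
$ oc get hostedcluster example -n clusters -o jsonpath='{.spec.infraID}'
$ oc get managedcluster example -o jsonpath='{.metadata.name}'
----
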
@@ -119,45 +82,30 @@
For {ibm-power-title} and {ibm-z-title}, you must run the control plane on machine types based on 64-bit x86 architecture.
.{hcp-capital} GA and TP tracker
[cols="4,1,1,1",options="header"]
|===
|Feature |4.16 |4.17 |4.18

|{hcp-capital} for {product-title} on {aws-first}
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on bare metal
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {VirtProductName}
|General Availability
|General Availability
|General Availability
|Feature |4.17 |4.18 |4.19

|{hcp-capital} for {product-title} using non-bare-metal agent machines
|Technology Preview
|Technology Preview
|Technology Preview

|{hcp-capital} for an ARM64 {product-title} cluster on {aws-full}
|Technology Preview
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {ibm-power-title}
|Technology Preview
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {ibm-z-title}
|Technology Preview
|General Availability
|General Availability
|General Availability

|{hcp-capital} for {product-title} on {rh-openstack}
|Not Available
|Developer Preview
|Developer Preview
|===
|Developer Preview
|===