
[Working] HyperShiftStack #92918


Open · wants to merge 16 commits into base: main
21 changes: 21 additions & 0 deletions hosted_control_planes/hcp-deploy/hcp-deploy-openstack.adoc
@@ -0,0 +1,21 @@
:_mod-docs-content-type: ASSEMBLY
[id="hcp-deploy-openstack"]
include::_attributes/common-attributes.adoc[]
= Deploying {hcp} on OpenStack
:context: hcp-deploy-openstack

You can deploy {hcp} with hosted clusters that run on {rh-openstack-first}.

A _hosted cluster_ is an {product-title} cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the {mce-short} console or the `hcp` command-line interface (CLI) to create a hosted cluster.

include::modules/hosted-clusters-openstack-prerequisites.adoc[leveloffset=+1]

include::modules/hcp-deploy-openstack-parameters.adoc[leveloffset=+1]

include::modules/hcp-deploy-openstack-create.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources

* xref:../../hosted_control_planes/hcp-prepare/hcp-requirements.adoc#hcp-requirements[Requirements for hosted control planes]
9 changes: 9 additions & 0 deletions hosted_control_planes/hcp-destroy/hcp-destroy-openstack.adoc
@@ -0,0 +1,9 @@
:_mod-docs-content-type: ASSEMBLY
[id="hcp-destroy-openstack"]
include::_attributes/common-attributes.adoc[]
= Destroying a hosted control plane on OpenStack
:context: hcp-destroy-openstack

toc::[]

include::modules/hosted-clusters-openstack-destroy.adoc[leveloffset=+1]
11 changes: 11 additions & 0 deletions hosted_control_planes/hcp-manage/hcp-manage-openstack.adoc
@@ -0,0 +1,11 @@
:_mod-docs-content-type: ASSEMBLY
[id="hcp-manage-openstack"]
include::_attributes/common-attributes.adoc[]
= Managing a hosted control plane on OpenStack
:context: hcp-manage-openstack

toc::[]

include::modules/hcp-manage-openstack-az.adoc[leveloffset=+1]
include::modules/hcp-manage-openstack-additional-ports.adoc[leveloffset=+1]
include::modules/hcp-manage-openstack-performance.adoc[leveloffset=+1]
27 changes: 27 additions & 0 deletions hosted_control_planes/hcp-prepare/hcp-prepare-openstack.adoc
@@ -0,0 +1,27 @@
:_mod-docs-content-type: ASSEMBLY
[id="hcp-prepare-openstack"]
include::_attributes/common-attributes.adoc[]
= Preparing to deploy a hosted control plane on OpenStack
:context: hcp-prepare-openstack

toc::[]

[IMPORTANT]
====
Support for {hcp} on {rh-openstack} is currently in Developer Preview and is not intended for production use. However, you can create and manage hosted clusters for development and testing purposes.
====

.Prerequisites

* A management {product-title} cluster is installed and running.
* A load balancer backend, such as Octavia, is installed in the management {product-title} cluster so that the `kube-apiserver` service can be created for each hosted cluster.
* You have a valid pull secret file for the `quay.io/openshift-release-dev` repository.
* If ingress is configured with an Octavia load balancer, the OpenStack Octavia service is running in the cloud that hosts the hosted cluster.
* The default external network of the management {product-title} cluster, on which the `kube-apiserver` `LoadBalancer` type service is created, is reachable from the hosted cluster. For one way to spot-check the Octavia and external network prerequisites, see the sketch after this list.
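
The following sketch shows one way to spot-check the Octavia and external network prerequisites from a workstation that has the OpenStack CLI installed. The cloud name `openstack` is an assumption and must match an entry in your `clouds.yaml` file:

[source,terminal]
----
# Confirm that a load balancer provider, such as Octavia, is available
$ openstack --os-cloud openstack loadbalancer provider list

# List the external networks that are candidates for the kube-apiserver LoadBalancer service
$ openstack --os-cloud openstack network list --external
----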

include::modules/hcp-prepare-openstack-install-cli.adoc[leveloffset=+1]
include::modules/hcp-prepare-openstack-deploy-operator.adoc[leveloffset=+1]
include::modules/hcp-prepare-openstack-prepare-etcd.adoc[leveloffset=+1]
include::modules/hcp-prepare-openstack-upload-rhcos.adoc[leveloffset=+1]
include::modules/hcp-prepare-openstack-create-floating-ip.adoc[leveloffset=+1]
include::modules/hcp-prepare-openstack-update-dns.adoc[leveloffset=+1]
58 changes: 58 additions & 0 deletions modules/hcp-deploy-openstack-create.adoc
@@ -0,0 +1,58 @@
:_mod-docs-content-type: PROCEDURE
[id="hcp-deploy-openstack-create_{context}"]
= Creating a hosted cluster on OpenStack

You can create a hosted cluster on {rh-openstack-first} by using the `hcp` CLI.

.Prerequisites

* You completed all prerequisite steps in "Preparing to deploy hosted control planes".
* You completed all prerequisite steps in "Prerequisites for OpenStack".
* You have access to the management cluster.
* You have access to the {rh-openstack} cloud.

.Procedure

* Create a hosted cluster by running the following command:
+
[source,terminal]
----
$ hcp create cluster openstack --openstack-node-flavor <node_pool_flavor>
----
+
--
where:

`<node_pool_flavor>`:: Specifies the flavor for the node pool of the cluster.
🤖 [error] RedHat.TermsErrors: Use 'version' or 'method' rather than 'flavor'. For more information, see RedHat.TermsErrors.

--
NOTE: Many options are available at cluster creation time. For {rh-openstack}-specific options, see "Options for creating a hosted cluster on OpenStack". For general options, see the `hcp` documentation.
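
The following sketch shows one possible fuller invocation that combines the required flag with some of the {rh-openstack}-specific options described in "Options for creating a hosted cluster on OpenStack". The `--name`, `--pull-secret`, and `--node-pool-replicas` flags are general `hcp create cluster` options that are assumed here rather than taken from this document; confirm them with `hcp create cluster openstack --help` before you rely on them:

[source,terminal]
----
$ hcp create cluster openstack \
  --name <cluster_name> \
  --pull-secret <path_to_pull_secret> \
  --node-pool-replicas 3 \
  --openstack-node-flavor m1.xlarge \
  --openstack-external-network-id <external_network_id> \
  --openstack-node-image-name <rhcos_image_name>
----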

.Verification
* Verify that the hosted cluster is ready by running the following command against the management cluster:
+
[source,terminal]
----
$ oc -n clusters-<cluster_name> get pods
----
+
--
where:

`<cluster_name>`:: Specifies the name of the cluster.
--
+
After several minutes, the output should show that the hosted control plane pods are running.
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
capi-provider-5cc7b74f47-n5gkr 1/1 Running 0 3m
catalog-operator-5f799567b7-fd6jw 2/2 Running 0 69s
certified-operators-catalog-784b9899f9-mrp6p 1/1 Running 0 66s
cluster-api-6bbc867966-l4dwl 1/1 Running 0 66s
...
...
...
redhat-operators-catalog-9d5fd4d44-z8qqk 1/1 Running 0
----
49 changes: 49 additions & 0 deletions modules/hcp-deploy-openstack-parameters.adoc
@@ -0,0 +1,49 @@
:_mod-docs-content-type: REFERENCE
[id="hcp-deploy-openstack-parameters_{context}"]
= Options for creating a hosted cluster on OpenStack

You can supply several options to the `hcp` CLI when you deploy a hosted cluster on {rh-openstack-first}.
🤖 [error] RedHat.TermsErrors: Use 'several' rather than 'a number of'. For more information, see RedHat.TermsErrors.


|===
|Option|Description|Required

|`--openstack-ca-cert-file`
|Path to the OpenStack CA certificate file.
|No

|`--openstack-cloud`
|Name of the cloud in `clouds.yaml`. The default value is `openstack`.
|No

|`--openstack-credentials-file`
|Path to the OpenStack credentials file.
|No

|`--openstack-dns-nameservers`
|List of DNS server addresses that are provided when creating the subnet.
|No

|`--openstack-external-network-id`
|ID of the OpenStack external network.
|No

|`--openstack-ingress-floating-ip`
|A floating IP for OpenShift ingress.
|No

|`--openstack-node-additional-port`
|Additional ports to attach to nodes. Valid values are: `network-id`, `vnic-type`, `disable-port-security`, and `address-pairs`.
|No

|`--openstack-node-availability-zone`
|Availability zone for the node pool.
|No

|`--openstack-node-flavor`
|Flavor for the node pool.
|Yes

|`--openstack-node-image-name`
|Image name for the node pool.
|No
|===
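
As an illustration only, the following sketch shows how several of these options might be combined in one command. The values are placeholders, and general options that are also required at cluster creation, such as the cluster name and pull secret, are omitted for brevity:

[source,terminal]
----
$ hcp create cluster openstack \
  --openstack-node-flavor m1.xlarge \
  --openstack-cloud mycloud \
  --openstack-ca-cert-file /etc/pki/ca-trust/source/anchors/openstack-ca.crt \
  --openstack-dns-nameservers 198.51.100.53 \
  --openstack-ingress-floating-ip 203.0.113.10 \
  --openstack-node-availability-zone az0
----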
93 changes: 93 additions & 0 deletions modules/hcp-manage-openstack-az.adoc
@@ -0,0 +1,93 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-openstack.adoc
// TODO: For real.

:_mod-docs-content-type: PROCEDURE
[id="hcp-manage-openstack-az_{context}"]
= Configuring node pools for availability zones

You can distribute node pools across multiple {rh-openstack-first} Nova availability zones to improve the high availability of your hosted cluster.

NOTE: Availability zones do not necessarily correspond to fault domains and do not inherently provide high availability benefits.

.Prerequisites

* You created a hosted cluster.
* You have access to the management cluster.
* The `hcp` and `oc` CLIs are installed.

.Procedure

. Set environment variables that are appropriate for your needs. For example, if you want to create two additional machines in the `az1` availability zone, you might enter:
+
[source,terminal]
----
$ export NODEPOOL_NAME="${CLUSTER_NAME}-az1" \
&& export WORKER_COUNT="2" \
&& export FLAVOR="m1.xlarge" \
&& export AZ="az1"
----

. Create the node pool by entering the following command, which uses your environment variables:
+
[source,terminal]
----
$ hcp create nodepool openstack \
--cluster-name <cluster_name> \
--name $NODEPOOL_NAME \
--replicas $WORKER_COUNT \
--openstack-node-flavor $FLAVOR \
--openstack-node-availability-zone $AZ
----
+
--
where:

`<cluster_name>`:: Specifies the name of your hosted cluster.
--

. Check the status of the node pool by running the following command, which lists the `nodepool` resources in the `clusters` namespace:
+
[source,terminal]
----
$ oc get nodepools --namespace clusters
----
+
.Example output
[source,terminal]
----
NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE
example example 5 5 False False 4.17.0
example-az1 example 2 False False True True Minimum availability requires 2 replicas, current 0 available
----

. Observe the nodes as they reach the `Ready` state in your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
----
+
.Example output
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
...
example-extra-az-zh9l5 Ready worker 2m6s v1.27.4+18eadca
example-extra-az-zr8mj Ready worker 102s v1.27.4+18eadca
...
----

. Verify that the node pool is created by running the following command:
+
[source,terminal]
----
$ oc get nodepools --namespace clusters
----
+
.Example output
[source,terminal]
----
NAME CLUSTER DESIRED CURRENT AVAILABLE PROGRESSING MESSAGE
<node_pool_name> <cluster_name> 2 2 2 False All replicas are available
----
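
To confirm the placement from the {rh-openstack} side as well, you can list the Nova instances together with their availability zones. The following sketch assumes that the OpenStack CLI is installed and that your cloud entry is named `openstack`:

[source,terminal]
----
# The --long output includes an Availability Zone column for each instance
$ openstack --os-cloud openstack server list --long
----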
4 changes: 4 additions & 0 deletions modules/hcp-support-matrix.adoc
@@ -142,6 +142,10 @@ In the following table, the management cluster version is the {product-title} ve
|Non-bare-metal agent machines (Technology Preview)
|4.16 - 4.18
|4.16 - 4.18

|{rh-openstack-first} (Technology Preview)
|4.19
|4.19
|===

[id="hcp-matrix-updates_{context}"]
88 changes: 88 additions & 0 deletions modules/hosted-clusters-openstack-additional-ports.adoc
@@ -0,0 +1,88 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hypershift-openstack.adoc

:_mod-docs-content-type: PROCEDURE
[id="hosted-clusters-openstack-additional-ports_{context}"]
= Configuring additional ports for node pools

You can configure additional ports for node pools to support advanced networking scenarios, such as SR-IOV or multiple networks.

== Use cases for additional ports for node pools

* **SR-IOV (Single Root I/O Virtualization)**: Enables a physical network device to appear as multiple virtual functions (VFs). By attaching additional ports to node pools, workloads can use SR-IOV interfaces to achieve low-latency, high-performance networking.

* **DPDK (Data Plane Development Kit)**: Provides fast packet processing in user space, bypassing the kernel. Node pools with additional ports can expose interfaces for workloads that use DPDK to improve network performance.

* **Manila RWX volumes on NFS**: Supports `ReadWriteMany` (RWX) volumes over NFS, allowing multiple nodes to access shared storage. Attaching additional ports to node pools enables workloads to reach the NFS network used by Manila.

* **Multus CNI**: Enables pods to connect to multiple network interfaces. Node pools with additional ports support use cases that require secondary network interfaces, including dual-stack connectivity and traffic separation.


== Options for additional ports for node pools

You can use the `--openstack-node-additional-port` flag to attach additional ports to nodes in a hosted cluster on OpenStack. The flag takes a list of parameters that are separated by commas. You can use the flag multiple times to attach multiple additional ports to the nodes.

The parameters are:

|===
|Parameter|Description|Required|Default

|`network-id`
|The ID of the network to attach to the node.
|Yes
|N/A

|`vnic-type`
|The VNIC type to use for the port. If not specified, Neutron uses the default type `normal`.
|No
|N/A

|`disable-port-security`
|Whether to disable port security for the port. If not specified, Neutron enables port security unless it is explicitly disabled at the network level.
|No
|N/A

|`address-pairs`
|A list of IP address pairs to assign to the port. The format is `ip_address=mac_address`. Multiple pairs can be provided, separated by a hyphen (`-`). The `mac_address` portion is optional.
|No
|N/A
|===

== Creating additional ports for node pools

You can configure additional ports for node pools for hosted clusters that run on {rh-openstack-first}.

.Prerequisites

* You created a hosted cluster.
* You have access to the management cluster.
* The `hcp` CLI is installed.
* Additional networks are created in {rh-openstack}.
* The project that is used by the hosted cluster has access to the additional networks.

.Procedure

* Create a node pool with additional ports attached to its nodes by running the `hcp create nodepool openstack` command with one or more `--openstack-node-additional-port` options. For example:
+
[source,terminal]
----
$ hcp create nodepool openstack \
--cluster-name <cluster_name> \
--name <nodepool_name> \
--replicas <replica_count> \
--openstack-node-flavor <flavor> \
--openstack-node-additional-port "network-id=<sriov_net_id>,vnic-type=direct,disable-port-security=true" \
--openstack-node-additional-port "network-id=<lb_net_id>,address-pairs:192.168.0.1-192.168.0.2"
----
+
--
where:

`<cluster_name>`:: Specifies the name of the hosted cluster.
`<nodepool_name>`:: Specifies the name of the node pool.
`<replica_count>`:: Specifies the desired number of replicas.
`<flavor>`:: Specifies the {rh-openstack} flavor to use.
`<sriov_net_id>`:: Specifies a SR-IOV network ID.
`<lb_net_id>`:: Specifies a load balancer network ID.
--
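
After the node pool machines are provisioned, you can confirm from the {rh-openstack} side that the extra ports were attached. The following sketch assumes that the OpenStack CLI is installed and that `<server_name>` is the name of one of the node pool instances:

[source,terminal]
----
# Lists all Neutron ports that are attached to the given instance
$ openstack port list --server <server_name>
----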