# Compute Starter Kit

**Based on OpenStack K8S operators from the "main" branch of the [OpenStack Operator repo](https://github.com/openstack-k8s-operators/openstack-operator/)**

This is a collection of CR templates that represent an OpenStack deployment
topology with the following characteristics:

- Single-node OpenShift cluster (CRC)
- 1 replica of each deployed service
- OVN networking
- Network isolation over a single NIC
- 2 compute nodes
- Deployed OpenStack services: Keystone, Glance, Placement, Neutron, Nova

## Purpose

When new users first approach OpenStack as a project, they are presented with a vast and wonderful array of components to choose from. That breadth makes it hard to know where to start, to be confident that early decisions will not block deploying something usable, and to be sure the deployment can grow in scope over time. This deployment topology defines the smallest subset of projects that lets a user provide a cloud capable of booting a VM. It is well suited to CI workloads, development clusters, small edge or business clusters with limited resources, and as a building block for more complex topologies. This deployment topology will also be used as the minimal smoke test for promoting compute components.

## Node topology

| Node role                                        | BM/VM | Amount |
| ------------------------------------------------ | ----- | ------ |
| OpenShift master/worker combo-node cluster (CRC) | VM    | 1      |
| Compute nodes                                    | VM    | 2      |


## Services, enabled features and configurations

| Service   | Configuration | Lock-in coverage? |
| --------- | ------------- | ----------------- |
| RabbitMQ  | default       | Must have         |
| OVN       | default       | Must have         |
| Galera    | default       | Must have         |
| Glance    | file store    | Must have         |
| Keystone  | default       | Must have         |
| Placement | default       | Must have         |
| Nova      | default       | Must have         |
| Neutron   | default       | Must have         |

No other services should be added to this deployment topology; it is important that it models the minimum set of services required to provide the ability to boot a VM.
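
For orientation, the hedged sketch below shows roughly what a control plane CR restricted to this service set could look like. The `apiVersion`/`kind` and the per-service `enabled` fields follow common openstack-operator conventions but are assumptions here, not a copy of this kit's CRs; the authoritative layout is whatever the CR templates and the operator samples define.

```yaml
# Hedged sketch only: section and field names follow the usual
# openstack-operator convention of per-service sections with an "enabled"
# flag. Check them against the operator's sample CRs before use.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  keystone:
    enabled: true
  glance:
    enabled: true      # configured with the file store backend
  placement:
    enabled: true
  neutron:
    enabled: true
  nova:
    enabled: true
  ovn:
    enabled: true
  galera:
    enabled: true
  rabbitmq:
    enabled: true
  # Everything outside the minimal boot-a-VM set stays off, e.g.:
  cinder:
    enabled: false
  barbican:
    enabled: false
```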

### Support services

Additional services are required for integration testing, even though they may not themselves be the subject of this DT:

| Service  | Reason                                             |
| -------- | -------------------------------------------------- |
| tempest  | validation of basic functionality                  |
| whitebox | validation of non-hardware-specific functionality  |
| FIPS     | enabled by default                                 |

### Additional configuration

The DT CRs will use the defaults for most services.
Hugepages and file-backed memory are mutually exclusive, so we will
use this DT to test file-backed memory in a variant job.
Nova may use images-type=flat with force_raw_images=true.

As a result, we shall likely create two job variants.

One using the defaults:

- anonymous memory
- images-type=qcow
- FIPS enabled
- TLS-E enabled
- neither cpu_shared_set nor cpu_dedicated_set defined (CPU pinning unsupported)
- hugepages

And the other using the following overrides:

- file-backed memory
- images-type=flat
- force_raw_images=true
- FIPS disabled
- TLS-E disabled
- cpu_shared_set and cpu_dedicated_set defined (CPU pinning supported)
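
For orientation only, the fragment below sketches the nova.conf overrides the second variant implies, in the free-form snippet format the openstack-k8s-operators typically accept. The `customServiceConfig` field name and all values (memory size, CPU ranges) are illustrative assumptions, not taken from this kit's CRs, and must be adapted to the actual hosts.

```yaml
# Hedged illustration of the override variant's compute tuning. The
# customServiceConfig field name follows the usual openstack-k8s-operators
# pattern; the values are placeholders to adjust to the host.
customServiceConfig: |
  [DEFAULT]
  force_raw_images = true

  [libvirt]
  images_type = flat
  # file-backed memory size in MiB; mutually exclusive with hugepages
  file_backed_memory = 4096

  [compute]
  # defining both sets enables CPU pinning
  cpu_shared_set = 0-3
  cpu_dedicated_set = 4-7
```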

## Constraints

No additional OpenStack services can be added to this DT, and it cannot be combined with others.
This job will be capable of testing block-based live migration with local, non-shared storage.
As such, other DTs will not need to duplicate that testing and can cover block-based shared-storage
migration and Nova-provisioned Ceph storage. Ceph and Cinder are intentionally not part of this DT
as they are not required to meet the definition of the minimal set of services needed to boot
a usable VM. As such, this DT will not test interaction with the Cinder service.
Similarly, Barbican integration, which is required for vTPM, is not tested in this DT as it
is also out of scope. Barbican and Cinder integration should be tested in other compute
or common DTs.


## Testing tree

| Test framework   | Stage to run | Special configuration | Test case to report |
| ---------------- | ------------ | --------------------- | :-----------------: |
| Tempest/compute  | stage5       | Use CirrOS image      | N/A                 |
| Tempest/scenario | stage5       | Use CirrOS image      | N/A                 |
| Tempest/whitebox | stage5       | Applicable subset     | N/A                 |


## Considerations

1. These CRs are generic, but they nonetheless require customization for the particular environment in which they are utilized. In this sense they are _templates_ meant to be consumed and tweaked to fit the specific constraints of the hardware available.

2. The CRs are applied against an OpenShift cluster in _stages_. That is, there is an ordering in which each grouping of CRs is fed to the cluster. It is _not_ a case of simply taking all CRs from all stages and applying them all at once.

3. In stages 1 and 2, [kustomize](https://kustomize.io/) is used to generate the CRs dynamically. The `values.yaml` file(s) must be updated to fit your environment. kustomize version 5 or newer is required.
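
   As a rough, non-authoritative illustration of that workflow (the file name below is a placeholder, not this repo's actual layout): edit the stage's `values.yaml`, then render and apply the result, for example with `kustomize build <stage-dir> | oc apply -f -`.

   ```yaml
   # Placeholder kustomization.yaml -- the real stage directories define their
   # own resources, components, and replacements. Shown only to illustrate
   # that the CRs are generated rather than applied as static files.
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   resources:
     - example-cr-template.yaml   # placeholder name; see the stage directories
   ```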

4. In stage 3, YAML comments are placed throughout the CRs to aid in customizing them. Fields that _must_ be changed (or most likely need to be) are marked with "# CHANGEME" either on the field itself or somewhere nearby. Other comments explain fields that can be changed and, in some cases, additions that can be made.
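
   A purely hypothetical fragment, just to show the convention (none of these field names or values are copied from the actual stage 3 CRs):

   ```yaml
   # Hypothetical example of the "# CHANGEME" comment convention.
   spec:
     nodes:
       compute-0:
         ansible:
           ansibleHost: 192.168.122.100   # CHANGEME: IP address of your first compute node
         networks:
           - name: ctlplane
             fixedIP: 192.168.122.100     # CHANGEME: must match your ctlplane subnet
   ```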

## Stages

All stages must be executed in the order listed below. Everything is required unless otherwise indicated.

1. [Install the OpenStack K8S operators and their dependencies](../../common/)
2. [Configure networking and deploy the OpenStack control plane](control-plane.md)
3. [Configure and deploy the external data plane to provide compute nodes](edpm)