diff --git a/content/patterns/modern-virtualization/_index.md b/content/patterns/modern-virtualization/_index.md new file mode 100644 index 000000000..c464db21d --- /dev/null +++ b/content/patterns/modern-virtualization/_index.md @@ -0,0 +1,77 @@ +--- +title: Modern Virtualization +date: 2022-06-08 +tier: maintained +summary: This pattern uses OpenShift Virtualization to simulate an edge environment for VMs. +rh_products: +- Red Hat OpenShift Container Platform +- Red Hat Ansible Automation Platform +- Red Hat OpenShift Virtualization +- Red Hat Enterprise Linux +- Red Hat OpenShift Data Foundation +industries: +- Chemical +aliases: /openshift-virtualization/ +pattern_logo: ansible-edge.png +links: + install: getting-started + help: https://groups.google.com/g/validatedpatterns + bugs: https://github.com/validatedpatterns/ansible-edge-gitops/issues +ci: aegitops +--- + +# Ansible Edge GitOps + +## Background + +Organizations are interested in accelerating their deployment speed and improving delivery quality in their Edge environments, where many devices may not fully (or even partially) embrace the GitOps philosophy. Further, there are VMs and other devices that can and should be managed with Ansible. This pattern explores some of the possibilities of using an OpenShift-based Ansible Automation Platform deployment to manage Edge devices, based on work done with a partner in the Chemical space. + +This pattern uses OpenShift Virtualization (the productization of KubeVirt) to simulate the Edge environment for VMs.
+ +### Solution elements + +- How to use a GitOps approach to manage virtual machines, either in public clouds (limited to AWS for technical reasons) or on-prem OpenShift installations +- How to integrate AAP into OpenShift +- How to manage Edge devices using AAP hosted in OpenShift + +### Red Hat Technologies + +- Red Hat OpenShift Container Platform (Kubernetes) +- Red Hat Ansible Automation Platform (formerly known as "Ansible Tower") +- Red Hat OpenShift GitOps (ArgoCD) +- OpenShift Virtualization (KubeVirt) +- Red Hat Enterprise Linux 8 + +### Other Technologies this Pattern Uses + +- HashiCorp Vault +- External Secrets Operator +- Inductive Automation Ignition + +## Architecture + +Similar to other patterns, this pattern starts with a central management hub, which hosts the AAP and Vault components. + +### Logical architecture + +![Ansible-Edge-Gitops-Architecture](/images/ansible-edge-gitops/ansible-edge-gitops-arch.png) + +### Physical Architecture + +![Ansible-Edge-GitOps-Physical-Architecture](/images/ansible-edge-gitops/aeg-arch-schematic.png) + +## Recorded Demo + +TBD + +## Other Presentations Featuring this Pattern + +### Registration Required + +[![Ansible-Automates-June-2022-Deck](/images/ansible-edge-gitops/automates-june-2022-deck-thumb.png)](https://tracks.redhat.com/c/validated-patterns_i?x=5wCWYS&lx=lT1ZfK) + +[![Ansible-Automates-June-2022-Video](/images/ansible-edge-gitops/automates-june-2022-video-thumb.png)](https://tracks.redhat.com/c/preview-42?x=5wCWYS&lx=lT1ZfK) + +## What Next + +- [Getting Started: Deploying and Validating the Pattern](getting-started) diff --git a/content/patterns/modern-virtualization/ansible-automation-platform.md b/content/patterns/modern-virtualization/ansible-automation-platform.md new file mode 100644 index 000000000..96c0cfc5f --- /dev/null +++ b/content/patterns/modern-virtualization/ansible-automation-platform.md @@ -0,0 +1,193 @@ +--- +title: Ansible Automation Platform +weight: 40 +aliases:
/ansible-edge-gitops/ansible-automation-platform/ +--- + +# Ansible Automation Platform + +# How it's installed + +See the installation details [here](/patterns/ansible-edge-gitops/installation-details/#ansible-automation-platform-aap-formerly-known-as-ansible-tower). + +# How to Log In + +The default login user is `admin` and the password is generated randomly at install time; you will need the password to log in to the AAP interface. You do not have to log in to the interface - the pattern will configure the AAP instance; the pattern retrieves the password using the same technique as the `ansible_get_credentials.sh` script described below. If you want to inspect the AAP instance, or change any aspects of its configuration, there are two ways to log in and look at it. Both mechanisms are equivalent; you get the same password to the same instance using either technique. + +## Via the OpenShift Console + +In the OpenShift console, navigate to Workloads > Secrets and select the "ansible-automation-platform" project if you want to limit the number of Secrets you can see. + +[![secrets-navigation](/images/ansible-edge-gitops/ocp-console-secrets-aap-admin-password.png)](/images/ansible-edge-gitops/ocp-console-secrets-aap-admin-password.png) + +The Secret you are looking for is in the `ansible-automation-platform` project and is named `controller-admin-password`. If you click on it, you can see the Data.password field. It is shown revealed below to demonstrate that it is the same password that the script method below retrieves: + +[![secrets-detail](/images/ansible-edge-gitops/ocp-console-aap-admin-password-detail.png)](/images/ansible-edge-gitops/ocp-console-aap-admin-password-detail.png) + +## Via [ansible_get_credentials.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_get_credentials.sh) + +With your KUBECONFIG set, you can run `./scripts/ansible_get_credentials.sh` from your top-level pattern directory.
This will use your OpenShift cluster admin credentials to retrieve the URL for your Ansible Automation Platform instance, as well as the password for its `admin` user, which is auto-generated by the AAP operator by default. The output of the command looks like this (your password will be different): + +```text +./scripts/ansible_get_credentials.sh +[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match +'all' + +PLAY [Install manifest on AAP controller] ****************************************************************************** + +TASK [Retrieve API hostname for AAP] *********************************************************************************** +ok: [localhost] + +TASK [Set ansible_host] ************************************************************************************************ +ok: [localhost] + +TASK [Retrieve admin password for AAP] ********************************************************************************* +ok: [localhost] + +TASK [Set admin_password fact] ***************************************************************************************** +ok: [localhost] + +TASK [Report AAP Endpoint] ********************************************************************************************* +ok: [localhost] => { + "msg": "AAP Endpoint: https://controller-ansible-automation-platform.apps.mhjacks-aeg.blueprints.rhecoeng.com" +} + +TASK [Report AAP User] ************************************************************************************************* +ok: [localhost] => { + "msg": "AAP Admin User: admin" +} + +TASK [Report AAP Admin Password] *************************************************************************************** +ok: [localhost] => { + "msg": "AAP Admin Password: CKollUjlir0EfrQuRrKuOJRLSQhi4a9E" +} + +PLAY RECAP ************************************************************************************************************* +localhost : ok=7 changed=0 unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 +``` + +# Pattern AAP Configuration Details + +In this section, we describe the details of the AAP configuration we apply as part of installing the pattern. All of the configuration discussed in this section is applied by the [ansible_load_controller.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) script. + +## Loading a Manifest + +After validating that AAP is ready to be configured, the first thing the script does is install the manifest you specify in the `values-secret.yaml` file in the `files.manifest` setting. The value of this setting is expected to be the full path to a Red Hat Satellite manifest file with a valid entitlement for AAP. The *only* thing this manifest is used for is entitling AAP. + +Instructions for creating a suitable manifest file can be found [here](https://www.redhat.com/en/blog/how-create-and-use-red-hat-satellite-manifest). + +While it is absolutely possible to entitle AAP via a username/password on first login, the automated mechanisms for entitling only support manifests, so that is the technique the pattern uses. + +## Organizations + +The pattern installs an Organization called `HMI Demo`. This makes it a bit easier to separate what the pattern is doing versus the default configuration of AAP. The other resources created in AAP as part of the load process are associated with this Organization. + +## Credential Types (and their Credentials) + +### Kubeconfig (Kubeconfig) + +The Kubeconfig credential is for holding the OpenShift cluster admin kubeconfig file. This is used to query the `edge-gitops-vms` namespace for running VM instances. Since the kubeconfig is necessary for installing the pattern and must be available when the load script is running, the load script pulls it into an AAP secret and stores it for later use (and calls it `Kubeconfig`).
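A custom Credential Type in AAP is defined by an *input configuration* (the fields an operator fills in) and an *injector configuration* (how those fields are exposed to playbooks). A minimal sketch of a kubeconfig-style Credential Type follows; the field names here are illustrative, not necessarily the pattern's exact definition:

```yaml
# Input configuration: a single secret, multiline field holding the kubeconfig text
fields:
  - id: kube_config
    type: string
    label: kubeconfig
    secret: true
    multiline: true
```

```yaml
# Injector configuration: write the field to a temporary file, then point the
# environment variable used by the Kubernetes/OpenShift modules at that file
file:
  template.kubeconfig: '{{ kube_config }}'
env:
  K8S_AUTH_KUBECONFIG: '{{ tower.filename.kubeconfig }}'
```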
+ +The template for creating the Credential Type was taken from [here](https://blog.networktocode.com/post/kubernetes-collection-ansible/). + +### RHSMcredential (rhsm_credential) + +This credential is required to register the RHEL VMs and configure them for Kiosk mode. The registration process allows them to install packages from the Red Hat Content Delivery Network. + +### Machine (kiosk-private-key) + +This is a standard AAP Machine type credential. `kiosk-private-key` is created with the username and private key from your `values-secret.yaml` file in the `kiosk-ssh.username` and `kiosk-ssh.privatekey` fields. + +### KioskExtraParams (kiosk_container_extra_params) + +This CredentialType is considered "secret" because it includes the admin login password for the Ignition application. This is passed to the provisioning playbook(s) as extra_vars. + +## Inventory + +The pattern installs an Inventory (HMI Demo), but no inventory sources. This is due to the way that OpenShift Virtualization provides access to virtual machines. The IP address for the SSH service that a given VM is running is attached to the Service object for the VM. This is not the way the Kubernetes inventory plugin expects to work. So to make inventory dynamic, we are instead using a play to discover VMs and add them to inventory "on the fly". One unusual aspect of DNS inside a Kubernetes cluster is that clients outside a resource's namespace must use the cluster FQDN - which is `resource-name.resource-namespace.svc`. + +It is also possible to define a static inventory - an example of what this would look like is preserved in the pattern repository as [hosts](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/inventory/hosts). + +A standard dynamic inventory script is available [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/inventory/openshift_cluster.yml).
This will retrieve the object names, but it will not (currently) map the FQDN properly. Because of this limitation, we moved to using the inventory pre-play method. + +## Templates (key playbooks in the pattern) + +### [Dynamic Provision Kiosk Playbook](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/dynamic_kiosk_provision.yml) + +This combines all three key workflows in this pattern: + +* Dynamic inventory (inventory preplay) +* Kiosk Mode +* Podman Playbook + +It is safe to run multiple times on the same system. It is run on a schedule, every 10 minutes, to demonstrate this. + +### [Kiosk Mode Playbook](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/kiosk_playbook.yml) + +This playbook runs the [kiosk_mode role](/patterns/ansible-edge-gitops/ansible-automation-platform/#roles-included-in-the-pattern). + +### [Podman Playbook](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/podman_playbook.yml) + +This playbook runs the [container_lifecycle role](/patterns/ansible-edge-gitops/ansible-automation-platform/#roles-included-in-the-pattern) with overrides suitable for the Ignition application container. + +### [Ping Playbook](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/ping.yml) + +This playbook is for testing basic connectivity - making sure that you can reach the nodes you wish to manage, and that the credentials you have given will work on them. It will not change anything on the VMs - just gather facts from them (which requires elevating to root). + +## Schedules + +### Update Project AEG GitOps + +This job runs every 5 minutes to update the GitOps repository associated with the project. This is necessary when any of the Ansible code (for example, the playbooks or roles associated with the pattern) changes, so that the new code is available to the AAP instance. 
+ +### Dynamic Provision Kiosk Playbook + +This job runs every 10 minutes to provision and configure any kiosks it finds to run the Ignition application in a podman container, and to configure Firefox in kiosk mode to display that application. The playbook is designed to be idempotent, so it is safe to run multiple times on the same targets; it will not make user-visible changes to those targets unless it must. + +This playbook combines the [inventory_preplay](/patterns/ansible-edge-gitops/ansible-automation-platform/#extra-playbooks-in-the-pattern) and the [Provision Kiosk Playbook](/patterns/ansible-edge-gitops/ansible-automation-platform/#extra-playbooks-in-the-pattern). + +## Execution Environment + +The pattern includes an execution environment definition that can be found [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/ansible/execution_environment). + +The execution environment includes some additional collections beyond what is provided in the Default execution environment, including: + +* [fedora.linux_system_roles](https://linux-system-roles.github.io/) +* [containers.podman](https://galaxy.ansible.com/containers/podman) +* [community.okd](https://docs.ansible.com/ansible/latest/collections/community/okd/index.html) + +The execution environment definition is provided in case you want to customize or change it; if so, you should also change the Execution Environment attributes of the Templates (in the [load script](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh), those attributes are set by the variables `aap_execution_environment` and `aap_execution_environment_image`).
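For reference, an `ansible-builder` execution environment definition generally has a shape like the following. This is a simplified sketch, not the pattern's exact file; in particular, the base image tag is an assumption:

```yaml
---
version: 1
build_arg_defaults:
  # Assumed base image; substitute whatever your environment provides
  EE_BASE_IMAGE: 'registry.redhat.io/ansible-automation-platform-22/ee-supported-rhel8:latest'
dependencies:
  # requirements.yml lists the extra collections, e.g. fedora.linux_system_roles,
  # containers.podman, and community.okd
  galaxy: requirements.yml
```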
+ +## Roles included in the pattern + +### [kiosk_mode](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/ansible/roles/kiosk_mode) + +This role is responsible for the following: + +* RHEL node registration +* Installation of GUI packages +* Installation of Firefox +* Configuration of Firefox kiosk mode + +### [container_lifecycle](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/ansible/roles/container_lifecycle) + +This role is responsible for: + +* Downloading and running a podman image on the system (and configuring it to auto-update) +* Setting the container up to run at boot time +* Passing any other runtime arguments to the container. In this container's case, that includes specifying an admin password override. + +## Extra Playbooks in the Pattern + +### [inventory_preplay.yml](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/inventory_preplay.yml) + +This playbook is designed to be included in other plays; its purpose is to discover the desired inventory and add those hosts to inventory at runtime. It uses a Kubernetes query via the cluster-admin kubeconfig file. + +### [Provision Kiosk Playbook](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/provision_kiosk.yml) + +This does the work of provisioning the kiosk: it configures kiosk mode, installs Ignition, and configures Ignition to start at boot. It runs the [kiosk_mode](/patterns/ansible-edge-gitops/ansible-automation-platform/#roles-included-in-the-pattern) and [container_lifecycle](/patterns/ansible-edge-gitops/ansible-automation-platform/#roles-included-in-the-pattern) roles.
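The overall shape of such a provisioning playbook can be sketched as follows; the group name `kiosks` is an assumption for illustration, not necessarily what the repository uses:

```yaml
---
# Discover the VMs and add them to the in-memory inventory first
- ansible.builtin.import_playbook: inventory_preplay.yml

# Then configure each discovered kiosk
- name: Provision kiosks
  hosts: kiosks            # assumed group name populated by the pre-play
  become: true
  roles:
    - kiosk_mode           # registration, GUI packages, Firefox kiosk mode
    - container_lifecycle  # Ignition container: pull, run at boot, auto-update
```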
+ +# Next Steps + +## [Help & Feedback](https://groups.google.com/g/validatedpatterns) +## [Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues) diff --git a/content/patterns/modern-virtualization/cluster-sizing.md b/content/patterns/modern-virtualization/cluster-sizing.md new file mode 100644 index 000000000..03e65d173 --- /dev/null +++ b/content/patterns/modern-virtualization/cluster-sizing.md @@ -0,0 +1,103 @@ +--- +title: Cluster Sizing +weight: 30 +aliases: /ansible-edge-gitops/cluster-sizing/ +--- +# OpenShift Cluster Sizing for the Ansible Edge GitOps Pattern + +## Tested Platforms + +The **Ansible Edge GitOps** pattern has been tested on AWS: + +| **Certified Cloud Providers** | 4.9 | 4.10 | +| :---- | :---- | :---- +| Amazon Web Services | | Tested + +The pattern is adaptable to running on bare metal/on-prem clusters but has not yet been tested there. + +## General OpenShift Minimum Requirements + +OpenShift 4 has the following minimum requirements for sizing of nodes: + +* **Minimum 4 vCPUs** (additional vCPUs are strongly recommended). +* **Minimum 16 GB RAM** (additional memory is strongly recommended, especially if etcd is colocated on Control Planes). +* **Minimum 40 GB** hard disk space for the file system containing /var/. +* **Minimum 1 GB** hard disk space for the file system containing /usr/local/bin/. + +There is one application that comprises the **Ansible Edge GitOps** pattern. In addition, the **Ansible Edge GitOps** pattern also includes the Advanced Cluster Management (ACM) supporting operator that is installed by **OpenShift GitOps** using ArgoCD.
+ +### **Ansible Edge GitOps** Pattern Components + +Here's an inventory of what gets deployed by the **Ansible Edge GitOps** pattern on the Datacenter/Hub OpenShift cluster: + +| Name | Kind | Namespace | Description +| :---- | :---- | :---- | :---- +| Ansible Edge GitOps-hub | Application | Ansible Edge GitOps-hub | Hub GitOps management +| Red Hat OpenShift GitOps | Operator | openshift-operators | OpenShift GitOps +| Red Hat Ansible Automation Platform | Operator | ansible-automation-platform | Ansible Automation +| Red Hat OpenShift Data Foundation | Operator | openshift-storage | OpenShift Storage solution +| Red Hat OpenShift Virtualization | Operator | openshift-cnv | Virtualization software to run VMs +| Edge GitOps VMs | VMs | edge-gitops-vms | Simulated Edge environment with VMs to manage +| HashiCorp Vault | Operator | vault | Secrets Storage +| External Secrets Operator (ESO) | Operator | golang-external-secrets | Abstraction for secrets storage + +### Ansible Edge GitOps Pattern OpenShift Datacenter HUB Cluster Size + +The Ansible Edge GitOps pattern has been tested with a defined set of configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture. + +The Hub OpenShift Cluster is made up of the following on the AWS deployment tested: + +| Node Type | Number of nodes | Cloud Provider | Instance Type +| :---- | :----: | :---- | :---- +| Control Plane | 3 | Amazon Web Services | m5.xlarge +| Worker | 3 | Amazon Web Services | m5.4xlarge +| Worker | 1 | Amazon Web Services | c5n.metal + +The metal node is added to the cluster by the installation process after initial provisioning. The pattern on the hub requires OpenShift Data Foundation to support Virtual Machine storage, and this configuration represents the **minimum** size for a Hub cluster.
In the next few sections we take some snapshots of the cluster utilization while the **Ansible Edge GitOps** pattern is running. Keep in mind that resources will have to be added as more developers work on building their applications. + +#### Datacenter Cluster utilization + +Below is a snapshot of the OpenShift cluster utilization while running the **Ansible Edge GitOps** pattern: + +| CPU | CPU% | Memory | Memory% +| :----: | :-----: | :----: | :----: +| 321m | 0% | 12511Mi | 6% +| 736m | 21% | 7533Mi | 51% +| 673m | 4% | 9298Mi | 14% +| 920m | 26% | 8635Mi | 59% +| 673m | 4% | 9258Mi | 14% +| 921m | 26% | 9407Mi | 65% +| 395m | 2% | 5149Mi | 8% + +### AWS Instance Types + +The **Ansible Edge GitOps** pattern was tested with the AWS instance types highlighted in **bold** below. The OpenShift installer will let you know if the instance type meets the minimum requirements for a cluster. + +The message from the OpenShift installer will be similar to this: + +```text +INFO Credentials loaded from default AWS environment variables +FATAL failed to fetch Metadata: failed to load asset "Install Config": [controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 4 vCPUs, controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 16384 MiB Memory] +``` + +Below you can find a list of the AWS instance types that can be used to deploy the **Ansible Edge GitOps** pattern.
+ +| Instance type | Default vCPUs | Memory (GiB) | Datacenter | Factory/Edge +| :------: | :-----: | :-----: | :----: | :----: +| | | | 3x3 OCP Cluster | 3 Node OCP Cluster +| m4.xlarge | 4 | 16 | N | N +| m4.2xlarge | 8 | 32 | Y | Y +| m4.4xlarge | 16 | 64 | Y | Y +| m4.10xlarge | 40 | 160 | Y | Y +| m4.16xlarge | 64 | 256 | Y | Y +| m5.xlarge | 4 | 16 | Y | N +| m5.2xlarge | 8 | 32 | Y | Y +| **m5.4xlarge** | 16 | 64 | Y | Y +| m5.8xlarge | 32 | 128 | Y | Y +| m5.12xlarge | 48 | 192 | Y | Y +| m5.16xlarge | 64 | 256 | Y | Y +| m5.24xlarge | 96 | 384 | Y | Y + +The OpenShift cluster is made up of 3 Control Plane nodes and 4 Workers for the Hub cluster; 3 workers are standard compute nodes and one is c5n.metal. For the standard worker nodes we used the **m5.4xlarge** instance type on AWS, and it met the minimum requirements to deploy the **Ansible Edge GitOps** pattern successfully on the Hub cluster. + +Among public clouds, this pattern is currently only usable on AWS because of the metal node provisioning that OpenShift Virtualization requires; it would be straightforward to also adapt this pattern to run on bare metal/on-prem clusters. If and when other public cloud providers support metal node provisioning in OpenShift Virtualization, we will document that here. diff --git a/content/patterns/modern-virtualization/getting-started.md b/content/patterns/modern-virtualization/getting-started.md new file mode 100644 index 000000000..71852a6a7 --- /dev/null +++ b/content/patterns/modern-virtualization/getting-started.md @@ -0,0 +1,249 @@ +--- +title: Getting Started +weight: 10 +aliases: /ansible-edge-gitops/getting-started/ +--- + +# Deploying the Ansible Edge GitOps Pattern + +# General Prerequisites + +1. An OpenShift cluster (go to [the OpenShift console](https://console.redhat.com/openshift/create)). See also [sizing your cluster](../../ansible-edge-gitops/cluster-sizing). Currently this pattern only supports AWS.
It could also run on a bare-metal OpenShift cluster, because OpenShift Virtualization supports that; some customization would be needed to support it, as the default configuration targets AWS. We hope that GCP and Azure will support provisioning metal workers in due course so this can be a more clearly multicloud pattern. +1. A GitHub account (and, optionally, a token for it with repositories permissions, to read from and write to your forks) +1. The `helm` binary (see [here](https://helm.sh/docs/intro/install/)) +1. Ansible, which is used in the bootstrap and provisioning phases of the pattern install (and to configure Ansible Automation Platform). +1. Please note that when run on AWS, this pattern will provision an additional worker node, which will be a metal instance (c5n.metal) to run the Edge Virtual Machines. This worker is provisioned through the OpenShift MachineAPI and will be automatically cleaned up when the cluster is destroyed. + +The use of this pattern depends on having a running Red Hat +OpenShift cluster. It is desirable to have a cluster for deploying the GitOps +management hub assets and a separate cluster(s) for the managed cluster(s). + +If you do not have a running Red Hat OpenShift cluster you can start one on a +public or private cloud by using [Red Hat's cloud +service](https://console.redhat.com/openshift/create). + +# Credentials Required in Pattern + +In addition to the OpenShift cluster, you will need to prepare a number of secrets, or credentials, which will be used +in the pattern in various ways. To do this, copy the [values-secret.yaml template](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/values-secret.yaml.template) to your home directory as `values-secret.yaml` and replace the explanatory text as follows: + +* AWS Credentials (an access key and a secret key). These are used to provision the metal worker in AWS (which hosts +the VMs).
If the Portworx variant of the pattern is used, these credentials will be used to modify IAM rules to allow Portworx to run correctly. + +```yaml +--- +# NEVER COMMIT THESE VALUES TO GIT +version: "2.0" +secrets: + - name: aws-creds + fields: + - name: aws_access_key_id + value: "An aws access key that can provision VMs and manage IAM (if using portworx)" + + - name: aws_secret_access_key + value: "An aws access secret key that can provision VMs and manage IAM (if using portworx)" +``` +* A username and SSH Keypair (private key and public key). These will be used to provide access to the Kiosk VMs in the demo. + +```yaml + - name: kiosk-ssh + fields: + - name: username + value: 'Username of user to attach privatekey and publickey to - cloud-user is a typical value' + + - name: privatekey + value: 'Private ssh key of the user who will be able to elevate to root to provision kiosks' + + - name: publickey + value: 'Public ssh key of the user who will be able to elevate to root to provision kiosks' +``` + +* A Red Hat Subscription Management username and password. These will be used to register Kiosk VM templates to the Red Hat Content Delivery Network and install content on the Kiosk VMs to run the demo. + +```yaml + - name: rhsm + fields: + - name: username + value: 'username of user to register RHEL VMs' + - name: password + value: 'password of rhsm user in plaintext' +``` + +* Container "extra" arguments which will set the admin password for the Ignition application when it's running. + +```yaml + - name: kiosk-extra + fields: + # Default: '--privileged -e GATEWAY_ADMIN_PASSWORD=redhat' + - name: container_extra_params + value: "Optional extra params to pass to kiosk ignition container, including admin password" +``` + +* A userData block to use with cloud-init. This will allow console login as the user you specify (traditionally cloud-user) with the password you specify.
The value in cloud-init is used as the default; roles in the edge-gitops-vms chart can also specify other secrets to use by referencing them in the role block. + +```yaml + - name: cloud-init + fields: + - name: userData + value: |- + #cloud-config + user: 'username of user for console, probably cloud-user' + password: 'a suitable password to use on the console' + chpasswd: { expire: False } +``` + +* A manifest file with an entitlement to run Ansible Automation Platform. This file (which will be a .zip file) will be posted to the Ansible Automation Platform instance to enable its use. Instructions for creating a manifest file can be found [here](https://www.redhat.com/en/blog/how-create-and-use-red-hat-satellite-manifest). + +```yaml + - name: aap-manifest + fields: + - name: b64content + path: 'full pathname of file containing Satellite Manifest for entitling Ansible Automation Platform' + base64: true +``` + +# Prerequisites for deployment via `make install` + +If you are going to install via `make install` from your workstation, you will need the following tools and packages: + +{% include prerequisite-tools.md %} + +Additionally, you will need the following Ansible collections: + +* community.okd +* redhat_cop.controller_configuration +* awx.awx + +To see what collections are installed: + +`ansible-galaxy collection list` + +To install a collection that is not currently installed: + +`ansible-galaxy collection install <collection_name>` + +# How to deploy + +1. Log in to your cluster using `oc login` or by exporting the KUBECONFIG: + + ```sh + oc login + ``` + + or set KUBECONFIG to the path to your `kubeconfig` file. For example: + + ```sh + export KUBECONFIG=~/my-ocp-env/hub/auth/kubeconfig + ``` + +1. Fork the [ansible-edge-gitops](https://github.com/validatedpatterns/ansible-edge-gitops) repo on GitHub. It is necessary to fork to preserve customizations you make to the default configuration files. + +1. Clone the forked copy of this repository.
+ + ```sh + git clone git@github.com:your-username/ansible-edge-gitops.git + ``` + +1. Create a local copy of the Helm values file that can safely include credentials + + WARNING: DO NOT COMMIT THIS FILE + + You do not want to push personal credentials to GitHub. + + ```sh + cp values-secret.yaml.template ~/values-secret.yaml + vi ~/values-secret.yaml + ``` + +1. Customize the deployment for your cluster (optional - the defaults in values-global.yaml are designed to work in AWS): + + ```sh + git checkout -b my-branch + vi values-global.yaml + git add values-global.yaml + git commit values-global.yaml + git push origin my-branch + ``` + +Please review the [Patterns quick start](/learn/quickstart/) page. This section describes deploying the pattern using `pattern.sh`. You can also deploy the pattern using the [validated pattern operator](/infrastructure/using-validated-pattern-operator/). If you do use the operator, skip to Installation Validation below. + +1. (Optional) Preview the changes. If you would like to review what will be deployed with the pattern, `pattern.sh` provides a way to show it. + + ```sh + ./pattern.sh make show + ``` + +1. Apply the changes to your cluster. This will install the pattern via the Validated Patterns Operator, and then run any necessary follow-up steps. + + ```sh + ./pattern.sh make install + ``` + +The installation process will take 45-60 minutes to complete. If you want to know the details of what is happening during that time, the entire process is documented [here](/ansible-edge-gitops/installation-details/).
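While the installation runs, you can watch progress from a separate terminal with standard `oc` queries; for example (the namespaces shown are the pattern's defaults):

```sh
# Watch the GitOps applications sync
oc get applications.argoproj.io -A

# Watch the operator installs (ClusterServiceVersions) reach the Succeeded phase
oc get csv -n openshift-operators

# Watch for the metal worker to join the cluster and become Ready
oc get nodes -l node-role.kubernetes.io/worker
```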
+ +# Installation Validation + +* Check that the operators have been installed using the OpenShift console + + ```text + OpenShift Console Web UI -> Installed Operators + ``` + +The screen should look like this when installed via `make install`: + +![ansible-edge-gitops-operators](/images/ansible-edge-gitops/aeg-new-operators.png "Ansible Edge GitOps Operators") + +* Check that all applications are synchronized + +Under the project `ansible-edge-gitops-hub` click on the URL for the `hub-gitops-server`. All applications will sync, but this takes time as ODF has to completely install, and OpenShift Virtualization cannot provision VMs until the metal node has been fully provisioned and is ready. Additionally, the Dynamic Provision Kiosk Template in AAP must complete; it can only start once the VMs have been provisioned and are running: + +![ansible-edge-gitops-applications](/images/ansible-edge-gitops/aeg-applications.png "Ansible Edge GitOps Applications") + +* While the metal node is building, the VMs in the OpenShift console will show as "Unschedulable." This is normal and expected, as the VMs themselves cannot run until the metal node completes provisioning and is ready. + +![ansible-edge-vms-unschedulable](/images/ansible-edge-gitops/aeg-vm-unschedulable.png "Ansible Edge GitOps Unschedulable VMs") + +* Under Virtualization > Virtual Machines, the virtual machines will eventually show as "Running." Once they are in the "Running" state, the Provisioning workflow will run on them and install Firefox, kiosk mode, and the Ignition application on them: + +![ansible-edge-gitops-vmlist](/images/ansible-edge-gitops/aeg-openshift-vm-screen.png "Ansible Edge GitOps VM List") + +* Finally, the VM Consoles will show the Ignition introduction screen.
You can choose any of these options; this tutorial assumes you chose "Ignition": + +![ansible-edge-gitops-ignition-options](/images/ansible-edge-gitops/aeg-vm-ignition-intro.png "Ansible Edge GitOps Ignition Options") + +* You should be able to log in to the application with the user ID "admin" and the password you specified as the GATEWAY_ADMIN_PASSWORD in `container_extra_params` in your values-secret.yaml file. + +![ansible-edge-gitops-vmconsole](/images/ansible-edge-gitops/aeg-openshift-vm-console.png "Ansible Edge GitOps VM Console") + +Please see [Installation Details](/ansible-edge-gitops/installation-details/) for more information on the steps of installation. + +Please see [Ansible Automation Platform](/ansible-edge-gitops/ansible-automation-platform/) for more information on how this pattern uses the Ansible Automation Platform Operator for OpenShift. + +Please see [OpenShift Virtualization](/ansible-edge-gitops/openshift-virtualization/) for more information on how this pattern uses OpenShift Virtualization. + +# Infrastructure Elements of this Pattern + +## [Ansible Automation Platform](https://www.redhat.com/en/technologies/management/ansible) + +A fully functional Ansible Automation Platform instance is installed via the AAP operator on your OpenShift cluster to configure and maintain the VMs for this demo. AAP maintains a dynamic inventory of kiosk machines and can configure a VM from template to fully functional kiosk in about 10 minutes. + +## OpenShift [Virtualization](https://docs.openshift.com/container-platform/4.10/virt/about-virt.html) + +OpenShift Virtualization is a Kubernetes-native way to run virtual machine workloads. It is used in this pattern to host VMs simulating an Edge environment; the chart that configures the VMs is designed to be flexible, allowing easy customization to model different VM sizes, mixes, versions and profiles for future pattern development. 
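You can also list these VMs from the command line (a sketch; `vm` and `vmi` are the KubeVirt resource shortnames, and the `edge-gitops-vms` namespace is an assumption based on the chart's defaults):

```text
oc get vm -n edge-gitops-vms    # desired state of each VirtualMachine
oc get vmi -n edge-gitops-vms   # currently running VirtualMachineInstances
```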
+ +## Inductive Automation [Ignition](https://inductiveautomation.com/) + +The goal of this pattern is to configure 2 VMs running Firefox in Kiosk mode displaying the demo version of the Ignition application running in a podman container. Ignition is a popular tool in use with Oil and Gas companies; it is included as a real-world example and as an item to spark imagination about what other applications could be installed and managed this way. + +The container used for this pattern is the container [image](https://hub.docker.com/r/inductiveautomation/ignition) published by Inductive Automation. + +## HashiCorp [Vault](https://www.vaultproject.io/) + +Vault is used as the authoritative source for the Kiosk ssh pubkey via the External Secrets Operator. +As part of this pattern HashiCorp Vault has been installed. Refer to the section on [Vault](https://validatedpatterns.io/secrets/vault/). + +# Next Steps + +## [Help & Feedback](https://groups.google.com/g/validatedpatterns) +## [Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues) diff --git a/content/patterns/modern-virtualization/ideas-for-customization.md b/content/patterns/modern-virtualization/ideas-for-customization.md new file mode 100644 index 000000000..f973e8b2e --- /dev/null +++ b/content/patterns/modern-virtualization/ideas-for-customization.md @@ -0,0 +1,268 @@ +--- +title: Ideas for Customization +weight: 60 +aliases: /ansible-edge-gitops/ideas-for-customization/ +--- + +# Ideas for Customization + +# Why change it? + +One of the major goals of the Red Hat patterns development process is to create modular, customizable demos. Maybe you are not interested in Ignition as an application, or you do not have kiosks...but you do have other use cases that involve running containers on edge devices. Maybe you want to experiment with different releases of RHEL, or you want to do something different with Ansible Automation Platform. 
+ +This demo in particular can be customized in a number of interesting ways. Here are some starter ideas, with instructions on exactly what and where changes would need to be made in the pattern to accommodate them. + +# HOWTO define your own VM sets using the chart + +1. Either fork the repo or copy the edge-gitops-vms chart out of it. + +1. Customize the [values.yaml](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/values.yaml) file + +The `vms` data structure is designed to support multiple groups and types of VMs. The `kiosk` example defines all of the variables currently supported by the chart, including references to the Vault instance and port definitions. If, for example, you wanted to replace kiosk with new iotsensor and iotgateway types, the whole file might look like this: + +```yaml +--- +secretStore: +  name: vault-backend +  kind: ClusterSecretStore + +cloudInit: +  defaultUser: 'cloud-user' +  defaultPassword: '6toh-n1d5-9xpq' + +vms: +  iotsensor: +    count: 4 +    flavor: small +    workload: server +    os: rhel8 +    role: iotsensor +    storage: 20Gi +    memory: 2Gi +    cores: 1 +    sockets: 1 +    threads: 1 +    cloudInitUser: cloud-user +    cloudInitPassword: 6toh-n1d5-9xpq +    template: rhel8-server-small +    sshsecret: secret/data/hub/iotsensor-ssh +    sshpubkeyfield: publickey +    ports: +      - name: ssh +        port: 22 +        protocol: TCP +        targetPort: 22 +  iotgateway: +    count: 1 +    flavor: medium +    workload: server +    os: rhel8 +    role: iotgateway +    storage: 30Gi +    memory: 4Gi +    cores: 1 +    sockets: 1 +    threads: 1 +    cloudInitUser: cloud-user +    cloudInitPassword: 6toh-n1d5-9xpq +    template: rhel8-server-medium +    sshsecret: secret/data/hub/iotgateway-ssh +    sshpubkeyfield: publickey +    ports: +      - name: ssh +        port: 22 +        protocol: TCP +        targetPort: 22 +      - name: mqtt +        port: 1883 +        protocol: TCP +        targetPort: 1883 +``` + +This would create 1 iotgateway VM and 4 iotsensor VMs. 
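Before pushing a change like this, you can render the chart locally and confirm it produces the number of VirtualMachine objects you expect (a sketch; run from a checkout of the repo, and note that the chart may require additional values normally supplied by the pattern framework):

```text
helm template edge-gitops-vms charts/hub/edge-gitops-vms -f my-values.yaml \
  | grep -c 'kind: VirtualMachine'
```

For the example values above, you would expect one object per VM instance (5 in total).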
Adjustments would also need to be made in [values-secret](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/values-secret.yaml.template) and [ansible-load-controller](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) to add the iotgateway-ssh and iotsensor-ssh data structures. + +# HOWTO define your own VM sets "from scratch" + +1. Pick a default template from the standard OpenShift Virtualization template library in the `openshift` namespace. For this pattern, we used `rhel8-desktop-medium`: + +```text +$ oc get template -n openshift rhel8-desktop-medium +NAME DESCRIPTION PARAMETERS OBJECTS +rhel8-desktop-medium Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1 +``` + +1. It might help to create a VM through the command line template process, and see what objects OpenShift Virtualization creates to bring that VM up: + +To see the actual JSON that the template converts into: + +```text +$ oc process -n openshift rhel8-desktop-medium +{ + "kind": "List", + "apiVersion": "v1", + "metadata": {}, + "items": [ + { + "apiVersion": "kubevirt.io/v1", + "kind": "VirtualMachine", + "metadata": { + "annotations": { + "vm.kubevirt.io/validations": "[\n {\n \"name\": \"minimal-required-memory\",\n \"path\": \"jsonpath::.spec.domain.resources.requests.memory\",\n \"rule\": \"integer\",\n \"message\": \"This VM requires more memory.\",\n \"min\": 1610612736\n }\n]\n" + }, + "labels": { + "app": "rhel8-yywa22lijw8hl017", + "vm.kubevirt.io/template": "rhel8-desktop-medium", + "vm.kubevirt.io/template.revision": "1", + "vm.kubevirt.io/template.version": "v0.19.5" + }, + "name": "rhel8-yywa22lijw8hl017" + }, + "spec": { + "dataVolumeTemplates": [ + { + "apiVersion": "cdi.kubevirt.io/v1beta1", + "kind": "DataVolume", + "metadata": { + "name": "rhel8-yywa22lijw8hl017" + }, + "spec": { + "sourceRef": { + "kind": "DataSource", + "name": "rhel8", + "namespace": 
"openshift-virtualization-os-images" + }, + "storage": { + "resources": { + "requests": { + "storage": "30Gi" + } + } + } + } + } + ], + "running": false, + "template": { + "metadata": { + "annotations": { + "vm.kubevirt.io/flavor": "medium", + "vm.kubevirt.io/os": "rhel8", + "vm.kubevirt.io/workload": "desktop" + }, + "labels": { + "kubevirt.io/domain": "rhel8-yywa22lijw8hl017", + "kubevirt.io/size": "medium" + } + }, + "spec": { + "domain": { + "cpu": { + "cores": 1, + "sockets": 1, + "threads": 1 + }, + "devices": { + "disks": [ + { + "disk": { + "bus": "virtio" + }, + "name": "rhel8-yywa22lijw8hl017" + }, + { + "disk": { + "bus": "virtio" + }, + "name": "cloudinitdisk" + } + ], + "inputs": [ + { + "bus": "virtio", + "name": "tablet", + "type": "tablet" + } + ], + "interfaces": [ + { + "masquerade": {}, + "name": "default" + } + ], + "networkInterfaceMultiqueue": true, + "rng": {} + }, + "machine": { + "type": "pc-q35-rhel8.4.0" + }, + "resources": { + "requests": { + "memory": "4Gi" + } + } + }, + "evictionStrategy": "LiveMigrate", + "networks": [ + { + "name": "default", + "pod": {} + } + ], + "terminationGracePeriodSeconds": 180, + "volumes": [ + { + "dataVolume": { + "name": "rhel8-yywa22lijw8hl017" + }, + "name": "rhel8-yywa22lijw8hl017" + }, + { + "cloudInitNoCloud": { + "userData": "#cloud-config\nuser: cloud-user\npassword: nnpa-12td-e0r7\nchpasswd: { expire: False }" + }, + "name": "cloudinitdisk" + } + ] + } + } + } + } + ] +} +``` + +And to use the template to create a VM: + +```shell +oc process -n openshift rhel8-desktop-medium | oc apply -f - +virtualmachine.kubevirt.io/rhel8-q63yuvxpjdvy18l7 created +``` + +In just a few minutes, you will have a blank rhel8 VM running, which you can then login to (via console) and customize. + +1. 
Get the details of this template as a local YAML file: + +```shell +oc get template -n openshift rhel8-desktop-medium -o yaml > my-template.yaml +``` + +Once you have this local template, you can view the elements you want to customize, possibly using [this](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml) as an example. + +# HOWTO Define your own Ansible Controller Configuration + +The [ansible_load_controller.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) script is designed to be relatively easy to customize with a new controller configuration. Structurally, it is principally based on [configure_controller.yml](https://github.com/redhat-cop/controller_configuration/blob/devel/playbooks/configure_controller.yml) from the Red Hat Community of Practice [controller_configuration](https://github.com/redhat-cop/controller_configuration) collection. The order and specific list of roles invoked is taken from there. + +To customize it, the main thing to do is replace the variables in the role tasks with your own. The script includes the roles for variable types that this pattern does not manage, to make that part straightforward. Feel free to add your own roles and playbooks (and add them to the controller configuration script). + +This pattern ships with a script, rather than invoking the referenced playbook directly, because several of the configuration elements depend on each other, and there was no convenient place to put things like the controller credentials as the playbook suggests. + +# HOWTO substitute your own container application (instead of ignition) + +1. 
Adjust the query in the [inventory_preplay.yml](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/inventory_preplay.yml) either by overriding the vars for the play or by forking the repo and replacing the vars with your own query terms. (That is, use your own label(s) and namespace to discover the services you want to connect to.) + +1. Adjust or override the vars in the [provision_kiosk.yml](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/provision_kiosk.yml) playbook to suitable values for your own container application. The roles it calls are fairly generic, so changing the vars is all you should need to do. + +# Next Steps + +## [Help & Feedback](https://groups.google.com/g/validatedpatterns) +## [Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues) diff --git a/content/patterns/modern-virtualization/installation-details.md b/content/patterns/modern-virtualization/installation-details.md new file mode 100644 index 000000000..792629e37 --- /dev/null +++ b/content/patterns/modern-virtualization/installation-details.md @@ -0,0 +1,133 @@ +--- +title: Installation Details +weight: 20 +aliases: /ansible-edge-gitops/installation-details/ +--- + +# Installation Details + +# Installation Steps + +These are the steps run by [make install](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/Makefile) and what each one does: + +## [operator-deploy](https://github.com/validatedpatterns/common/blob/main/Makefile) + +The operator-deploy task installs the Validated Patterns Operator, which in turn creates a subscription for the OpenShift GitOps operator and installs both the cluster and hub instances of it. The clustergroup application will then read the values-global.yaml and values-hub.yaml files for other subscriptions and applications to install. 
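For orientation, the shape of the entries the clustergroup application reads might look roughly like this (abridged and illustrative; consult the pattern's actual values-hub.yaml for the real contents):

```yaml
clusterGroup:
  name: hub
  isHubCluster: true
  subscriptions:
    aap:
      name: ansible-automation-platform-operator
      namespace: ansible-automation-platform
  applications:
    aap:
      name: ansible-automation-platform
      namespace: ansible-automation-platform
      project: hub
      path: charts/hub/ansible-automation-platform
```

Each `subscriptions` entry becomes an OLM Subscription, and each `applications` entry becomes an ArgoCD Application pointed at a chart path in the repo.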
+ +The [legacy-install](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/Makefile) target is still provided for users who cannot or do not want to use the Validated Patterns operator. Instead of installing the operator, it installs a helm chart that does the same thing: it creates a subscription for OpenShift GitOps and installs cluster-wide and hub instances of that operator. It then proceeds with installing the clustergroup application. + +Note that the [upgrade](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/Makefile) and [legacy-upgrade](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/Makefile) targets are now equivalent to and interchangeable with `install` and `legacy-install`, respectively. However, `legacy-install`/`legacy-upgrade` are not compatible with the standard `install`/`upgrade`. (This was not always the case, so both sets of targets are still provided.) + +### Imperative section + +Part of the operator-deploy process is creating and running the [imperative](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/values-hub.yaml) tools as defined in the hub values file. In this pattern, that includes running the playbook to deploy the metal worker. + +The real code for this playbook (outside of a shell wrapper) is [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/deploy_kubevirt_worker.yml). + +This playbook deploys a node to run the Virtual Machines for the demo. It uses the OpenShift machineset API to provision the node in the first availability zone it finds. Currently, AWS is the only major public cloud provider that offers the deployment of a metal node through the normal provisioning process. We hope that Azure and GCP will support this functionality soon as well. + +Please be aware that the metal node is rather more expensive in compute costs than most other AWS machine types. 
The trade-off is that running the demo without hardware acceleration would take ~4x as long. + +It takes about 20-30 minutes for the metal node to become available to run VMs. If you would like to see the current status of the metal node, you can check it this way (assuming your kubeconfig is currently set up to point to your cluster): + +```shell +oc get -A machineset +``` + +You will be looking for a machineset with `metal-worker` in its name: + +```text +NAMESPACE NAME DESIRED CURRENT READY AVAILABLE AGE +openshift-machine-api mhjacks-aeg-qx25w-metal-worker-us-west-2a 1 1 1 1 19m +openshift-machine-api mhjacks-aeg-qx25w-worker-us-west-2a 1 1 1 1 47m +openshift-machine-api mhjacks-aeg-qx25w-worker-us-west-2b 1 1 1 1 47m +openshift-machine-api mhjacks-aeg-qx25w-worker-us-west-2c 1 1 1 1 47m +openshift-machine-api mhjacks-aeg-qx25w-worker-us-west-2d 0 0 47m +``` + +When the `metal-worker` is showing "READY" and "AVAILABLE", the virtual machines will begin provisioning on it. + +The metal node will be destroyed when the cluster is destroyed. The script is idempotent and will create at most one metal node per cluster. + +## [post-install](https://github.com/validatedpatterns/common/blob/main/Makefile) + +Note that all the steps of `post-install` are idempotent. If you want or need to reconfigure vault or AAP, the recommended way to do so is to call `make post-install`. This may change as we move elements of this pattern into the new imperative framework in `common`. + +Specific processes that are called by post-install include: + +### [vault-init](https://github.com/validatedpatterns/common/blob/main/scripts/vault-utils.sh) + +Vault requires extra setup in the form of unseal keys and configuration of secrets. The vault-init task does this. Note that it is safe to run vault-init as it will exit successfully if it can connect to a cluster with a running, unsealed vault. 
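You can verify that state yourself by querying the vault pod directly (a sketch; the `vault` namespace and `vault-0` pod name are assumptions based on the defaults used by the validated patterns framework):

```text
oc exec -n vault vault-0 -- vault status
```

`Sealed: false` in the output indicates the vault is initialized and unsealed.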
+ +### [load-secrets](https://github.com/validatedpatterns/common/blob/main/scripts/vault-utils.sh) + +This process (which calls push_secrets) calls an Ansible playbook that reads the values-secret.yaml file and stores the data it finds there in vault as keypairs. These values are then usable in the kubernetes cluster. This pattern uses the ssh pubkey for the kiosk VMs via the external secrets operator. + +This script will update secrets in vault if re-run; it is safe to re-run if the secret values have not changed as well. + +### [configure-controller](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) + +There are two parts to this script - the first part, with the code [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/ansible_get_credentials.yml), retrieves the admin credentials from OpenShift to enable login to the AAP Controller. + +The second part, which is the bulk of the ansible-load-controller process is [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/ansible_configure_controller.yml) and uses the [controller configuration](https://github.com/redhat-cop/controller_configuration) framework to configure the Ansible Automation Platform instance that is installed by the helm chart. + +This division is so that users can adapt this pattern more easily if they're running AAP, but not on OpenShift. + +The script waits until AAP is ready, and then proceeds to: + +1. Install the manifest to entitle AAP +1. Configure the custom Credential Types the demo needs +1. Define an Organization for the Demo +1. Add a Project for the Demo +1. Add the Credentials for jobs to use +1. Configure Host inventory and inventory sources, and smart inventories to define target hosts +1. Configure an Execution environment for the Demo +1. Configure Job Templates for the Demo +1. 
Configure Schedules for the jobs that need to repeat + +*Note:* When run as part of `make install`, this script overrides its defaults with values derived from the environment (the repo that it is attached to and the branch that it is on). So if you need to re-run it and are using the make-based installation process, the most straightforward way to do so is to run `make upgrade`. + +# OpenShift GitOps (ArgoCD) + +OpenShift GitOps is central to this pattern as it is responsible for installing all of the other components. The installation process is driven through the installation of the [clustergroup](https://github.com/validatedpatterns/common/tree/main/clustergroup) chart. This in turn reads the repo's [global values file](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/values-global.yaml), which instructs it to read the [hub values file](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/values-hub.yaml). This is how the pattern knows to apply the Subscriptions and Applications listed in those files. + +# ODF (OpenShift Data Foundation) + +ODF is the storage framework needed to provide resilient storage for OpenShift Virtualization. It is managed via the helm chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/openshift-data-foundations). This is basically the same chart that our Medical Diagnosis pattern uses (see [here](/patterns/medical-diagnosis/getting-started/) for details on that pattern's use of storage). + +Please note that this chart will create a Noobaa S3 bucket named `nb.epoch_timestamp.cluster-domain` which will not be destroyed when the cluster is destroyed. + +# OpenShift Virtualization (KubeVirt) + +OpenShift Virtualization is a framework for running virtual machines as native Kubernetes resources. 
While it can run without hardware acceleration, the performance of virtual machines will suffer terribly; some testing on a similar workload indicated a 4-6x delay running without hardware acceleration, so at present this pattern requires hardware acceleration. The pattern provides a script [deploy-kubevirt-worker.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/deploy_kubevirt_worker.sh) which will provision a metal worker to run virtual machines for the pattern. + +OpenShift Virtualization currently supports only AWS and on-prem clusters; this is because of the way that baremetal resources are provisioned in GCP and Azure. We hope that OpenShift Virtualization can support GCP and Azure soon. + +The installation of the OpenShift Virtualization HyperConverged deployment is controlled by the chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/cnv). + +OpenShift Virtualization was chosen in this pattern to avoid dealing with the differences in galleries and templates of images between the different public cloud providers. The important thing from this pattern's standpoint is the availability of machine instances to manage (since we are simulating an Edge deployment scenario, which could either be bare metal instances or virtual machines); OpenShift Virtualization was the easiest and most portable way to spin up machine instances. It also provides mechanisms for defining the desired machine set declaratively. + +The creation of virtual machines is controlled by the chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/edge-gitops-vms). + +More details about the way we use OpenShift Virtualization are available [here](/ansible-edge-gitops/openshift-virtualization). + +# Ansible Automation Platform (AAP, formerly known as Ansible Tower) + +The use of Ansible Automation Platform is really the centerpiece of this pattern. 
We have recognized for some time that the notion and design principles of GitOps should apply to things outside of Kubernetes, and we believe this pattern +gives us a way to do that. + +All of the Ansible interactions are defined in a Git Repository; the Ansible jobs that configure the VMs are designed +to be idempotent (and are scheduled to run every 10 minutes on those VMs). + +The installation of AAP itself is governed by the chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/ansible-automation-platform). The post-installation configuration of AAP is done via the [ansible-load-controller.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) script. + +It is very much the intention of this pattern to make it easy to replace the specific Edge management use case with another one. Some ideas on how to do that can be found [here](/ansible-edge-gitops/ideas-for-customization/). + +Specifics of the Ansible content for this pattern can be seen [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/ansible). + +More details of the specifics of how AAP is configured are available [here](/ansible-edge-gitops/ansible-automation-platform/). 
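If you need to log in to the AAP controller UI yourself, the admin password can be read from the secret the AAP operator creates (a sketch; the operator names the secret `<instance>-admin-password`, and the `controller` instance name and `ansible-automation-platform` namespace used here are assumptions to adapt to your deployment):

```text
oc -n ansible-automation-platform get secret controller-admin-password \
  -o jsonpath='{.data.password}' | base64 -d
```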
+ +# Next Steps + +## [Help & Feedback](https://groups.google.com/g/validatedpatterns) +## [Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues) diff --git a/content/patterns/modern-virtualization/openshift-virtualization.md b/content/patterns/modern-virtualization/openshift-virtualization.md new file mode 100644 index 000000000..ea0b1af40 --- /dev/null +++ b/content/patterns/modern-virtualization/openshift-virtualization.md @@ -0,0 +1,357 @@ +--- +title: OpenShift Virtualization +weight: 50 +aliases: /ansible-edge-gitops/openshift-virtualization/ +--- + +# OpenShift Virtualization + +# Understanding the Edge GitOps VMs [Helm Chart](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/edge-gitops-vms) + +The heart of the Edge GitOps VMs helm chart is a [template file](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml) that was designed with a fair amount of flexibility in mind. Specifically, it allows you to specify: + +1. One or more "groups" of VMs (such as "kiosk" in our example) with an arbitrary number of instances per group +1. Different sizing parameters (cores, threads, memory, disk size) for each group +1. Different SSH keypair credentials for each group +1. Different OS's for each group +1. Different sets of TCP and/or UDP ports open for each group + +This is to allow you to set up, for example, 4 VMs of one type, 3 VMs of another, and 2 VMs of a third type. This will hopefully abstract the details of VM creation through OpenShift Virtualization and allow you to focus on what kinds and how many of the different sorts of VMs you might need to set up. (Note that AWS's smallest metal node is 72 cores and 192 GB of RAM at initial release, so there is plenty of room for different combinations/configurations.) 
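For example, a `values.yaml` fragment describing two such groups might look like the following (the group names and sizes here are hypothetical; the field names mirror the `kiosk` group in the chart's values file):

```yaml
vms:
  kiosk:
    count: 2
    flavor: medium
    workload: desktop
    os: rhel8
    memory: 4Gi
    cores: 1
    sshsecret: secret/data/hub/kiosk-ssh
    ports:
      - name: ssh
        port: 22
        protocol: TCP
        targetPort: 22
  signage:
    count: 3
    flavor: small
    workload: server
    os: rhel8
    memory: 2Gi
    cores: 1
    sshsecret: secret/data/hub/signage-ssh
    ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 80
```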
+ +## How we got here - Default OpenShift Virtualization templates + +OpenShift virtualization expects to install virtual machines from image templates by default, and provides a number of OpenShift templates to facilitate this. The default templates are installed in the `openshift` namespace; the OpenShift console also provides a wizard for creating VMs that use the same templates. + +As of OpenShift Virtualization 4.10.1, the following templates were available on installation: + +```text +$ oc get template + +NAME DESCRIPTION PARAMETERS OBJECTS +3scale-gateway 3scale's APIcast is an NGINX based API gateway used to integrate your interna... 17 (8 blank) 3 +amq63-basic Application template for JBoss A-MQ brokers. These can be deployed as standal... 11 (4 blank) 6 +amq63-persistent An example JBoss A-MQ application. For more information about using this temp... 13 (4 blank) 8 +amq63-persistent-ssl An example JBoss A-MQ application. For more information about using this temp... 18 (6 blank) 12 +amq63-ssl An example JBoss A-MQ application. For more information about using this temp... 16 (6 blank) 10 +apicurito Design beautiful, functional APIs with zero coding, using a visual designer f... 7 (1 blank) 7 +cache-service Red Hat Data Grid is an in-memory, distributed key/value store. 8 (1 blank) 4 +cakephp-mysql-example An example CakePHP application with a MySQL database. For more information ab... 21 (4 blank) 8 +cakephp-mysql-persistent An example CakePHP application with a MySQL database. For more information ab... 22 (4 blank) 9 +centos-stream8-desktop-large Template for CentOS Stream 8 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream8-desktop-medium Template for CentOS Stream 8 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream8-desktop-small Template for CentOS Stream 8 VM or newer. A PVC with the CentOS Stream disk i... 
4 (2 generated) 1 +centos-stream8-desktop-tiny Template for CentOS Stream 8 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream8-server-large Template for CentOS Stream 8 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream8-server-medium Template for CentOS Stream 8 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream8-server-small Template for CentOS Stream 8 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream8-server-tiny Template for CentOS Stream 8 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream9-desktop-large Template for CentOS Stream 9 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream9-desktop-medium Template for CentOS Stream 9 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream9-desktop-small Template for CentOS Stream 9 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream9-desktop-tiny Template for CentOS Stream 9 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream9-server-large Template for CentOS Stream 9 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream9-server-medium Template for CentOS Stream 9 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream9-server-small Template for CentOS Stream 9 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos-stream9-server-tiny Template for CentOS Stream 9 VM or newer. A PVC with the CentOS Stream disk i... 4 (2 generated) 1 +centos7-desktop-large Template for CentOS 7 VM or newer. A PVC with the CentOS disk image must be a... 4 (2 generated) 1 +centos7-desktop-medium Template for CentOS 7 VM or newer. A PVC with the CentOS disk image must be a... 4 (2 generated) 1 +centos7-desktop-small Template for CentOS 7 VM or newer. 
A PVC with the CentOS disk image must be a... 4 (2 generated) 1
centos7-desktop-tiny Template for CentOS 7 VM or newer. A PVC with the CentOS disk image must be a... 4 (2 generated) 1
centos7-server-large Template for CentOS 7 VM or newer. A PVC with the CentOS disk image must be a... 4 (2 generated) 1
centos7-server-medium Template for CentOS 7 VM or newer. A PVC with the CentOS disk image must be a... 4 (2 generated) 1
centos7-server-small Template for CentOS 7 VM or newer. A PVC with the CentOS disk image must be a... 4 (2 generated) 1
centos7-server-tiny Template for CentOS 7 VM or newer. A PVC with the CentOS disk image must be a... 4 (2 generated) 1
dancer-mysql-example An example Dancer application with a MySQL database. For more information abo... 18 (5 blank) 8
dancer-mysql-persistent An example Dancer application with a MySQL database. For more information abo... 19 (5 blank) 9
datagrid-service Red Hat Data Grid is an in-memory, distributed key/value store. 7 (1 blank) 4
datavirt64-basic-s2i Application template for JBoss Data Virtualization 6.4 services built using S2I. 20 (6 blank) 6
datavirt64-extensions-support-s2i An example JBoss Data Virtualization application. For more information about... 35 (9 blank) 10
datavirt64-ldap-s2i Application template for JBoss Data Virtualization 6.4 services that configur... 21 (6 blank) 6
datavirt64-secure-s2i An example JBoss Data Virtualization application. For more information about... 51 (22 blank) 8
decisionserver64-amq-s2i An example BRMS decision server A-MQ application. For more information about... 30 (5 blank) 10
decisionserver64-basic-s2i Application template for Red Hat JBoss BRMS 6.4 decision server applications... 17 (5 blank) 5
django-psql-example An example Django application with a PostgreSQL database. For more informatio... 19 (5 blank) 8
django-psql-persistent An example Django application with a PostgreSQL database. For more informatio... 20 (5 blank) 9
eap-xp3-basic-s2i Example of an application based on JBoss EAP XP. For more information about u... 20 (5 blank) 8
eap74-basic-s2i An example JBoss Enterprise Application Platform application. For more inform... 20 (5 blank) 8
eap74-https-s2i An example JBoss Enterprise Application Platform application configured with... 30 (11 blank) 10
eap74-sso-s2i An example JBoss Enterprise Application Platform application Single Sign-On a... 50 (21 blank) 10
fedora-desktop-large Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-desktop-medium Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-desktop-small Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-desktop-tiny Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-highperformance-large Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-highperformance-medium Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-highperformance-small Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-highperformance-tiny Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-server-large Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-server-medium Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-server-small Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fedora-server-tiny Template for Fedora 34 VM or newer. A PVC with the Fedora disk image must be... 4 (2 generated) 1
fuse710-console The Red Hat Fuse Console eases the discovery and management of Fuse applicati... 8 (1 blank) 5
httpd-example An example Apache HTTP Server (httpd) application that serves static content.... 9 (3 blank) 5
jenkins-ephemeral Jenkins service, without persistent storage.... 11 (all set) 7
jenkins-ephemeral-monitored Jenkins service, without persistent storage. ... 12 (all set) 8
jenkins-persistent Jenkins service, with persistent storage.... 13 (all set) 8
jenkins-persistent-monitored Jenkins service, with persistent storage. ... 14 (all set) 9
jws31-tomcat7-basic-s2i Application template for JWS applications built using S2I. 12 (3 blank) 5
jws31-tomcat7-https-s2i An example JBoss Web Server application configured for use with https. For mo... 17 (5 blank) 7
jws31-tomcat8-basic-s2i An example JBoss Web Server application. For more information about using thi... 12 (3 blank) 5
jws31-tomcat8-https-s2i An example JBoss Web Server application. For more information about using thi... 17 (5 blank) 7
jws56-openjdk11-tomcat9-ubi8-basic-s2i An example JBoss Web Server application. For more information about using thi... 10 (3 blank) 5
jws56-openjdk11-tomcat9-ubi8-https-s2i An example JBoss Web Server application. For more information about using thi... 15 (5 blank) 7
jws56-openjdk8-tomcat9-ubi8-basic-s2i An example JBoss Web Server application. For more information about using thi... 10 (3 blank) 5
jws56-openjdk8-tomcat9-ubi8-https-s2i An example JBoss Web Server application. For more information about using thi... 15 (5 blank) 7
mariadb-ephemeral MariaDB database service, without persistent storage. For more information ab... 8 (3 generated) 3
mariadb-persistent MariaDB database service, with persistent storage. For more information about... 9 (3 generated) 4
mysql-ephemeral MySQL database service, without persistent storage. For more information abou... 8 (3 generated) 3
mysql-persistent MySQL database service, with persistent storage. For more information about u... 9 (3 generated) 4
nginx-example An example Nginx HTTP server and a reverse proxy (nginx) application that ser... 10 (3 blank) 5
nodejs-postgresql-example An example Node.js application with a PostgreSQL database. For more informati... 18 (4 blank) 8
nodejs-postgresql-persistent An example Node.js application with a PostgreSQL database. For more informati... 19 (4 blank) 9
openjdk-web-basic-s2i An example Java application using OpenJDK. For more information about using t... 9 (1 blank) 5
postgresql-ephemeral PostgreSQL database service, without persistent storage. For more information... 7 (2 generated) 3
postgresql-persistent PostgreSQL database service, with persistent storage. For more information ab... 8 (2 generated) 4
processserver64-amq-mysql-persistent-s2i An example BPM Suite application with A-MQ and a MySQL database. For more inf... 49 (13 blank) 14
processserver64-amq-mysql-s2i An example BPM Suite application with A-MQ and a MySQL database. For more inf... 47 (13 blank) 12
processserver64-amq-postgresql-persistent-s2i An example BPM Suite application with A-MQ and a PostgreSQL database. For mor... 46 (10 blank) 14
processserver64-amq-postgresql-s2i An example BPM Suite application with A-MQ and a PostgreSQL database. For mor... 44 (10 blank) 12
processserver64-basic-s2i An example BPM Suite application. For more information about using this templ... 17 (5 blank) 5
processserver64-externaldb-s2i An example BPM Suite application with a external database. For more informati... 47 (22 blank) 7
processserver64-mysql-persistent-s2i An example BPM Suite application with a MySQL database. For more information... 40 (14 blank) 10
processserver64-mysql-s2i An example BPM Suite application with a MySQL database. For more information... 39 (14 blank) 9
processserver64-postgresql-persistent-s2i An example BPM Suite application with a PostgreSQL database. For more informa... 37 (11 blank) 10
rails-pgsql-persistent An example Rails application with a PostgreSQL database. For more information... 21 (4 blank) 9
rails-postgresql-example An example Rails application with a PostgreSQL database. For more information... 20 (4 blank) 8
redis-ephemeral Redis in-memory data structure store, without persistent storage. For more in... 5 (1 generated) 3
redis-persistent Redis in-memory data structure store, with persistent storage. For more infor... 6 (1 generated) 4
rhdm711-authoring Application template for a non-HA persistent authoring environment, for Red H... 76 (46 blank) 11
rhdm711-authoring-ha Application template for a HA persistent authoring environment, for Red Hat D... 92 (47 blank) 17
rhdm711-kieserver Application template for a managed KIE Server, for Red Hat Decision Manager 7... 61 (42 blank) 6
rhdm711-prod-immutable-kieserver Application template for an immutable KIE Server in a production environment,... 66 (45 blank) 8
rhdm711-prod-immutable-kieserver-amq Application template for an immutable KIE Server in a production environment... 80 (54 blank) 20
rhdm711-trial-ephemeral Application template for an ephemeral authoring and testing environment, for... 63 (40 blank) 8
rhel6-desktop-large Template for Red Hat Enterprise Linux 6 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel6-desktop-medium Template for Red Hat Enterprise Linux 6 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel6-desktop-small Template for Red Hat Enterprise Linux 6 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel6-desktop-tiny Template for Red Hat Enterprise Linux 6 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel6-server-large Template for Red Hat Enterprise Linux 6 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel6-server-medium Template for Red Hat Enterprise Linux 6 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel6-server-small Template for Red Hat Enterprise Linux 6 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel6-server-tiny Template for Red Hat Enterprise Linux 6 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-desktop-large Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-desktop-medium Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-desktop-small Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-desktop-tiny Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-highperformance-large Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-highperformance-medium Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-highperformance-small Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-highperformance-tiny Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-server-large Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-server-medium Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-server-small Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel7-server-tiny Template for Red Hat Enterprise Linux 7 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-desktop-large Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-desktop-medium Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-desktop-small Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-desktop-tiny Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-highperformance-large Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-highperformance-medium Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-highperformance-small Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-highperformance-tiny Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-server-large Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-server-medium Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-server-small Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel8-server-tiny Template for Red Hat Enterprise Linux 8 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-desktop-large Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-desktop-medium Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-desktop-small Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-desktop-tiny Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-highperformance-large Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-highperformance-medium Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-highperformance-small Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-highperformance-tiny Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-server-large Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-server-medium Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-server-small Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhel9-server-tiny Template for Red Hat Enterprise Linux 9 VM or newer. A PVC with the RHEL disk... 4 (2 generated) 1
rhpam711-authoring Application template for a non-HA persistent authoring environment, for Red H... 80 (46 blank) 12
rhpam711-authoring-ha Application template for a HA persistent authoring environment, for Red Hat P... 101 (47 blank) 20
rhpam711-kieserver-externaldb Application template for a managed KIE Server with an external database, for... 83 (59 blank) 8
rhpam711-kieserver-mysql Application template for a managed KIE Server with a MySQL database, for Red... 70 (42 blank) 9
rhpam711-kieserver-postgresql Application template for a managed KIE Server with a PostgreSQL database, for... 71 (42 blank) 9
rhpam711-managed Application template for a managed HA production runtime environment, for Red... 87 (46 blank) 14
rhpam711-prod Application template for a managed HA production runtime environment, for Red... 102 (55 blank) 28
rhpam711-prod-immutable-kieserver Application template for an immutable KIE Server in a production environment,... 76 (45 blank) 11
rhpam711-prod-immutable-kieserver-amq Application template for an immutable KIE Server in a production environment... 97 (58 blank) 23
rhpam711-prod-immutable-monitor Application template for a router and monitoring console in a production envi... 66 (44 blank) 14
rhpam711-trial-ephemeral Application template for an ephemeral authoring and testing environment, for... 63 (40 blank) 8
s2i-fuse710-spring-boot-2-camel Spring Boot 2 and Camel QuickStart. This example demonstrates how you can use... 18 (3 blank) 3
s2i-fuse710-spring-boot-2-camel-rest-3scale Spring Boot 2, Camel REST DSL and 3Scale QuickStart. This example demonstrate... 19 (3 blank) 5
s2i-fuse710-spring-boot-2-camel-xml Spring Boot 2 and Camel Xml QuickStart. This example demonstrates how you can... 18 (3 blank) 3
sso72-https An example RH-SSO 7 application. For more information about using this templa... 26 (15 blank) 6
sso72-mysql An example RH-SSO 7 application with a MySQL database. For more information a... 36 (20 blank) 8
sso72-mysql-persistent An example RH-SSO 7 application with a MySQL database. For more information a... 37 (20 blank) 9
sso72-postgresql An example RH-SSO 7 application with a PostgreSQL database. For more informat... 33 (17 blank) 8
sso72-postgresql-persistent An example RH-SSO 7 application with a PostgreSQL database. For more informat... 34 (17 blank) 9
sso73-https An example application based on RH-SSO 7.3 image. For more information about... 27 (16 blank) 6
sso73-mysql An example application based on RH-SSO 7.3 image. For more information about... 37 (21 blank) 8
sso73-mysql-persistent An example application based on RH-SSO 7.3 image. For more information about... 38 (21 blank) 9
sso73-ocp4-x509-https An example application based on RH-SSO 7.3 image. For more information about... 13 (7 blank) 5
sso73-ocp4-x509-mysql-persistent An example application based on RH-SSO 7.3 image. For more information about... 24 (12 blank) 8
sso73-ocp4-x509-postgresql-persistent An example application based on RH-SSO 7.3 image. For more information about... 21 (9 blank) 8
sso73-postgresql An example application based on RH-SSO 7.3 image. For more information about... 34 (18 blank) 8
sso73-postgresql-persistent An example application based on RH-SSO 7.3 image. For more information about... 35 (18 blank) 9
sso74-https An example application based on RH-SSO 7.4 on OpenJDK image. For more informa... 27 (16 blank) 6
sso74-ocp4-x509-https An example application based on RH-SSO 7.4 on OpenJDK image. For more informa... 13 (7 blank) 5
sso74-ocp4-x509-postgresql-persistent An example application based on RH-SSO 7.4 on OpenJDK image. For more informa... 21 (9 blank) 8
sso74-postgresql An example application based on RH-SSO 7.4 on OpenJDK image. For more informa... 34 (18 blank) 8
sso74-postgresql-persistent An example application based on RH-SSO 7.4 on OpenJDK image. For more informa... 35 (18 blank) 9
sso75-https An example application based on RH-SSO 7.5 on OpenJDK image. For more informa... 27 (16 blank) 6
sso75-ocp4-x509-https An example application based on RH-SSO 7.5 on OpenJDK image. For more informa... 13 (7 blank) 5
sso75-ocp4-x509-postgresql-persistent An example application based on RH-SSO 7.5 on OpenJDK image. For more informa... 21 (9 blank) 8
sso75-postgresql An example application based on RH-SSO 7.5 on OpenJDK image. For more informa... 34 (18 blank) 8
sso75-postgresql-persistent An example application based on RH-SSO 7.5 on OpenJDK image. For more informa... 35 (18 blank) 9
windows10-desktop-large Template for Microsoft Windows 10 VM. A PVC with the Windows disk image must... 3 (1 generated) 1
windows10-desktop-medium Template for Microsoft Windows 10 VM. A PVC with the Windows disk image must... 3 (1 generated) 1
windows10-highperformance-large Template for Microsoft Windows 10 VM. A PVC with the Windows disk image must... 3 (1 generated) 1
windows10-highperformance-medium Template for Microsoft Windows 10 VM. A PVC with the Windows disk image must... 3 (1 generated) 1
windows2k12r2-highperformance-large Template for Microsoft Windows Server 2012 R2 VM. A PVC with the Windows disk... 3 (1 generated) 1
windows2k12r2-highperformance-medium Template for Microsoft Windows Server 2012 R2 VM. A PVC with the Windows disk... 3 (1 generated) 1
windows2k12r2-server-large Template for Microsoft Windows Server 2012 R2 VM. A PVC with the Windows disk... 3 (1 generated) 1
windows2k12r2-server-medium Template for Microsoft Windows Server 2012 R2 VM. A PVC with the Windows disk... 3 (1 generated) 1
windows2k16-highperformance-large Template for Microsoft Windows Server 2016 VM. A PVC with the Windows disk im... 3 (1 generated) 1
windows2k16-highperformance-medium Template for Microsoft Windows Server 2016 VM. A PVC with the Windows disk im... 3 (1 generated) 1
windows2k16-server-large Template for Microsoft Windows Server 2016 VM. A PVC with the Windows disk im... 3 (1 generated) 1
windows2k16-server-medium Template for Microsoft Windows Server 2016 VM. A PVC with the Windows disk im... 3 (1 generated) 1
windows2k19-highperformance-large Template for Microsoft Windows Server 2019 VM. A PVC with the Windows disk im... 3 (1 generated) 1
windows2k19-highperformance-medium Template for Microsoft Windows Server 2019 VM. A PVC with the Windows disk im... 3 (1 generated) 1
windows2k19-server-large Template for Microsoft Windows Server 2019 VM. A PVC with the Windows disk im... 3 (1 generated) 1
windows2k19-server-medium Template for Microsoft Windows Server 2019 VM. A PVC with the Windows disk im... 3 (1 generated) 1
```

Additionally, you may copy and customize these templates if you wish. The [template file](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/rhel8-kiosk-with-svc.yaml) is an example of a customized template that was used to help develop this pattern.
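A copied template needs at least a new `metadata.name` (and usually a different namespace) before you edit its parameters and objects. As a rough sketch of the fields involved - the name and namespace below are illustrative assumptions, not values from the pattern:

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: my-kiosk-template        # hypothetical name for your copy
  namespace: edge-gitops-vms     # hypothetical namespace; the default templates live in "openshift"
parameters:
- name: NAME
  description: VM name
  generate: expression
  from: "rhel8-kiosk-[a-z0-9]{6}"  # oc process generates a random suffix from this expression
objects: []                        # the VirtualMachine (and Service) definitions go here
```

Everything under `objects` is expanded by `oc process`, with the parameters substituted in.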

### Creating a VM from the Console via Template

These templates can be run through the OpenShift Console from the Virtualization tab. Note the "Create VM" buttons on the right side of this picture:

[![console-template-vm-1](/images/ansible-edge-gitops/aeg-console-vm-template-1.png)](/images/ansible-edge-gitops/aeg-console-vm-template-1.png)

Clicking on the "Create VM" button brings up a wizard that looks like this:

[![console-template-wizard](/images/ansible-edge-gitops/console-vm-template-wizard.png)](/images/ansible-edge-gitops/console-vm-template-wizard.png)

Accepting the defaults from this wizard gives a success screen:

[![console-template-wizard-success](/images/ansible-edge-gitops/console-vm-template-wizard-success.png)](/images/ansible-edge-gitops/console-vm-template-wizard-success.png)

Until it is deleted, you can monitor the machine's lifecycle from the VirtualMachines tab:

[![console-monitor-vm](/images/ansible-edge-gitops/console-vm-spinning-up.png)](/images/ansible-edge-gitops/console-vm-spinning-up.png)

This is a great way to gain familiarity with how the system works, but we may want an interface we can use more programmatically.

### Creating a VM from the command line via `oc process`

This is a useful way to understand what kinds of objects OpenShift Virtualization creates and manages:

```text
$ oc process -n openshift rhel8-desktop-medium | oc apply -f -
virtualmachine.kubevirt.io/rhel8-q63yuvxpjdvy18l7 created
```

You could also use the "Create VM Wizard" in the OpenShift console.

### Another option - capturing template output and converting it into a Helm Chart

See details [here](/patterns/ansible-edge-gitops/ideas-for-customization/#howto-define-your-own-vm-sets-from-scratch).

## Components of the [virtual-machines](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml) template

### Setup - the mechanism for creating identifiers declaratively

The first part of the template file sets up some variables that we will use later as the template is expanded. We use a sequential numbering scheme for VM names because that is an easy way to make each item in the set declarative: it ensures that if you ask for 5 VMs of a particular type, they will have predictable names, and that if one is deleted, it will be replaced by a VM with the same name.

We use explicit "range" variables for the Go templating because the implicit range variable is easily "trampled", and we have at least two dimensions to iterate over: the VM "role", and the "index" within that role.

### The External Secret - SSH pubkey

The first item we define as part of this structure is an external secret to hold an SSH public key. This pubkey is mounted in the VM under an unprivileged user's home directory; generally that unprivileged user is expected to be able to sudo to root without a password. By default, RHEL images are configured to allow SSH access only via pubkey. In this pattern, the private and public keys for the SSH connections are loaded into both Vault (which we inherited from previous patterns) and Ansible Automation Platform.

Since the keys are defined per VM "group", it is possible and expected that you could have different keypairs for different groups of VMs. Nothing prevents you from using the same keypair for all machines across different groups, though.

While the pubkey is not truly a "secret", the availability of the External Secrets Operator made for a nice opportunity to allow for variance in configuration without necessarily requiring local customization of the pattern.
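As a sketch of what such an external secret can look like - the secret store name and Vault path below are illustrative assumptions, not the pattern's actual values:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: kiosk-ssh-pubkey             # hypothetical name
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend              # hypothetical SecretStore pointing at the hub's Vault
    kind: ClusterSecretStore
  target:
    name: kiosk-ssh-pubkey           # the Kubernetes Secret to create in the VM's namespace
  data:
  - secretKey: key
    remoteRef:
      key: secret/data/hub/kiosk-ssh   # hypothetical Vault path holding the keypair
      property: publickey
```

The resulting Secret carries only the public half of the keypair; the private key stays in Vault and AAP.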
The OpenShift Virtualization model has no way of knowing that multiple servers may have the same SSH credentials, and in fact cannot depend on this, so it creates a pubkey object by default for each VM; we imitate this behavior in the pattern.

### The VirtualMachine definition

The VirtualMachine definition is the biggest part of the template. All of it is derived from customization of the default templates that OpenShift Virtualization installs in the `openshift` namespace (especially most of the labels and annotations), with the following exceptions:

#### labels

* app

This is set to `$identifier` to match a general pattern with other applications.

* edge-gitops-role

This is set explicitly and used elsewhere in this pattern to help identify resources by role. The intention is to be able to use `edge-gitops-role` as a selector for targeting various kinds of queries, including (especially) Ansible inventories. Please note that, because of the way Kubernetes (and OpenShift) networking works, when you connect to a VM with Ansible you are actually connecting to the *Service* object, not to the VM directly. (Another way to look at it is that the Service object provides a network abstraction over the VM object.)

Other resources in the rest of the VirtualMachine definition are copied from the default template, with appropriate Helm variables included.

#### Initial user access

Note that the initial user (default: `cloud-user`) and initial password are customizable via values overrides. The `kiosk` type shows an example of how to use either a user/password specific to the type or a default for the chart, via the `coalesce` function.

### The Service definition

The Service definition is potentially complex. The purpose of this Service object is to expose all of the needed TCP and UDP network ports within the cluster.
(Providing access to them from outside the cluster would require Route or Ingress objects and would have significant security implications; access to these entities from outside the cluster is not the focus of this pattern, so we do not provide it at this time.)

A given VM may expose one port (for Ansible access, you need at least TCP/22), or it may expose many ports. You are free to define a service per port if you like, but it seems more convenient to define them all in a single service.

One aspect of the templating you may find interesting is the use of the `toPrettyJson` filter in the Go templating. Since YAML is a proper superset of JSON, this is a neat trick that allows us to include a nested data structure without having to worry about how to indent it. (Because `toPrettyJson` uses the square-bracket (`[]`) and curly-brace (`{}`) notation for arrays and hashes, YAML can interpret the result without regard to its indentation.)

## Accessing the VMs

There are three mechanisms for accessing these VMs:

### Ansible - keypair authentication

The SSH keypairs from your values-secret.yaml are loaded into both Vault and AAP for use later. The pattern currently defines one such keypair, `kiosk-ssh`, but could support more, such as `iot-ssh`, `gateway-ssh`, and so on. More details on how to expand on this pattern are described below.

AAP needs only the private key and the username as a machine credential. The public key is not truly a secret, but it seemed interesting and useful to use the External Secrets Operator to associate the public key with VM instances this way, and to avoid having to diverge from the upstream pattern to include local SSH pubkey specifications.

Note that the default SSH configuration for RHEL does not allow password-based logins via SSH, and it is at the very least inconvenient to copy the SSH private key into a VM inside the cluster, so the typical way the keypair will be used is through Ansible.
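Because Ansible reaches each VM through its Service, an inventory entry points at the Service's in-cluster DNS name rather than at the VM itself. As a hypothetical sketch (the host and service names below are illustrative, and in practice AAP builds its inventory rather than using a static file):

```yaml
# Hypothetical static inventory entry for a kiosk VM
kiosks:
  hosts:
    kiosk-001:
      # <service>.<namespace>.svc.cluster.local - the Service fronting the VM
      ansible_host: rhel8-kiosk-001.edge-gitops-vms.svc.cluster.local
      ansible_user: cloud-user
      ansible_ssh_private_key_file: ~/.ssh/kiosk-ssh   # private half of the kiosk-ssh keypair
```

In AAP, the private key and user name above would instead be supplied as a machine credential attached to the job template.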

### Virtual Machine Console Access via the OpenShift Console

Navigate to Virtualization -> VirtualMachines and make sure Project: All Projects or edge-gitops-vms is selected:

[![show-vms](/images/ansible-edge-gitops/aeg-show-vms.png)](/images/ansible-edge-gitops/aeg-show-vms.png)

Click on the "three dots" menu on the right, which will open a dialog like the following:

[![show-vm-open-console](/images/ansible-edge-gitops/aeg-open-vm-console.png)](/images/ansible-edge-gitops/aeg-open-vm-console.png)

*Note:* In OpenShift Virtualization 4.11, the "Open Console" option appears when you click on the virtual machine name in the OpenShift console. The dialog looks like this:

[![kubevirt411-vm-open-console](/images/ansible-edge-gitops/aeg-kubevirt411-con-ignition.png)](/images/ansible-edge-gitops/aeg-kubevirt411-con-ignition.png)

The virtual machine console view will either show a standard RHEL console login screen or, if the demo is working as designed, the Ignition application running in kiosk mode. If the console shows a standard RHEL login, it can be accessed using the initial user name (`cloud-user` by default) and password (which is specified in the Helm chart values as either the password specific to that machine group, the default cloudInit password, or a hardcoded default that can be seen in the template [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml)). On a VM created through the wizard or via `oc process` from a template, the password will be set on the VirtualMachine object in the `volumes` section.

### Initial User login (cloud-user)

In general, before the VMs have been configured by the Ansible Jobs, you can log in to the VMs on the console using the user and password you specified in the Helm chart, or else you can look at the VirtualMachine object to see what the username and password settings are.
The pattern, by design, replaces the typical console view with Firefox running in kiosk mode, but this mechanism can still be used if you change the console from "VNC Console" to "Serial Console".

# The "extra" VM Template

Also included in the edge-gitops-vms chart is a separate template that allows the creation of VMs with similar (though not identical) characteristics to the ones defined in the chart.

The [rhel8-kiosk-with-svc](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/rhel8-kiosk-with-svc.yaml) template is preserved as an intermediate step toward creating your own VM types, to show how the pipeline from default VM template -> customized template -> Helm-variable chart can work.

# Next Steps

## [Help & Feedback](https://groups.google.com/g/validatedpatterns)
## [Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues)

diff --git a/content/patterns/modern-virtualization/troubleshooting.md b/content/patterns/modern-virtualization/troubleshooting.md
new file mode 100644
index 000000000..1cc54f822
--- /dev/null
+++ b/content/patterns/modern-virtualization/troubleshooting.md
@@ -0,0 +1,11 @@
---
title: Troubleshooting
weight: 70
aliases: /ansible-edge-gitops/troubleshooting/
---

# Troubleshooting

## Our [Issue Tracker](https://github.com/validatedpatterns/ansible-edge-gitops/issues)

Please file an issue if you see a problem!