
Commit 96fad72: Merge pull request #530 from mhjacks/add_feo (Add FEO docs)
2 parents 36cd68c + afac8f1

File tree: 8 files changed (+364 -2 lines)


content/blog/2025-01-09-AGOF_v2.adoc (+2 -2)
@@ -290,7 +290,7 @@ An AGOF Pattern MUST define the following repositories:
 1. AGOF repository (default: https://github.com/validatedpatterns/agof.git). This repository contains AGOF itself,
 and is scaffolding for the rest of the process.

-1. An Infrastructure as Code repository. This is the main "pattern" content. It contains an AAP configuration,
+2. An Infrastructure as Code repository. This is the main "pattern" content. It contains an AAP configuration,
 expressed in terms suitable for processing by the infra.aap_configuration collection. This repository will contain
 references to other resources, which are described immediately following.

@@ -301,7 +301,7 @@ accomplishing a particular result. Multiple collection repositories may be defined
 provided by collections available via Ansible Galaxy or Automation Hub, it is still necessary to provide a playbook
 to serve as the basis for a Job Template in AAP to do the configuration work.

-1. One or more inventory repositories. Ansible Good Practices state that inventories should be separated from
+2. One or more inventory repositories. Ansible Good Practices state that inventories should be separated from
 the content. This allows for using separate inventories with the same collection codebase - a feature that users
 frequently requested from Ansible Edge GitOps because they wanted to change it from configuring virtual machines in
 AWS to use actual hardware nodes (for example). It would also be possible to have effectively an empty inventory and
@@ -0,0 +1,57 @@
---
title: Federated Edge Observability
date: 2025-02-01
tier: sandbox
summary: This pattern uses OpenShift Virtualization to simulate an edge environment for VMs, which then report metrics via OpenTelemetry.
rh_products:
- Red Hat OpenShift Container Platform
- Red Hat Ansible Automation Platform
- Red Hat OpenShift Virtualization
- Red Hat Enterprise Linux
- Red Hat OpenShift Data Foundation
industries:
aliases: /federated-edge-observability
links:
  install: getting-started
  help: https://groups.google.com/g/validatedpatterns
  bugs: https://github.com/validatedpatterns-sandbox/federated-edge-observability/issues
  ci: federatedobservability
---
# Federated Edge Observability

## Background

Organizations are interested in accelerating their deployment speeds and improving delivery quality in their Edge environments, where many devices may not fully or even partially embrace the GitOps philosophy. Further, there are VMs and other devices that can and should be managed with Ansible. This pattern explores some of the possibilities of using an OpenShift-based Ansible Automation Platform deployment to manage Edge devices, based on work done with a partner in the chemical industry.

This pattern uses OpenShift Virtualization (the productization of KubeVirt) to simulate the Edge environment for VMs.

### Solution elements

- How to use a GitOps approach to manage virtual machines, either in public clouds (limited to AWS for technical reasons) or on-prem OpenShift installations
- How to integrate AAP into OpenShift
- How to manage Edge devices using AAP hosted in OpenShift

### Red Hat Technologies

- Red Hat OpenShift Container Platform (Kubernetes)
- Red Hat Ansible Automation Platform (formerly known as "Ansible Tower")
- Red Hat OpenShift GitOps (ArgoCD)
- OpenShift Virtualization (KubeVirt)
- Red Hat Enterprise Linux 9

### Other Technologies this Pattern Uses

- HashiCorp Vault
- External Secrets Operator
- OpenTelemetry
- Grafana
- Mimir

## Architecture

Similar to other patterns, this pattern starts with a central management hub, which hosts the AAP and Vault components as well as the observability collection and visualization components.
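To make the metrics flow concrete, the sketch below shows what an OpenTelemetry Collector configuration for an edge VM could look like: host metrics are scraped locally and forwarded to a Mimir-compatible remote-write endpoint on the hub, secured with the pattern's TLS certificates. This is purely illustrative; the endpoint URL, file paths, and scraper selection are assumptions, not the pattern's actual generated configuration.

```yaml
# Hypothetical edge-side collector config (placeholder names and endpoint).
receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      memory: {}
      filesystem: {}

exporters:
  prometheusremotewrite:
    # Mimir accepts Prometheus remote-write on /api/v1/push
    endpoint: https://otel-collector.example.com/api/v1/push
    tls:
      cert_file: /etc/otel/tls.crt
      key_file: /etc/otel/tls.key

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [prometheusremotewrite]
```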
## What Next

- [Getting Started: Deploying and Validating the Pattern](getting-started)
@@ -0,0 +1,63 @@
---
title: Ansible Automation Platform
weight: 40
aliases: /federated-edge-observability/ansible-automation-platform/
---

# Ansible Automation Platform

## How to Log In

The default login user is `admin`, and the password is generated randomly at install time; you will need it to log in to the AAP interface. Logging in is optional: the pattern configures the AAP instance itself, retrieving the password with the same technique as the `ansible_get_credentials.sh` script described below. If you want to inspect the AAP instance, or change any aspect of its configuration, there are two ways to log in and look at it. Both mechanisms are equivalent; you get the same password to the same instance using either technique.

## Via the OpenShift Console

In the OpenShift console, navigate to Workloads > Secrets and select the `ansible-automation-platform` project if you want to limit the number of Secrets shown.

[![secrets-navigation](/images/ansible-edge-gitops/ocp-console-secrets-aap-admin-password.png)](/images/ansible-edge-gitops/ocp-console-secrets-aap-admin-password.png)

The Secret you are looking for is in the `ansible-automation-platform` project and is named `controller-admin-password`. If you click on it, you can see the password field under Data. It is shown revealed below to demonstrate that it matches what the script method below retrieves:

[![secrets-detail](/images/ansible-edge-gitops/ocp-console-aap-admin-password-detail.png)](/images/ansible-edge-gitops/ocp-console-aap-admin-password-detail.png)
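The same Secret can also be read from the command line. This is a sketch, assuming a logged-in `oc` session and the default namespace and Secret names described above:

```shell
# Read the AAP admin password directly (requires a logged-in oc session):
#
#   oc get secret controller-admin-password \
#     -n ansible-automation-platform \
#     -o jsonpath='{.data.password}' | base64 -d
#
# Secret data is stored base64-encoded, which is why the decode step is
# needed; the round trip looks like this:
encoded=$(printf '%s' 'hunter2' | base64)   # aHVudGVyMg==
printf '%s' "$encoded" | base64 -d          # prints the original: hunter2
```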
## Via [ansible_get_credentials.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_get_credentials.sh)

With your KUBECONFIG set, you can run `./scripts/ansible_get_credentials.sh` from your top-level pattern directory. This uses your OpenShift cluster admin credentials to retrieve the URL of your Ansible Automation Platform instance, as well as the password for its `admin` user, which the AAP operator auto-generates by default. The output looks like this (your password will be different):

```text
./scripts/ansible_get_credentials.sh
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match
'all'

PLAY [Install manifest on AAP controller] ******************************************************************************

TASK [Retrieve API hostname for AAP] ***********************************************************************************
ok: [localhost]

TASK [Set ansible_host] ************************************************************************************************
ok: [localhost]

TASK [Retrieve admin password for AAP] *********************************************************************************
ok: [localhost]

TASK [Set admin_password fact] *****************************************************************************************
ok: [localhost]

TASK [Report AAP Endpoint] *********************************************************************************************
ok: [localhost] => {
    "msg": "AAP Endpoint: https://controller-ansible-automation-platform.apps.mhjacks-aeg.blueprints.rhecoeng.com"
}

TASK [Report AAP User] *************************************************************************************************
ok: [localhost] => {
    "msg": "AAP Admin User: admin"
}

TASK [Report AAP Admin Password] ***************************************************************************************
ok: [localhost] => {
    "msg": "AAP Admin Password: CKollUjlir0EfrQuRrKuOJRLSQhi4a9E"
}

PLAY RECAP *************************************************************************************************************
localhost                  : ok=7    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```
@@ -0,0 +1,242 @@
---
title: Getting Started
weight: 10
aliases: /federated-edge-observability/getting-started/
---

# Deploying the Federated Edge Observability Pattern

# General Prerequisites

1. An OpenShift cluster (go to [the OpenShift console](https://console.redhat.com/openshift/create) to create one). Currently this pattern only supports AWS. It could also run on a bare-metal OpenShift cluster, since OpenShift Virtualization supports that, but some customization would be needed because the defaults target AWS. We hope that GCP and Azure will support provisioning metal workers in due course so this can become a more clearly multicloud pattern.
1. A GitHub account (and, optionally, a token for it with repository permissions, to read from and write to your forks).
1. The `helm` binary; see [here](https://helm.sh/docs/intro/install/).
1. Ansible, which is used in the bootstrap and provisioning phases of the pattern install (and to configure Ansible Automation Platform).
1. Please note that when run on AWS, this pattern provisions an additional worker node, a metal instance (c5n.metal), to run the Edge virtual machines. This worker is provisioned through the OpenShift MachineAPI and is automatically cleaned up when the cluster is destroyed.

The use of this pattern depends on having a running Red Hat OpenShift cluster. It is desirable to have a cluster for deploying the GitOps management hub assets and a separate cluster (or clusters) for the managed clusters.

If you do not have a running Red Hat OpenShift cluster, you can start one on a public or private cloud by using [Red Hat's cloud service](https://console.redhat.com/openshift/create).
# Credentials Required in Pattern

In addition to the OpenShift cluster, you will need to prepare a number of secrets, or credentials, which the pattern uses in various ways. To do this, copy the [values-secret.yaml template](https://github.com/validatedpatterns-sandbox/federated-edge-observability/blob/main/values-secret.yaml.template) to your home directory as `values-secret.yaml` and replace the explanatory text as follows:

* AWS credentials (an access key and a secret key). These are used to provision the metal worker in AWS (which hosts the VMs). If the Portworx variant of the pattern is used, these credentials are also used to modify IAM rules to allow Portworx to run correctly.

```yaml
---
# NEVER COMMIT THESE VALUES TO GIT
version: "2.0"
secrets:
```
* A username and SSH keypair (private key and public key). These will be used to provide access to the Kiosk VMs in the demo.

```yaml
- name: vm-ssh
  fields:
  - name: username
    value: 'Username of user to attach privatekey and publickey to - cloud-user is a typical value'

  - name: privatekey
    value: 'Private ssh key of the user who will be able to elevate to root to provision kiosks'

  - name: publickey
    value: 'Public ssh key of the user who will be able to elevate to root to provision kiosks'
```
* A Red Hat Subscription Management username and password. These will be used to register the Kiosk VMs with the Red Hat Content Delivery Network, to install content on the VMs, and to install the OpenTelemetry collector.

```yaml
- name: rhsm
  fields:
  - name: username
    value: 'username of user to register RHEL VMs'
  - name: password
    value: 'password of rhsm user in plaintext'
```
* A userData block to use with cloud-init. This will allow console login as the user you specify (traditionally cloud-user) with the password you specify. The value in cloud-init is used as the default; roles in the edge-gitops-vms chart can also specify other secrets to use by referencing them in the role block.

```yaml
- name: cloud-init
  fields:
  - name: userData
    value: |-
      #cloud-config
      user: 'username of user for console, probably cloud-user'
      password: 'a suitable password to use on the console'
      chpasswd: { expire: False }
```
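For example, a filled-in cloud-init block might look like the following. The username and password here are placeholders, not values from the pattern:

```yaml
- name: cloud-init
  fields:
  - name: userData
    value: |-
      #cloud-config
      user: cloud-user
      password: 'CHANGE_ME-console-password'
      chpasswd: { expire: False }
```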
* A manifest file with an entitlement to run Ansible Automation Platform. This file (which will be a .zip file) will be posted to the Ansible Automation Platform instance to enable its use. Instructions for creating a manifest file can be found [here](https://www.redhat.com/en/blog/how-create-and-use-red-hat-satellite-manifest).

```yaml
- name: aap-manifest
  fields:
  - name: b64content
    path: 'full pathname of file containing Satellite Manifest for entitling Ansible Automation Platform'
    base64: true
```
* An automation hub token, generated at <https://console.redhat.com/ansible/automation-hub/token>. This is needed for the Ansible Configuration-as-Code tools.

```yaml
- name: automation-hub-token
  fields:
  - name: token
    value: 'An automation hub token for retrieving Certified and Validated Ansible content'
```

* An (optional) AGOF vault file. In general, it references a file on disk:

```yaml
- name: agof-vault-file
  fields:
  - name: agof-vault-file
    path: 'full pathname of a valid agof_vault file for secrets to overlay the iac config'
    base64: true
```

For this pattern, use the following (you do not need additional secrets for this pattern):

```yaml
- name: agof-vault-file
  fields:
  - name: agof-vault-file
    value: '---'
    base64: true
```

* Certificates for the OpenTelemetry collector infrastructure:

```yaml
- name: otel-cert
  fields:
  - name: tls.key
    path: 'full pathname to a pre-existing tls key'

  - name: tls.crt
    path: 'full pathname to a pre-existing tls certificate'
```

"Snakeoil" (that is, self-signed) certs will be generated automatically by the `make snakeoil-certs` target, which is run as part of `make install`:

```yaml
- name: otel-cert
  fields:
  - name: tls.key
    path: ~/federated-edge-observability-otel-collector-edge-observability-stack.key

  - name: tls.crt
    path: ~/federated-edge-observability-otel-collector-edge-observability-stack.crt
```
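If you prefer to generate the self-signed certificates by hand rather than relying on `make snakeoil-certs`, a sketch like the following produces an equivalent key and certificate pair. The subject name is a placeholder, and the exact options the Makefile uses may differ:

```shell
# Generate a self-signed ("snakeoil") TLS key and certificate.
# Filenames match the defaults shown above; the subject is a placeholder.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout ~/federated-edge-observability-otel-collector-edge-observability-stack.key \
  -out ~/federated-edge-observability-otel-collector-edge-observability-stack.crt \
  -subj '/CN=federated-edge-observability-otel-collector'
```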
# How to deploy

1. Log in to your cluster using `oc login`:

    ```sh
    oc login
    ```

    or set KUBECONFIG to the path to your `kubeconfig` file. For example:

    ```sh
    export KUBECONFIG=~/my-ocp-env/hub/auth/kubeconfig
    ```
1. Fork the [federated-edge-observability](https://github.com/validatedpatterns-sandbox/federated-edge-observability) repo on GitHub. It is necessary to fork to preserve customizations you make to the default configuration files.

1. Clone the forked copy of this repository.

    ```sh
    git clone [email protected]:your-username/federated-edge-observability.git
    ```
1. Create a local copy of the Helm values file that can safely include credentials.

    WARNING: DO NOT COMMIT THIS FILE. You do not want to push personal credentials to GitHub.

    ```sh
    cp values-secret.yaml.template ~/values-secret.yaml
    vi ~/values-secret.yaml
    ```

1. Customize the deployment for your cluster (optional: the defaults in `values-global.yaml` are designed to work in AWS):

    ```sh
    git checkout -b my-branch
    vi values-global.yaml
    git add values-global.yaml
    git commit values-global.yaml
    git push origin my-branch
    ```
Please review the [Patterns quick start](/learn/quickstart/) page. This section describes deploying the pattern using `pattern.sh`. You can also deploy the pattern using the [validated pattern operator](/infrastructure/using-validated-pattern-operator/); if you use the operator, skip to Installation Validation below.

1. (Optional) Preview the changes. If you'd like to review what will be deployed with the pattern, `pattern.sh` provides a way to show it:

    ```sh
    ./pattern.sh make show
    ```

1. Apply the changes to your cluster. This will install the pattern via the Validated Patterns Operator and then run any necessary follow-up steps:

    ```sh
    ./pattern.sh make install
    ```

The installation process will take between 45 and 60 minutes to complete.
# Installation Validation

* Check that the operators have been installed using the OpenShift console:

    ```text
    OpenShift Console Web UI -> Installed Operators
    ```

![federated-edge-observability-operators](/images/federated-edge-observability/FEO-operators.png "Federated Edge Observability Operators")

* Check that all applications are synchronized. Under the project `federated-edge-observability-hub`, click on the URL for the `hub-gitops-server`. All applications will sync, but this takes time, as ODF has to install completely, and OpenShift Virtualization cannot provision VMs until the metal node has been fully provisioned and is ready.

![federated-edge-observability-applications](/images/federated-edge-observability/FEO-applications.png "Federated Edge Observability Applications")

* Under Virtualization > Virtual Machines, the virtual machines will eventually show as "Running." Once they are in the "Running" state, the provisioning workflow will run on them, install the OpenTelemetry collector, and start reporting metrics to the Edge Observability Stack in the hub cluster.

![federated-edge-observability-vms](/images/federated-edge-observability/FEO-vms.png "Federated Edge Observability Virtual Machines")

* The Grafana graphs should be receiving data and drawing graphs for each of the nodes:

![federated-edge-observability-grafana](/images/federated-edge-observability/FEO-grafana.png "Federated Edge Observability Graphs")

Please see [Ansible Automation Platform](/federated-edge-observability/ansible-automation-platform/) for more information on how this pattern uses the Ansible Automation Platform Operator for OpenShift.
# Infrastructure Elements of this Pattern

## [Ansible Automation Platform](https://www.redhat.com/en/technologies/management/ansible)

A fully functional installation of the Ansible Automation Platform operator is installed on your OpenShift cluster to configure and maintain the VMs for this demo. AAP maintains a dynamic inventory of kiosk machines and can configure a VM from template to fully functional kiosk in about 10 minutes.

## OpenShift [Virtualization](https://docs.openshift.com/container-platform/4.16/virt/about_virt/about-virt.html)

OpenShift Virtualization is a Kubernetes-native way to run virtual machine workloads. It is used in this pattern to host VMs simulating an Edge environment; the chart that configures the VMs is designed to be flexible, allowing easy customization to model different VM sizes, mixes, versions, and profiles for future pattern development.

## HashiCorp [Vault](https://www.vaultproject.io/)

Vault is used as the authoritative source for the Kiosk ssh pubkey via the External Secrets Operator. As part of this pattern, HashiCorp Vault has been installed. Refer to the section on [Vault](https://validatedpatterns.io/secrets/vault/).

# Next Steps

## [Help & Feedback](https://groups.google.com/g/validatedpatterns)

## [Report Bugs](https://github.com/validatedpatterns-sandbox/federated-edge-observability/issues)