Commit c897127

feat: add GH Actions integration workflow examples for HRM and cluster provisioner (#56)

Adds EPS integration workflow examples for HRM and the cluster provisioner; updates documentation and adds a reference EPS workflows diagram.

1 parent 3327614 · commit c897127

13 files changed: +1140 −69 lines

examples/eps_to_csv/config.ini

Lines changed: 0 additions & 12 deletions
This file was deleted. A new configuration file (21 additions & 0 deletions) was added in its place:
@@ -0,0 +1,21 @@
+# Configuration for generating Source of Truth (SOT) CSV files from EPS data.
+
+[sot_columns]
+# Defines the exact column names expected in the *final* Cluster Intent and template SOT CSV files.
+# These are the target column names after any renaming specified in [rename_columns].
+# If a column is NOT renamed below, its name here MUST match the corresponding column name in EPS; otherwise that column is ignored during CSV generation.
+# The order of columns in the lists below dictates the exact column order in the generated CSV file.
+
+# Column names of the Cluster Intent SOT to be generated.
+cluster_intent_sot = ["store_id", "zone_name", "machine_project_id", "fleet_project_id", "cluster_name", "location", "node_count", "cluster_ipv4_cidr", "services_ipv4_cidr", "external_load_balancer_ipv4_address_pools", "sync_repo", "sync_branch", "sync_dir", "secrets_project_id", "git_token_secrets_manager_name", "cluster_version", "maintenance_window_start", "maintenance_window_end", "maintenance_window_recurrence", "subnet_vlans", "recreate_on_delete"]
+
+# Column names of the Cluster template SOT to be generated.
+cluster_data_sot = ["cluster_name", "cluster_group", "project_id", "cluster_tags", "country_code", "store_id", "gateway_ip", "bos_vm_ip", "qsrsoft_vm_ip", "gsc01_vm_ip", "gsc02_vm_ip", "cluster_viewer_groups", "vm_support_groups", "vm_migrate_groups"]
+
+[rename_columns]
+# Specifies mappings for renaming columns from the source EPS data to the final column names defined in [sot_columns] above.
+# Format: <source_eps_column_name> = <target_sot_column_name>
+name = cluster_name
+group = cluster_group
+tags = cluster_tags
+unique_zone_id = store_id
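As a hedged sketch (not the project's actual code), a file of this shape could be loaded with Python's `configparser`; since the list-valued options are written as Python list literals, `ast.literal_eval` can parse them. The inline config string below is a trimmed stand-in for the real file.

```python
# Sketch: reading a SoT CSV config of the shape shown above.
# The parsing approach (ast.literal_eval for list values) is an assumption.
import ast
import configparser

SAMPLE = """
[sot_columns]
cluster_intent_sot = ["store_id", "zone_name", "cluster_name"]

[rename_columns]
name = cluster_name
unique_zone_id = store_id
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

# List-valued options are stored as Python list literals in the file.
intent_columns = ast.literal_eval(config["sot_columns"]["cluster_intent_sot"])

# [rename_columns] maps <source_eps_column_name> -> <target_sot_column_name>.
renames = dict(config["rename_columns"])

print(intent_columns)  # ['store_id', 'zone_name', 'cluster_name']
print(renames["unique_zone_id"])  # store_id
```

Note that `configparser` lower-cases option names by default, which matches the all-lowercase column names used here.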

examples/eps_to_csv/README.md renamed to examples/eps_to_csv/resources/README.md

Lines changed: 8 additions & 5 deletions
@@ -1,10 +1,10 @@
 # EPS to CSV Converter

-This set of script and configuration is designed to fetch cluster data from an EPS API and generate CSV files suitable for use as Source of Truth (SoT) data, specifically for cluster intent and cluster template data.
+This script and the CSV configuration defined in the [config directory](../config) are designed to fetch cluster data from an EPS API and generate CSV files suitable for use as Source of Truth (SoT) data, specifically for cluster intent and cluster template data.

 ## Purpose

-The primary goal is to automate the retrieval and transformation of cluster information from an EPS system into standardized CSV formats. These CSV files can then be used for potential integrations with HRM, Cluster Provisioner and other tools that use GitOps workflows.
+The primary goal is to automate the retrieval and transformation of cluster information from the EPS system into standardized CSV formats. These CSV files can then be used for integrations with HRM, Cluster Provisioner, and other tools that use GitOps workflows.

 ## Components

@@ -14,7 +14,7 @@ The script requires Google Cloud [Application Default Credentials](https://cloud
 * A service account credentials file (the `GOOGLE_APPLICATION_CREDENTIALS` env variable set to the path of the service account credentials)
 * Run from a Workload Identity Federation location, which sets up `GOOGLE_APPLICATION_CREDENTIALS` for you (e.g. GitHub Actions)
 * If the gcloud SDK is installed, the credentials generated by `gcloud auth application-default login --impersonate-service-account`
-* The attached service account returned by the metadata server if run from Compute Engine, Cloud Run
+* The attached service account returned by the metadata server if run from Compute Engine, Cloud Run, GKE, etc.

 ### Required Environment Variables
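The credential sources listed above are the ones `google.auth.default()` resolves, in roughly that order. A minimal sketch of how a converter might obtain ADC and call the EPS API follows; `EPS_API_URL` and the helper names are assumptions, and `google-auth` is imported lazily so the module loads even where it is not installed.

```python
# Sketch: fetching EPS cluster data with Application Default Credentials.
# EPS_API_URL is an assumed environment variable name, not from the source.
import os


def clusters_endpoint(base_url: str) -> str:
    """Build the EPS clusters endpoint from the API base URL."""
    return base_url.rstrip("/") + "/api/v1/clusters"


def fetch_clusters() -> list:
    # Imported here so the module is importable without google-auth installed.
    import google.auth
    from google.auth.transport.requests import AuthorizedSession

    # Resolves credentials in the order described above: the
    # GOOGLE_APPLICATION_CREDENTIALS file, gcloud ADC, or the metadata server.
    credentials, _project = google.auth.default()
    session = AuthorizedSession(credentials)
    response = session.get(clusters_endpoint(os.environ["EPS_API_URL"]))
    response.raise_for_status()
    return response.json()
```

`AuthorizedSession` refreshes the token automatically, so the same session can be reused across requests.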

@@ -39,6 +39,9 @@ This file defines the structure and renaming rules for the output CSV files.
 * **`[sot_columns]`**: Specifies the exact column names expected in the final CSVs for both `cluster_intent_sot` and `cluster_data_sot`. These names should align with the data structure in the EPS system. If a column should appear under a different name than the one EPS uses, specify the mapping explicitly in `[rename_columns]` below.
 * **`[rename_columns]`**: Defines rules for renaming columns fetched from the EPS API to the desired names in the processed DataFrame before CSV generation (e.g., `name = cluster_name` fetches the cluster attribute `name` from EPS and writes the data under the CSV column `cluster_name`).

+* Please [check here](../config/sot_csv_config.ini) for a sample config file.
+
 ### Main Conversion Script (`eps_to_csv_converter.py`)

 This Python script orchestrates the entire process:
@@ -50,7 +53,7 @@ This Python script orchestrates the entire process:
 * Makes a GET request to the EPS API endpoint (`/api/v1/clusters`) to retrieve cluster data in JSON format.
 * **Processes Data**:
   * Flattens the nested JSON response from the API into a Pandas DataFrame.
-  * Removes specified prefixes (like `data_`, `intent_`) from flattened column names.
+  * Removes prefixes (like `data_`, `intent_`) from flattened column names for the configured `source_of_truth_type`.
   * Handles potential duplicate column names arising from prefix removal.
   * Applies the column renaming rules defined in `config.ini`.
 * **Generates CSVs**:
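The flatten → strip-prefix → dedupe → rename steps above can be sketched with pandas; the sample payload, prefix tuple, and rename map below are illustrative assumptions, not the real EPS schema.

```python
# Sketch of the processing pipeline described above (assumed sample data).
import pandas as pd

PREFIXES = ("intent_", "data_")  # assumed prefixes for the chosen SoT type
RENAMES = {"name": "cluster_name", "unique_zone_id": "store_id"}

# Illustrative stand-in for the JSON returned by /api/v1/clusters.
payload = [{"name": "store-001", "intent": {"node_count": 3}, "unique_zone_id": "S001"}]

# Flatten nested JSON into columns such as "intent_node_count".
df = pd.json_normalize(payload, sep="_")


def strip_prefix(column: str) -> str:
    for prefix in PREFIXES:
        if column.startswith(prefix):
            return column[len(prefix):]
    return column


df.columns = [strip_prefix(c) for c in df.columns]
# Prefix removal can create duplicate names; keep the first occurrence.
df = df.loc[:, ~df.columns.duplicated()]
# Apply the [rename_columns] mapping.
df = df.rename(columns=RENAMES)

print(sorted(df.columns))  # ['cluster_name', 'node_count', 'store_id']
```

From here, `df.to_csv(path, columns=..., index=False)` would emit the CSV in the column order dictated by `[sot_columns]`.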
@@ -59,7 +62,7 @@ This Python script orchestrates the entire process:

 ### Install Dependencies

-These can be installed from the utils directory using :
+These can be installed from the requirements file in the same directory as the Python script using:

 ```
 pip install -r requirements.txt
 ```
