Fix review findings and restrict ODF to disconnected mode
- Pass shell variables to inline Python via os.environ instead of
string interpolation to prevent injection risks
- Redact credentials from setup_ceph.sh stdout output
- Use if/else for prometheus module enable instead of unconditional
success after || true
- Quote quayBackendRGWConfiguration values in odf.sh for YAML safety
- Add language identifiers to fenced code blocks in ODF_CEPH_CI.md
- Fix generate_enclave_vars.sh summary to reflect ODF/RadosGWStorage
when storage plugin is odf
- Download cephadm from official download.ceph.com with GitHub fallback
- Remove connected ODF job (ODF only runs in disconnected mode)
- When ODF is selected, skip LVMS disconnected to avoid parallel runs
- Reduce master VM extra disk to 60G for ODF (storage is on LZ)
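The first bullet replaces string interpolation of shell variables into inline Python with environment-variable passing. A minimal sketch of the pattern, driven from Python for a self-contained demo (in the actual shell scripts the variable is simply exported before invoking `python3 -c`; the variable name `CEPH_POOL` is illustrative, not from the scripts):

```python
import os
import subprocess

# A value that would break (or exploit) naive string interpolation.
pool = "quay'); import os; os.system('id'); ('"

# Unsafe (the pattern the commit removes): the value becomes Python source.
#   subprocess.run(["python3", "-c", f"print('{pool}')"])

# Safe: pass the value through the child's environment instead, so the
# inline Python reads it as plain data via os.environ.
env = dict(os.environ, CEPH_POOL=pool)
result = subprocess.run(
    ["python3", "-c", 'import os; print(os.environ["CEPH_POOL"])'],
    env=env, capture_output=True, text=True, check=True,
)
print(result.stdout.strip() == pool)  # → True: the hostile value is inert
```

Because the child process never parses the value as code, shell metacharacters and quote characters in it cannot alter what the inline script does.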
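On the prometheus bullet: appending `|| true` makes a failed `ceph mgr module enable prometheus` indistinguishable from success in the CI log. A sketch of the if/else shape, with a stub standing in for the real `ceph` call so the snippet is runnable:

```shell
#!/bin/sh
# Stub simulating `ceph mgr module enable prometheus` failing;
# the real script would call the ceph CLI here.
enable_prometheus() { return 1; }

# Before: enable_prometheus || true   # failure silently looks like success
# After: branch explicitly so the outcome is visible in the CI log.
if enable_prometheus; then
  echo "prometheus mgr module enabled"
else
  echo "WARNING: prometheus mgr module could not be enabled" >&2
  echo "continuing without metrics"
fi
```

The script still continues on failure (metrics are optional), but the failure is now logged rather than swallowed.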
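The cephadm bullet describes a primary/fallback fetch: try download.ceph.com first, fall back to GitHub. A sketch of the control flow with stubbed fetchers (the real `curl` URLs are release-specific and omitted here; the primary stub "fails" to exercise the fallback path):

```shell
#!/bin/sh
# Each stub stands in for something like `curl -fsSL "$url" -o cephadm`.
fetch_from_ceph_mirror() { return 1; }  # simulate primary mirror unreachable
fetch_from_github()      { return 0; }  # simulate GitHub succeeding

if fetch_from_ceph_mirror; then
  echo "cephadm fetched from download.ceph.com"
elif fetch_from_github; then
  echo "cephadm fetched from GitHub fallback"
else
  echo "ERROR: unable to download cephadm from any source" >&2
  exit 1
fi
```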
docs/ODF_CEPH_CI.md (5 additions, 6 deletions)
@@ -6,7 +6,7 @@ This document describes the containerized Ceph cluster used to provide external

 ODF in external mode connects to a pre-existing Ceph cluster rather than deploying its own. For CI, we run a single-node Ceph cluster on the Landing Zone VM using cephadm. All Ceph daemons run as podman containers on the LZ, which shares the same libvirt network as the OpenShift nodes.

-```
+```text
 CI Runner Machine (runs: [self-hosted, enclave-large])
 ├── libvirt VMs
 │   ├── Landing Zone VM (192.168.X.2)
@@ -32,7 +32,7 @@ CI Runner Machine (runs: [self-hosted, enclave-large])

 Cephadm filters out raw loop devices, so each OSD uses an LVM stack:

-```
+```text
@@ -70,14 +70,14 @@
 Ceph runs on the Landing Zone VM, which is on the same libvirt cluster network as the OpenShift master nodes. All communication is direct:

-```
+```text
 Master node (192.168.X.10) -> Landing Zone (192.168.X.2:9283/7480) -- same L2 network
 ```

 No firewall configuration is needed. No gateway routing. No SDN workarounds.

 ## How ODF Config Flows to the Cluster

-```
+```text
 setup_ceph.sh runs on LZ (via SSH from CI runner)
 ↓ writes files to ~/ceph-config/
 ├── odf_external_config.json
@@ -104,7 +104,7 @@ The `ODF_EXTERNAL_CONFIG` and `QUAY_BACKEND_RGW_CONFIG` environment variables ar

 ### Runner Labels

-ODF runs use the same runner labels as LVMS: `[self-hosted, enclave-large]`. No special `odf`runner label is required since Ceph is deployed dynamically on the Landing Zone.
+ODF runs use the runner labels `[self-hosted, enclave-large, odf]`. The `odf` label ensures ODF jobs are routed to runners with sufficient disk space for Ceph loopback OSDs.

 ### Ceph Setup Step
@@ -121,7 +121,6 @@ This step is conditional on `STORAGE_PLUGIN == 'odf'` and is skipped for LVMS ru

 | Command | Description |
 |---------|-------------|
-|`/test e2e-connected-odf`| Connected mode E2E with ODF storage |
 |`/test e2e-disconnected-odf`| Disconnected mode E2E with ODF storage |