Commit 03d6697: "Research podman kube and openshift tests" (1 file changed, +359 −0)

---
title: Research Openshift tests in Zuul alternatives
authors: mmassari
---

This research aims to answer [this card](https://github.com/orgs/packit/projects/7/views/1?pane=issue&itemId=43441287).

## Openshift tests using podman kube play

Following the suggestions in [this research](https://packit.dev/research/testing/openshift-to-podman-kube-play), I did a quick & dirty setup for running the *packit-service openshift tests* using pods created with `podman kube play`.

### Quick and dirty steps to make them run

#### 1. Convert the jinja templates into plain yaml files

This is a **dirty** ansible playbook that does it:

```yaml
# ansible-playbook -vv -c local -i localhost render_templates.yaml
---
- name: Render jinja2 templates
  hosts: localhost
  vars:
    validate_certs: true
    service: "{{ lookup('env', 'SERVICE') | default('packit', True) }}"
    deployment: "dev"
    tenant: packit # MP+ tenant
    with_tokman: true
    with_fedmsg: true
    with_redis_commander: false
    with_flower: false
    with_dashboard: true
    with_beat: true
    with_pushgateway: true
    with_repository_cache: false
    repository_cache_storage: 4Gi
    push_dev_images: false
    with_fluentd_sidecar: false
    postgres_version: 13
    image: quay.io/packit/packit-service:{{ deployment }}
    image_worker: quay.io/packit/packit-worker:{{ deployment }}
    image_fedmsg: quay.io/packit/packit-service-fedmsg:{{ deployment }}
    image_dashboard: quay.io/packit/dashboard:{{ deployment }}
    image_tokman: quay.io/packit/tokman:{{ deployment }}
    image_fluentd: quay.io/packit/fluentd-splunk-hec:latest
    # project_dir is set in tasks/project-dir.yml
    path_to_secrets: "{{ project_dir }}/secrets/{{ service }}/{{ deployment }}"
    # to be used in Image streams as importPolicy:scheduled value
    auto_import_images: "{{ (deployment != 'prod') }}"
    # used in dev/zuul deployment to tag & push images to cluster
    # https://github.com/packit/deployment/issues/112#issuecomment-673343049
    # container_engine: "{{ lookup('pipe', 'command -v podman 2> /dev/null || echo docker') }}"
    container_engine: docker
    celery_app: packit_service.worker.tasks
    celery_retry_limit: 2
    celery_retry_backoff: 3
    workers_all_tasks: 1
    workers_short_running: 0
    workers_long_running: 0
    distgit_url: https://src.fedoraproject.org/
    distgit_namespace: rpms
    sourcegit_namespace: "" # fedora-source-git only
    pushgateway_address: http://pushgateway
    # Check that the deployment repo is up-to-date
    check_up_to_date: true
    # Check that the current vars file is up-to-date with the template
    check_vars_template_diff: true
    deployment_repo_url: https://github.com/packit/deployment.git
    # used by a few tasks below
    k8s_apply: false
    tokman:
      workers: 1
      resources:
        requests:
          memory: "88Mi"
          cpu: "5m"
        limits:
          memory: "128Mi"
          cpu: "50m"
    appcode: PCKT-002
    project: myproject
    host: https://api.crc.testing:6443
    api_key: ""
    validate_certs: false
    check_up_to_date: false
    push_dev_images: false # pushing dev images manually!
    check_vars_template_diff: false
    with_tokman: false
    with_fedmsg: false
    with_redis_commander: false
    with_flower: false
    with_beat: false
    with_dashboard: false
    with_pushgateway: false
    with_fluentd_sidecar: false
    managed_platform: false
    workers_all_tasks: 1
    workers_short_running: 0
    workers_long_running: 0
    path_to_secrets: "{{ project_dir }}/secrets/{{ service }}/{{ deployment }}"
    sandbox_namespace: "packit-dev-sandbox"
    packit_service_project_dir: "/home/maja/PycharmProjects/packit-service"
  tasks:
    - include_tasks: tasks/project-dir.yml
    - name: include variables
      ansible.builtin.include_vars: "{{ project_dir }}/vars/{{ service }}/{{ deployment }}.yml"
      tags:
        - always

    - name: Getting deploymentconfigs
      include_tasks: tasks/set-facts.yml
      tags:
        - always

    - name: Include extra secret vars
      ansible.builtin.include_vars:
        file: "{{ path_to_secrets }}/extra-vars.yml"
        name: vault
      tags:
        - always

    # to be able to read the github_app_id from the configuration file in tokman
    - name: include packit-service configuration
      ansible.builtin.include_vars:
        file: "{{ path_to_secrets }}/packit-service.yaml"
        name: packit_service_config
      tags:
        - tokman

    - name: include extra secret vars
      ansible.builtin.include_vars: "{{ path_to_secrets }}/extra-vars.yml"
      tags:
        - always

    - name: render templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/redis.yml.j2"
        dest: /tmp/redis.yaml
    - name: render templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/postgres.yml.j2"
        dest: /tmp/postgres.yaml
    - name: render templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/packit-service.yml.j2"
        dest: /tmp/packit-service.yaml
    - name: render templates
      vars:
        component: packit-worker
        queues: "short-running,long-running"
        worker_replicas: "1"
        worker_requests_memory: "384Mi"
        worker_requests_cpu: "100m"
        worker_limits_memory: "1024Mi"
        worker_limits_cpu: "400m"
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/packit-worker.yml.j2"
        dest: /tmp/packit-worker.yaml
    - name: render postgres templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/secret-postgres.yml.j2"
        dest: /tmp/secret-postgres.yaml
    - name: render packit-secrets templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/secret-packit-secrets.yml.j2"
        dest: /tmp/secret-packit-secrets.yaml
    - name: render packit-config templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/secret-packit-config.yml.j2"
        dest: /tmp/secret-packit-config.yaml
    - name: render secret sentry templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/secret-sentry.yml.j2"
        dest: /tmp/secret-sentry.yaml
    - name: render secret splunk templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/secret-splunk.yml.j2"
        dest: /tmp/secret-splunk.yaml
    - name: render secret ssh templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/secret-packit-ssh.yml.j2"
        dest: /tmp/secret-packit-ssh.yaml
    - name: render secret aws templates
      ansible.builtin.template:
        src: "{{ project_dir }}/openshift/secret-aws.yml.j2"
        dest: /tmp/secret-aws.yaml
```
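
After the playbook runs, a quick sanity check that every template actually landed in `/tmp` can be sketched like this (the file list mirrors the `dest:` values in the tasks above; the helper name is my own):

```shell
#!/bin/bash
# Verify that every file the playbook should have rendered exists and is
# non-empty. The list mirrors the dest: values in the tasks above.
rendered_files=(
    /tmp/redis.yaml /tmp/postgres.yaml /tmp/packit-service.yaml
    /tmp/packit-worker.yaml /tmp/secret-postgres.yaml
    /tmp/secret-packit-secrets.yaml /tmp/secret-packit-config.yaml
    /tmp/secret-sentry.yaml /tmp/secret-splunk.yaml
    /tmp/secret-packit-ssh.yaml /tmp/secret-aws.yaml
)

check_rendered() {
    local missing=0 f
    for f in "$@"; do
        # -s: file exists and has a size greater than zero
        [ -s "$f" ] || { echo "missing or empty: $f" >&2; missing=1; }
    done
    return "$missing"
}

# usage: check_rendered "${rendered_files[@]}"
```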

#### 2. Tweak the generated yaml files and play our main pods locally

We are now able to run redis, postgres, the service and the worker locally, using the Openshift configuration files and `podman kube play`:

```bash
podman kube play /tmp/redis.yaml
podman kube play /tmp/secret-postgres.yaml
podman kube play /tmp/postgres.yaml
podman kube play /tmp/secret-packit-secrets.yaml
podman kube play /tmp/secret-packit-config.yaml
podman kube play /tmp/secret-sentry.yaml
podman kube play /tmp/secret-splunk.yaml
podman kube play /tmp/secret-packit-ssh.yaml
podman kube play /tmp/secret-aws.yaml

# inject a securityContext so the service container does not run as root
sed -i "s/resources:/securityContext:\n runAsUser: 1024\n runAsNonRoot: true\n resources:/" /tmp/packit-service.yaml
podman kube play --replace /tmp/packit-service.yaml

# podman kube play cannot play StatefulSet objects, so turn the worker into a Deployment
sed -i "s/StatefulSet/Deployment/" /tmp/packit-worker.yaml
sed -i "s/resources:/securityContext:\n runAsUser: 1024\n runAsNonRoot: true\n resources:/" /tmp/packit-worker.yaml
podman kube play --replace /tmp/packit-worker.yaml
```
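
Tearing everything down again is symmetrical; a small helper (my naming, teardown in reverse play order) that runs `podman kube play --down` for each played file might look like:

```shell
#!/bin/bash
# Tear down every pod created above. `podman kube play --down` stops and
# removes the pods defined in the given Kubernetes YAML file.
teardown_kube_pods() {
    # PODMAN is overridable, e.g. PODMAN=echo for a dry run
    local podman="${PODMAN:-podman}" f
    for f in \
        /tmp/packit-worker.yaml \
        /tmp/packit-service.yaml \
        /tmp/secret-aws.yaml \
        /tmp/secret-packit-ssh.yaml \
        /tmp/secret-splunk.yaml \
        /tmp/secret-sentry.yaml \
        /tmp/secret-packit-config.yaml \
        /tmp/secret-packit-secrets.yaml \
        /tmp/postgres.yaml \
        /tmp/secret-postgres.yaml \
        /tmp/redis.yaml
    do
        "$podman" kube play --down "$f"
    done
}
```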

#### 3. Run the openshift packit-service tests locally

Apply this patch to the packit-service repo, **adjusting** the `hostPath` in it to point to your local packit-service checkout:

```diff
diff --git a/files/test-in-openshift.yaml b/files/test-in-openshift.yaml
index 0d555a13..2740aa4b 100644
--- a/files/test-in-openshift.yaml
+++ b/files/test-in-openshift.yaml
@@ -1,9 +1,12 @@
 ---
-kind: Job
-apiVersion: batch/v1
+kind: Deployment
+apiVersion: apps/v1
 metadata:
   name: packit-tests
 spec:
+  replicas: 1
+  strategy:
+    type: Recreate
   template:
     spec:
       volumes:
@@ -12,10 +15,13 @@ spec:
         - name: packit-config
           secret: { secretName: packit-config }
         - name: test-src-pv
-          persistentVolumeClaim: { claimName: test-src-pvc }
+          hostPath:
+            path: "/home/maja/PycharmProjects/packit-service/"
+            type: Directory
+          #persistentVolumeClaim: { claimName: test-src-pvc }
         - name: test-data-pv
           persistentVolumeClaim: { claimName: test-data-pvc }
-      restartPolicy: Never
+      #restartPolicy: Never
       containers:
         - name: packit-tests
           image: quay.io/packit/packit-service-tests:stg
@@ -41,11 +47,15 @@ spec:
             - name: packit-config
               mountPath: /home/packit/.config
             - name: test-src-pv
-              mountPath: /src
+              mountPath: /src:Z
             - name: test-data-pv
               mountPath: /tmp/test_data
-          command: ["bash", "/src/files/run_tests.sh"]
-  backoffLimit: 1
+          #privileged: true
+          #securityContext:
+          #  runAsUser: 1024
+          #  runAsNonRoot: true
+          command: ["/bin/bash"]
+          args: ["-c", "sleep 1800"]
 ---
 kind: PersistentVolumeClaim
 apiVersion: v1
diff --git a/files/test-src-mounter.yaml b/files/test-src-mounter.yaml
index 20ec681b..297064a5 100644
--- a/files/test-src-mounter.yaml
+++ b/files/test-src-mounter.yaml
@@ -13,6 +13,6 @@ spec:
     - name: packit-tests
       image: quay.io/packit/packit-service-tests:stg
       volumeMounts:
-        - mountPath: /src
+        - mountPath: /home/maja/PycharmProjects/packit-service/
           name: test-src-pv
       command: ["bash", "-c", "sleep 10000"]
```

Now you can run the packit-service openshift tests using podman kube, without starting the *service* and the *worker*. Remember to run `podman kube play --down /tmp/xxx.yaml` for every `podman kube play /tmp/xxx.yaml` you ran above:

```bash
podman kube play /tmp/redis.yaml
podman kube play /tmp/secret-postgres.yaml
podman kube play /tmp/postgres.yaml
podman kube play /tmp/secret-packit-secrets.yaml
podman kube play /tmp/secret-packit-config.yaml
podman kube play --replace /home/maja/PycharmProjects/packit-service/files/test-src-mounter.yaml
podman kube play --replace /home/maja/PycharmProjects/packit-service/files/test-in-openshift-get-data.yaml
podman kube play --replace /home/maja/PycharmProjects/packit-service/files/test-in-openshift.yaml
podman exec -ti packit-tests-pod-packit-tests /bin/bash
# inside the container:
sh /src/files/run_tests.sh
```
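
If you don't need the interactive shell, the last two commands can be collapsed into one non-interactive exec (a sketch; the container name and script path are the ones used in the block above):

```shell
#!/bin/bash
# Run the test suite in one shot inside the already-running tests container,
# instead of attaching an interactive shell first.
run_openshift_tests() {
    # PODMAN is overridable, e.g. PODMAN=echo for a dry run
    local podman="${PODMAN:-podman}"
    "$podman" exec packit-tests-pod-packit-tests sh /src/files/run_tests.sh
}
```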

There will be two failing tests:

```
============================================== short test summary info ===============================================
FAILED tests_openshift/openshift_integration/test_pkgtool.py::Pkgtool::test_pkgtool_clone - requre.exceptions.ItemNotInStorage: Keys not in storage:/src/tests_openshift/openshift_integration/test_data/test...
FAILED tests_openshift/openshift_integration/test_sandcastle.py::test_get_api_client - kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
========================== 2 failed, 172 passed, 3 skipped, 37 warnings in 70.40s (0:01:10) ==========================
```

I think the first one could be fixed by improving this setup, but not the second one:

```python
________________________________________________ test_get_api_client _________________________________________________

    def test_get_api_client():
        """let's make sure we can get k8s API client"""
>       assert sandcastle.Sandcastle.get_api_client()

tests_openshift/openshift_integration/test_sandcastle.py:9:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.9/site-packages/sandcastle/api.py:324: in get_api_client
    load_kube_config(client_configuration=configuration)
/usr/local/lib/python3.9/site-packages/kubernetes/config/kube_config.py:792: in load_kube_config
    loader = _get_kube_config_loader(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

filename = '~/.kube/config', config_dict = None, persist_config = True
kwargs = {'active_context': None, 'config_persister': <bound method KubeConfigMerger.save_changes of <kubernetes.config.kube_config.KubeConfigMerger object at 0x7ff3a1ea7d30>>}
kcfg = <kubernetes.config.kube_config.KubeConfigMerger object at 0x7ff3a1ea7d30>

    def _get_kube_config_loader(
        filename=None,
        config_dict=None,
        persist_config=False,
        **kwargs):
        if config_dict is None:
            kcfg = KubeConfigMerger(filename)
            if persist_config and 'config_persister' not in kwargs:
                kwargs['config_persister'] = kcfg.save_changes

            if kcfg.config is None:
>               raise ConfigException(
                    'Invalid kube-config file. '
                    'No configuration found.')
E               kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.

/usr/local/lib/python3.9/site-packages/kubernetes/config/kube_config.py:751: ConfigException
```
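
The failure is purely environmental: `load_kube_config` looks for a kubeconfig (`$KUBECONFIG`, falling back to `~/.kube/config`), and the tests container has neither the file nor a cluster behind it. A tiny preflight check (a hypothetical helper of my own) makes that precondition explicit:

```shell
#!/bin/bash
# test_get_api_client can only pass where a kubeconfig (and a reachable
# cluster) exists; this is why podman kube play alone is not enough.
has_kubeconfig() {
    # same lookup order the kubernetes client library uses:
    # $KUBECONFIG if set and non-empty, otherwise ~/.kube/config
    [ -s "${KUBECONFIG:-$HOME/.kube/config}" ]
}

# usage:
#   has_kubeconfig || echo "no kubeconfig: test_get_api_client will fail"
```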

## Summary

[`podman kube play`](https://docs.podman.io/en/v4.4/markdown/podman-kube-play.1.html) cannot be used to test:

- *sandcastle*: we need a k8s cluster to be able to use the `kubernetes` library. We could deploy pods in the cluster using [`podman kube apply`](https://docs.podman.io/en/v4.4/markdown/podman-kube-apply.1.html), but we would still need an up and running cluster.
- *deployment*: the Openshift tests in the *deployment* repo check that pods are up and running on an Openshift dev instance; we cannot check the same using `podman kube play` or `podman kube apply` (we would be testing different deployment settings...).

`podman kube play` can be used for the openshift tests in the *packit-service* project that are not related to *sandcastle*, but `docker-compose` should be enough for those as well, so I don't really see an advantage in using `podman kube play`.

For the tests in *deployment* and *sandcastle*, and for those in *packit-service* which reference *sandcastle*, we still need a running k8s cluster.

If I get it correctly, the **strimzi** project has tests running on Testing Farm using *minikube*: https://developers.redhat.com/articles/2023/08/17/how-testing-farm-makes-testing-your-upstream-project-easier
For this reason I think we can probably do something similar using Openshift (maybe using Openshift Local; I think it makes sense to test everything against an Openshift instance).
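
As a very rough sketch (everything below is an assumption based on the linked article, not a working setup): a strimzi-style job would start a local cluster first, then apply the same rendered manifests through `kubectl`/`oc` instead of `podman kube play`:

```shell
#!/bin/bash
# Hypothetical outline of a minikube / OpenShift Local based test job.
# Cluster bootstrap is environment-specific and left as comments below.
apply_rendered_manifests() {
    # KUBECTL is overridable, e.g. KUBECTL=oc on OpenShift Local,
    # or KUBECTL=echo for a dry run
    local kubectl="${KUBECTL:-kubectl}" f
    for f in /tmp/redis.yaml /tmp/postgres.yaml \
             /tmp/packit-service.yaml /tmp/packit-worker.yaml; do
        "$kubectl" apply -f "$f"
    done
}

# e.g., on a Testing Farm machine (assumed flow):
#   minikube start      # or: crc start, for OpenShift Local
#   apply_rendered_manifests
```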
