Run cucumber tests out of cluster #2372
Conversation
Hello sylvainsenechal, my role is to assist you with the merge of this pull request.
Available options
Available commands
Status report is not available.
Jira issue not found: the Jira issue ZENKO-5209 was not found.
Force-pushed from d4dac97 to f3e4d82
Not a big fan of this file: it's quite long, with a flag that depends on whether it runs for ctst or zenko tests.
In a follow-up PR I want to refine it to make it work for all tests without flags.
I also want to do something about world parameters vs. env variables. Currently, for cucumber tests, what we have is:
- Env var XXX is defined in end2end.yaml
- In this file, we create a world parameter that maps Xxx=XXX
- Cucumber tests use this.parameters.Xxx

We could remove one level of indirection and use env variables directly in cucumber tests. This would also help track env variable usage better, because here the world parameters are sometimes renamed, for example:
KeycloakUsername = $OIDC_USERNAME
So when grepping for OIDC_USERNAME, you can't directly see that it's used, because the test uses this.parameters.KeycloakUsername (see the sketch below).
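For illustration, a minimal sketch of that indirection (names and shapes are illustrative, not the actual setup):

```typescript
// Current indirection (sketch): the env variable defined in end2end.yaml is
// renamed while building the cucumber world parameters.
const worldParameters = {
    KeycloakUsername: process.env.OIDC_USERNAME, // rename hides OIDC_USERNAME from grep
};

// Step definitions then read the renamed parameter:
//   this.parameters.KeycloakUsername

// Proposed alternative (sketch): read the env variable directly in the step
// definition, so grepping for OIDC_USERNAME finds every usage.
const keycloakUsername = process.env.OIDC_USERNAME;
```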
Not directly related to this PR, but I didn't want to create a ticket just for this: azure/docker-entrypoint and the following Dockerfile aren't used; what's actually used is the azure-mock.yaml file, where the image is defined directly.
By the way, in the deleted Dockerfile there is a typo in the azurite version (an extra whitespace), so I'm not even sure it was working before 🧐
```yaml
containers:
- image: ghcr.io/scality/cloudserver:8.8.59
  name: aws-mock
  ports:
```
Defined again at line 83.
```typescript
};

type VolumeGetConfig = {
    targetZenkoKubeconfigPath?: string;
```
These are translated into CLI commands for drctl, and they were misnamed.
Waiting for approval
The following approvals are needed before I can proceed with the merge:
Force-pushed from e2cc589 to 2acec2d
```diff
-After deployment is done, which you can follow by opening another terminal, you will be able to access S3 service through a port-forward.
-First find a cloudserver connector using the following command:
+After deployment is done, the devcontainer setup configures ingress endpoints and `/etc/hosts` entries
```
Nit: not for now, but using /etc/hosts is probably a no-go for local development... so we'll eventually need to come back to this.
````diff
 ```bash
-aws s3 ls --endpoint http://localhost:8080
+aws s3 ls --endpoint http://s3.zenko.local
````
The endpoint can also be set in configuration. For example, in my .aws/config I have defined a scaleway profile:

```ini
[profile scaleway]
endpoint_url = https://s3.fr-par.scw.cloud
region = fr-par
```
I moved the endpoint to the "configure set" commands (see the sketch below). Anyway, I don't think this is used much; I would hope that with Codespaces we can just write a test and run it to test things, including SDK commands.
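A sketch of what moving the endpoint into the "configure set" commands could look like (the hostname is an assumption based on the ingress setup above, and the endpoint_url config key requires a reasonably recent AWS CLI):

```bash
# Assumed ingress hostname; adjust to whatever the devcontainer setup configures
aws configure set endpoint_url http://s3.zenko.local
```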
```bash
export ACCESS_KEY=$(kubectl get secret end2end-management-vault-admin-creds.v1 -o jsonpath='{.data.accessKey}' | base64 -d)
export SECRET_KEY=$(kubectl get secret end2end-management-vault-admin-creds.v1 -o jsonpath='{.data.secretKey}' | base64 -d)
```
```diff
-Then configure aws cli with the following command
+Configure the AWS CLI:
```
```bash
aws configure set aws_access_key_id $ACCESS_KEY
aws configure set aws_secret_access_key $SECRET_KEY
aws configure set region us-east-1
```
Nit: should this be done by setup-e2e-env?
It could be, but I don't think it's needed. I touched on this in the other comment: this should barely be used, as I'd expect people to just develop a test and run it directly, so there's no real need to run S3 CLI commands here. Also, setup-e2e-env is more for things that are used both by Codespaces and the GitHub CI, not only by Codespaces.
```typescript
    // which would hang when called from Node.js exec() whose stdin pipe is
    // never closed by the caller.
    return execShellCommand(
        `kubectl run ${podName} --image=busybox:1 --attach --rm --restart=Never --overrides='${spec}'`,
```
This is not the way to do it: it should be done with kubectl debug instead.
What's the problem with kubectl run 🤔 Does the pod stay around after the test has run?
There's no "problem" with kubectl run, it's just not the same use case: kubectl run is designed to run a "workload" (i.e., some production stuff), while kubectl debug is designed to do exactly what you want here (i.e., running a command in the context of another pod, even if it does not have a shell, ...).
Sure, it can work with kubectl run, especially in CI, but the future is to use kubectl debug (instead of kubectl run / kubectl exec) for most troubleshooting procedures, for example.
In particular, kubectl debug already knows how to "share" volumes, so there's no need to add coupling on the way the pod is started. A sketch of that variant follows.
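A minimal sketch of the kubectl debug variant, reusing the helper's execShellCommand (untested; targetPod and command are assumptions, and busybox supplies the shell the distroless image lacks):

```typescript
// Sketch only: attach an ephemeral busybox container to the existing pod with
// kubectl debug, instead of creating a standalone pod with kubectl run.
// execShellCommand is assumed to be the same helper as above.
function runInPodWithDebug(targetPod: string, command: string) {
    return execShellCommand(
        `kubectl debug ${targetPod} --image=busybox:1 --attach --quiet -- sh -c '${command}'`,
    );
}
```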
Force-pushed from f3ec7ad to c55b40f
```bash
local name="$1"
local host="$2"
local service="$3"
local port="${4:-name: http}"
```
Would it perhaps be nicer for this to always be a number, with number: ${port} below? Then we could just use 80 as the default instead of name: http.
Ah OK, I didn't know we could use the name: http syntax when the port is 80 (see the snippet below).
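For reference, a minimal Ingress backend sketch showing the two equivalent ways to reference the port (the service name is hypothetical):

```yaml
backend:
  service:
    name: example-service   # hypothetical service name
    port:
      name: http            # current default: reference the port by name
      # number: 80          # suggested alternative: always pass a number
```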
```diff
 TMATE_SERVER_ED25519_FINGERPRINT: ${{ secrets.TMATE_SERVER_ED25519_FINGERPRINT }}
 # Mocha reporter configuration
-MOCHA_FILE: /reports/test-results-[hash].xml
+MOCHA_FILE: reports/test-results-[hash].xml
```
Not correct; this is properly fixed in #2375 → to be removed from here (and rebased on top of it; hopefully it will merge in a few minutes...).
```bash
# PRA admin credentials (may not exist for non-PRA runs; ignore errors)
export ADMIN_PRA_ACCESS_KEY_ID=$(kubectl get secret ${ZENKO_NAME}-pra-management-vault-admin-creds.v1 -o jsonpath='{.data.accessKey}' 2>/dev/null | base64 -d 2>/dev/null || echo "")
export ADMIN_PRA_SECRET_ACCESS_KEY=$(kubectl get secret ${ZENKO_NAME}-pra-management-vault-admin-creds.v1 -o jsonpath='{.data.secretKey}' 2>/dev/null | base64 -d 2>/dev/null || echo "")

# --- 11. Service user credentials ---
BACKBEAT_LCBP_1_CREDS=$(kubectl get secret -l app.kubernetes.io/name=backbeat-lcbp-user-creds,app.kubernetes.io/instance=${ZENKO_NAME} -o jsonpath='{.items[0].data.backbeat-lifecycle-bp-1\.json}' | base64 -d)
BACKBEAT_LCC_1_CREDS=$(kubectl get secret -l app.kubernetes.io/name=backbeat-lcc-user-creds,app.kubernetes.io/instance=${ZENKO_NAME} -o jsonpath='{.items[0].data.backbeat-lifecycle-conductor-1\.json}' | base64 -d)
BACKBEAT_LCOP_1_CREDS=$(kubectl get secret -l app.kubernetes.io/name=backbeat-lcop-user-creds,app.kubernetes.io/instance=${ZENKO_NAME} -o jsonpath='{.items[0].data.backbeat-lifecycle-op-1\.json}' | base64 -d)
BACKBEAT_QP_1_CREDS=$(kubectl get secret -l app.kubernetes.io/name=backbeat-qp-user-creds,app.kubernetes.io/instance=${ZENKO_NAME} -o jsonpath='{.items[0].data.backbeat-qp-1\.json}' | base64 -d)
SORBET_FWD_2_ACCESSKEY=$(kubectl get secret -l app.kubernetes.io/name=sorbet-fwd-creds,app.kubernetes.io/instance=${ZENKO_NAME} -o jsonpath='{.items[0].data.accessKey}' | base64 -d)
SORBET_FWD_2_SECRETKEY=$(kubectl get secret -l app.kubernetes.io/name=sorbet-fwd-creds,app.kubernetes.io/instance=${ZENKO_NAME} -o jsonpath='{.items[0].data.secretKey}' | base64 -d)
export SERVICE_USERS_CREDENTIALS=$(echo '{"backbeat-lifecycle-bp-1":'"${BACKBEAT_LCBP_1_CREDS}"',"backbeat-lifecycle-conductor-1":'"${BACKBEAT_LCC_1_CREDS}"',"backbeat-lifecycle-op-1":'"${BACKBEAT_LCOP_1_CREDS}"',"backbeat-qp-1":'"${BACKBEAT_QP_1_CREDS}"',"sorbet-fwd-2":{"accessKey":"'"${SORBET_FWD_2_ACCESSKEY}"'","secretKey":"'"${SORBET_FWD_2_SECRETKEY}"'"}}' | jq -R)

# --- 12. Kafka topics for sorbet ---
SORBET_CONFIG=$(kubectl get secret -l app.kubernetes.io/name=cold-sorbet-config-e2e-azure-archive,app.kubernetes.io/instance=${ZENKO_NAME} \
    -o jsonpath='{.items[0].data.config\.json}' | base64 -di)
export KAFKA_DEAD_LETTER_TOPIC=$(echo "${SORBET_CONFIG}" | jq -r '."kafka-dead-letter-topic"')
export KAFKA_OBJECT_TASK_TOPIC=$(echo "${SORBET_CONFIG}" | jq -r '."kafka-object-task-topic"')
export KAFKA_GC_REQUEST_TOPIC=$(echo "${SORBET_CONFIG}" | jq -r '."kafka-gc-request-topic"')
```
For follow-up: everything we retrieve from secrets/k8s here could be retrieved directly from CTST; it just needs creds and ZENKO_NAME. A sketch follows.
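A sketch of what that could look like from CTST, assuming the @kubernetes/client-node package (pre-1.0 API) and a kubeconfig pointing at the cluster (the namespace and secret names are illustrative):

```typescript
import * as k8s from '@kubernetes/client-node';

// Sketch: read a Zenko secret directly from the test code instead of
// exporting env vars from the setup script.
async function readSecretKey(name: string, key: string): Promise<string> {
    const kc = new k8s.KubeConfig();
    kc.loadFromDefault();
    const core = kc.makeApiClient(k8s.CoreV1Api);
    const res = await core.readNamespacedSecret(name, 'default');
    return Buffer.from(res.body.data?.[key] ?? '', 'base64').toString();
}

// e.g., given ZENKO_NAME:
//   await readSecretKey(`${zenkoName}-management-vault-admin-creds.v1`, 'accessKey')
```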
```bash
    _KAFKA_PF_PID=$!
    timeout 10 bash -c "until ss -tlnp 2>/dev/null | grep -q ':${KAFKA_PORT}'; do sleep 0.2; done"
fi
export KAFKA_HOST_PORT="localhost:${KAFKA_PORT}"
```
Another idea for follow-up: instead of port-forwarding here, maybe we can port-forward directly in CTST, or run a command in the k8s cluster (depending on the case).
This would have the same benefit as above: less setup, and tests can autonomously do what they need, as long as they know how to access Zenko running in k8s → fewer variables.
Ah, you mean directly in the code 🤔 I hadn't thought of that; it's probably possible to do. Maybe something like the sketch below.
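A rough sketch of that idea (the service name and ports are assumptions):

```typescript
import { spawn } from 'child_process';

// Sketch: start kubectl port-forward from inside the test process instead of
// in the setup script.
export function portForwardKafka(localPort = 9092): string {
    const pf = spawn('kubectl', ['port-forward', 'svc/example-kafka', `${localPort}:9092`], {
        stdio: 'ignore',
    });
    process.on('exit', () => pf.kill()); // stop forwarding when the tests finish
    return `localhost:${localPort}`;
}
```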
```bash
printf 'CRR_SOURCE_INFO<<EOF\n%s\nEOF\n' "$CRR_SOURCE_INFO" >> "$GITHUB_ENV"
printf 'CRR_DESTINATION_INFO<<EOF\n%s\nEOF\n' "$CRR_DESTINATION_INFO" >> "$GITHUB_ENV"
# CTST-specific vars (only if CTST setup ran)
if [ "${SKIP_CTST:-}" != "1" ]; then
```
Can't we move this inside the existing SKIP_CTST block?
(I'm also wondering why we need these at all, given that we generate a "world" file which should contain everything CTST needs... probably something to clean up further, eventually.)
Actually yeah, since it's already in the world parameters, there's no need to keep it.
```diff
 Then('prometheus should scrap federated metrics from DR sink', { timeout: 180000 }, async () => {
     const prom = new PrometheusDriver({
-        endpoint: `http://${this.parameters.PrometheusService}:9090`,
+        endpoint: 'http://localhost:9090',
```
We should keep this.parameters.PrometheusService here, and just set the parameter to localhost, along the lines of the sketch below.
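Something like this minimal sketch (PROMETHEUS_SERVICE is a hypothetical env variable name):

```typescript
// Keep the world-parameter indirection; just default it for out-of-cluster runs.
const worldParameters = {
    PrometheusService: process.env.PROMETHEUS_SERVICE || 'localhost',
};

// The step definition then stays unchanged:
//   endpoint: `http://${this.parameters.PrometheusService}:9090`
```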
```typescript
import Zenko from 'world/Zenko';

// The mock-sorbet pod uses a distroless image (no shell, rm, find, etc.),
// so we cannot kubectl exec into it. Instead, we spin up a short-lived
```
More precisely, we can kubectl exec (for example kubectl exec -- sorbetctl ...), but indeed we cannot exec to run a shell or shell commands.
Force-pushed from 33fb80e to 5f421ed
Force-pushed from 5f421ed to 093e162
- …ace, causing codespace setup failures (Issue: ZENKO-5209)
- …zenko cluster (Issue: ZENKO-5209)
- …r tests following new method to run cucumber tests out of the zenko cluster (Issue: ZENKO-5209)
Force-pushed from 093e162 to 602bdc4
/approve
I have successfully merged the changeset of this pull request.
The following branches have NOT changed.
Please check the status of the associated issue ZENKO-5209.
Goodbye sylvainsenechal.
The following options are set: approve
Issue: ZENKO-5209

- Cucumber tests are run directly in the GitHub runner; no need for a pod anymore. This is quite nice for Codespaces testing, where I got the opportunity to test it as I was making the PR and fix the few bugs that didn't work.

Future work:
- Refine the setup script to work for all tests without the ctst/zenko flag (see the review discussion above).
- Rework world parameters vs. env variables.

Once these two are done, I believe it will be much easier to work on other tasks, especially everything dealing with the setup in the bash and python files.