Deploy metadata in end to end tests #2186
New file (+47 lines):

```sh
#!/bin/sh

set -exu

. "$(dirname $0)/common.sh"

# create a separate namespace for metadata
kubectl create namespace metadata

# clone the metadata repository
git init metadata
cd metadata
git fetch --depth 1 --no-tags https://${GIT_ACCESS_TOKEN}@github.com/scality/metadata.git
git checkout FETCH_HEAD

# install metadata chart in a separate namespace
cd helm
helm dependency update cloudserver/
helm install -n metadata \
  --set metadata.persistentVolume.storageClass='' \
  --set metadata.sproxyd.persistentVolume.storageClass='' \
  s3c cloudserver/

# wait for the repds to be created
kubectl -n metadata rollout status --watch --timeout=300s statefulset/s3c-metadata-repd
# wait for all repd pods to start serving admin API ports
wait_for_all_pods_behind_services metadata-repd metadata "91*" 60

# the current chart uses an old version of bucketd that has issues reconnecting
# to repd when bucketd is started first, so restart bucketd after repd is ready
kubectl -n metadata rollout restart deployment/s3c-metadata-bucketd
# wait for the bucketd pods to be created
kubectl -n metadata rollout status --watch --timeout=300s deploy/s3c-metadata-bucketd
# wait for all bucketd pods to start serving port 9000
wait_for_all_pods_behind_services metadata-bucketd metadata 9000 60

# manually add "s3c.local" to the rest endpoints list as it's not configurable in the chart
```
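The `wait_for_all_pods_behind_services` helper comes from the sourced `common.sh`, which is not part of this diff. A hypothetical sketch of what such a helper could look like (the pod-probing details are assumed, not taken from the repository):

```shell
# Hypothetical sketch of wait_for_all_pods_behind_services (the real helper
# lives in common.sh and is not shown in this diff). It polls until every
# pod behind the service answers on the given port, or times out.
wait_for_all_pods_behind_services() {
    service="$1"; namespace="$2"; port="$3"; timeout="$4"
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        # list the pod IPs currently registered behind the service
        ips=$(kubectl -n "$namespace" get endpoints "$service" \
            -o jsonpath='{.subsets[*].addresses[*].ip}')
        all_up=1
        for ip in $ips; do
            # probe the port; nc -z fails while the pod is not serving yet
            nc -z -w 1 "$ip" "$port" || { all_up=0; break; }
        done
        # succeed only once there is at least one endpoint and all answered
        [ -n "$ips" ] && [ "$all_up" -eq 1 ] && return 0
        sleep 5
        elapsed=$((elapsed + 5))
    done
    echo "timed out waiting for $service" >&2
    return 1
}
```

With `timeout=0` the loop never runs and the function returns the failure code immediately, which makes the timeout path easy to exercise without a cluster.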
**Contributor:** Why do we need this? I did not need such a patch on my Artesca cluster (in namespace …).

**Author:** Cloudserver responds with an error when the host used is not declared in its config file. I've tried with the k8s service endpoint and it doesn't work.

**Contributor:** Maybe this comes from the namespace ("metadata") you deploy to, which may not be taken into account?

**Author:** No, even with the default namespace I got the same behaviour, which, to me, is the expected behaviour. The normal cloudserver we deploy in Zenko has all the k8s endpoints in its config.
```sh
current_config=$(kubectl get configmap/s3c-cloudserver-config-json -n metadata -o jsonpath='{.data.config\.json}')
updated_config=$(echo "$current_config" | jq '.restEndpoints["s3c.local"] = "us-east-1"')
kubectl patch configmap/s3c-cloudserver-config-json -n metadata --type='merge' -p="$(jq -n --arg v "$updated_config" '{"data": {"config.json": $v}}')"

# restart cloudserver to take the new configmap changes into account
kubectl -n metadata rollout restart deployment/s3c-cloudserver
# wait for the cloudserver pods to be created
kubectl -n metadata rollout status --watch --timeout=300s deployment/s3c-cloudserver
# wait for the cloudserver pods to start serving port 8000
wait_for_all_pods_behind_services cloudserver metadata 8000 60
```
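The jq part of the configmap patch can be checked locally without a cluster. This sketch replays only the two jq transformations on a made-up `config.json` sample (the `kubectl get`/`kubectl patch` calls are left out, and the sample content is an assumption):

```shell
# sample of what `kubectl get configmap ... -o jsonpath=...` might return;
# the real cloudserver config.json has many more keys
current_config='{"restEndpoints":{"localhost":"us-east-1"}}'

# add the s3c.local endpoint, exactly as in the script above
updated_config=$(echo "$current_config" | jq '.restEndpoints["s3c.local"] = "us-east-1"')

# build the merge-patch payload passed to `kubectl patch -p=...`;
# the inner config is embedded as an escaped JSON string under data."config.json"
patch=$(jq -n --arg v "$updated_config" '{"data": {"config.json": $v}}')

echo "$patch"
```

Note the double encoding: `config.json` is itself a JSON document, but inside the configmap it is just a string value, hence the `--arg` (string) binding rather than `--argjson`.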