Support N MongoDB shards #2219
Conversation
Hello williamlardier, my role is to assist you with the merge of this pull request.
Status report is not available.
Waiting for approval: the following approvals are needed before I can proceed with the merge.
Force-pushed from 365e3d4 to b1d1cdc.
{{- if gt $.Values.shards 1 }}
shard: {{ $i | quote }}
{{- end }}
{{ toYaml $.Values.shardsvr.persistence.selector.matchLabels | indent 12 }}
- Why change this line? We do not use this variable.
- Instead of patching the chart, could we not set this shardsvr.persistence.selector variable to the appropriate value to get the same result?
The variable is used; for instance, we have this in the sts after deployment:
selector:
matchLabels:
app.kubernetes.io/name: mongodb
app.kubernetes.io/part-of: zenko
shard: "1"
But it's actually set from the build.sh file:
--set 'shardsvr.persistence.selector.matchLabels.app\.kubernetes\.io/part-of=zenko' \
This means, with the change, that we need to access the .matchLabels and not the parent variable. This is also something we use as a label selector in the artesca installer, so we cannot just remove it.
So in short:
- we pass this info when rendering the YAMLs with Helm
- the patch is necessary because we need to add these labels for artesca
- now that we also have the shard, we cannot just pass a static value; we must compute it dynamically in the YAML
- this change forces us to access the .matchLabels (see the sketch after this list)
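To make this concrete, here is a minimal sketch of how the selector block could combine the static matchLabels passed via --set with the dynamic shard label; the indentation, surrounding context, and the use of $i as the shard index are simplified assumptions, not the chart's exact code:

# Sketch only: merge the static labels from values with the per-shard label.
selector:
  matchLabels:
{{ toYaml $.Values.shardsvr.persistence.selector.matchLabels | indent 4 }}
    {{- if gt $.Values.shards 1 }}
    shard: {{ $i | quote }}
    {{- end }}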
I'll check if we can add this to the values.yaml file.
We don't use the values.yaml file (the default one that is part of the chart); we specify the values as arguments to the helm call.
My comment was updated in the meantime, indeed, but we must still specify the shard number, so we need to edit this part, and we also want to keep the selectors we already had. So unless you think this is wrong, the change seems fine to me?
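For reference, the values-file alternative discussed above might look like the sketch below; the keys simply mirror the --set flag shown earlier, and since the shard label has to be computed per shard at render time, a static values entry like this would not be enough on its own:

# Hypothetical values fragment; mirrors the existing --set flag only.
shardsvr:
  persistence:
    selector:
      matchLabels:
        app.kubernetes.io/part-of: zenko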
solution-base/mongodb/charts/mongodb-sharded/templates/shard/shard-data-podmonitor.yaml (outdated review thread, resolved)
Force-pushed from d3be7fc to d3f8a9d.
sed -i "s/MONGODB_SHARDSERVER_RAM_LIMIT/${MONGODB_SHARDSERVER_RAM_LIMIT}/g" $DIR/_build/root/deploy/*
sed -i "s/MONGODB_SHARDSERVER_RAM_REQUEST/${MONGODB_SHARDSERVER_RAM_REQUEST}/g" $DIR/_build/root/deploy/*
sed -i "s/MONGODB_MONGOS_RAM_REQUEST/${MONGODB_MONGOS_RAM_REQUEST}/g" $DIR/_build/root/deploy/*
Note for reviewers: this was removed because we had this logic twice in the function, maybe a rebase issue at some point. See above.
Force-pushed from f1098b9 to fa66847.
Force-pushed from 9062ee2 to 440c8b5.
Added a commit to remove the matchLabel for shard 0, so that we do not require any downtime during shard expansion.
Force-pushed from 8944786 to ed4be0c.
Incorrect fix version
Considering where you are trying to merge, I ignored possible hotfix versions and I expected to find:
Please check the fix version of the associated issue.
We add support for multiple shards. The Kustomization file is thus removed, and we instead generate it based on the current configuration. Issue: ZENKO-4641
Added to Zenko as an annotation, but now compatible with multiple shards. Issue: ZENKO-4641
This implementation tries to use a single MongoDB mongod process per instance so that we maximize RAM usage and performance. Issue: ZENKO-4641
Selectors should now be updated to consider the current shard. Issue: ZENKO-4641
Issue: ZENKO-4641
- Mutualize HTTP tests in a single CI run. - All deployments must now be sharded, hence testing HTTP endpoints separately is not enough. - We also mutualize the runner to enable HTTPS after the initial deployments. This reduces costs and ensures a basic set of tests is executed when using 2 shards. Issue: ZENKO-4641
We used to re-run Vault functional tests in Zenko. This is no longer needed, as it is now covered by the CTST test suite. Issue: ZENKO-4641
- We want to support one P-S-S topology every 3 servers. - With 6+ node support added, we now need to ensure the replicas of configsvr & shardsvr do not exceed 3. Issue: ZENKO-4641
Will be useful for CI testing only. Issue: ZENKO-4641
The previous alert did not properly account for multiple shards, leading to alerts with multiple shards even in a nominal state. Issue: ZENKO-4641
Issue: ZENKO-4641
To avoid having to delete the STS for existing deployments, we must avoid changing the matchSelectors for shard 0. Removing only the shard selector is not enough, as we risk selecting volumes from other shards. Instead, we choose to modify the app name to include, only for new shards, the new shard ID. Shard 0 won't be updated, and the new shards will have their own labels. Issue: ZENKO-4641
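As a rough illustration of the selector strategy described in the commit above, the per-shard labels could be rendered along the following lines; this is a sketch only, and the label values, key names, and the use of $i as the shard index are assumptions rather than the actual chart code:

# Sketch: shard 0 keeps its original selector so the existing STS is untouched,
# while new shards (assumed index $i > 0) get a shard-specific app name.
matchLabels:
  {{- if eq (int $i) 0 }}
  app.kubernetes.io/name: mongodb
  {{- else }}
  app.kubernetes.io/name: mongodb-shard{{ $i }}
  {{- end }}
  app.kubernetes.io/part-of: zenko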
Force-pushed from ed4be0c to 1f703a1.
/approve
I have successfully merged the changeset of this pull request.
The following branches have NOT changed:
Please check the status of the associated issue ZENKO-4641. Goodbye williamlardier. The following options are set: approve
Here, we only support deployment of multiple shards. Upgrades are managed by upper layers for now.
Issue: ZENKO-4641