
[CLOUD-1167] - Remove deploymentConfig label from all templates #334

Open · wants to merge 1 commit into master
Conversation

@spolti (Contributor) commented Sep 1, 2017

No description provided.

@@ -102,13 +102,13 @@
  }
],
"selector": {
-  "deploymentConfig": "${APPLICATION_NAME}-amq"
+  "application": "${APPLICATION_NAME}-amq"
Contributor:

I think these should just be ${APPLICATION_NAME}, without the -amq. If we want amq in the name, we should change the default.

Contributor (Author):

@rcernich OK, I kept "-amq" because it was already there.
I'll update this, test it, and update the PR.
Thanks.

Contributor:

Hmm, the "-amq" suffix needs to stay there or an additional label like "component: amq" needs to be added, otherwise there's going to be overlap between different components that are part of the same application (e.g. EAP and mysql in the eap-mysql template).

That said, I believe I filed an issue once saying that we should use a multidimensional label system (multiple labels instead of a single label per resource), as that makes it easier for the ops team to select resources across different dimensions (e.g. all pods belonging to the same application, regardless of component; or selecting all database pods, regardless of application, etc.).
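As a sketch of what such a multidimensional scheme could look like (the `application` and `component` label names follow the convention proposed later in this thread; the resource name and values here are illustrative, not taken from the templates):

```json
{
  "metadata": {
    "name": "eap-app-amq",
    "labels": {
      "application": "eap-app",
      "component": "amq"
    }
  }
}
```

With labels like these, ops could select along either dimension, e.g. all resources of one application via `-l application=eap-app`, or all AMQ pods regardless of application via `-l component=amq`.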

Contributor:

perhaps, but we use application name for everything, which should prevent two applications from using the same name as the resources can't be instantiated. That said, I'm assuming that we're consistent in how we apply this. A simple test case would be to instantiate one of the eap-amq templates, then instantiate an amq template twice using "eap-app" and "eap-app-amq" for the application name and see what happens. I suspect the deployment configs will fail to be created because they will conflict with what's already running in eap (either the eap deployment or the amq deployment).

I agree that using multiple labels for selectors would be an improvement, but maybe we should save that for 2.0. I think that change might be a bit too much of a change in structure at this point in time.

@rcernich (Contributor) commented:
Hey @spolti, it looks like these changes are out of date and can't be merged. Also, we need to address the eap-amq templates (and probably the decision/process server-amq templates too). The label application= identifies the application (i.e. all these resources were created for, say, eap-app). Because of this, we need to use different identifiers for the service selectors, perhaps using multiple labels as @luksa recommended (e.g. application=, component=amq; or application=, component=eap).
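A sketch of what the AMQ service selector might look like under that suggestion (assuming the proposed `component` label is adopted; this is illustrative, not the final template content):

```json
{
  "selector": {
    "application": "${APPLICATION_NAME}",
    "component": "amq"
  }
}
```

The EAP service's selector would differ only in the `component` value, so both services could share the same `application` value without their selectors overlapping.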

@spolti force-pushed the CLOUD-1167 branch 2 times, most recently from f874c8e to a30c0ef on October 4, 2017 15:52
@rcernich (Contributor) left a comment:

There are a number of other templates that need the component label applied, as well as areas where the application label is not consistent across resources created by the template.

@@ -551,7 +547,7 @@
"metadata": {
  "name": "${APPLICATION_NAME}-drainer",
  "labels": {
-    "application": "${APPLICATION_NAME}"
+    "application": "${APPLICATION_NAME}-drainer"
Contributor:

@spolti, we need to change this back to just ${APPLICATION_NAME}. The application label is intended to be used to identify all resources created by this template so they can be easily accessed via -l application=xyz.

@@ -368,7 +364,7 @@
"metadata": {
  "name": "${APPLICATION_NAME}-drainer",
  "labels": {
-    "application": "${APPLICATION_NAME}"
+    "application": "${APPLICATION_NAME}-drainer"
Contributor:

See the comment above about the application label.

@@ -377,13 +377,13 @@
  }
],
"selector": {
-  "deploymentConfig": "${APPLICATION_NAME}-mysql"
+  "application": "${APPLICATION_NAME}-mysql"
Contributor:

We discussed adding a component label for identifying other deployments in the template.

"template": {
"metadata": {
"name": "${APPLICATION_NAME}-drainer",
"labels": {
"deploymentConfig": "${APPLICATION_NAME}-drainer",
"application": "${APPLICATION_NAME}"
"application": "${APPLICATION_NAME}-drainer"

@rcernich @spolti If I'm following the logic for listing resources created by a single template, this should also be "${APPLICATION_NAME}", right? And along with it probably a selector like component=drainer, so that two deployment configs don't create selectors with the same labels...

Contributor:

Correct.
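Putting the two points together, the drainer deployment config could keep `application` template-wide and distinguish itself by `component` (a sketch assuming the `component: drainer` label suggested above; not the merged template content):

```json
{
  "metadata": {
    "name": "${APPLICATION_NAME}-drainer",
    "labels": {
      "application": "${APPLICATION_NAME}",
      "component": "drainer"
    }
  },
  "spec": {
    "selector": {
      "application": "${APPLICATION_NAME}",
      "component": "drainer"
    }
  }
}
```

Here `-l application=${APPLICATION_NAME}` still finds every resource the template created, while the two-label selector keeps the drainer's pods separate from the main deployment's.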

4 participants