docs/accessing-services.asciidoc (+12 -12)

@@ -1,5 +1,5 @@
 [id="{p}-accessing-elastic-services"]
-== How to access Elastic Stack services
+== Accessing Elastic Stack services

 To access the Elastic Stack services, you will need to retrieve:
@@ -21,7 +21,7 @@ All Elastic Stack resources deployed by the ECK Operator are secured by default.
 [id="{p}-authentication"]
 ==== Authentication

-To access Elasticsearch and Kibana, the operator manages a default user named `elastic` with the `superuser` role. Its password is stored in a `Secret` named `<name>-elastic-user`.
+To access Elasticsearch, Kibana or APM Server, the operator manages a default user named `elastic` with the `superuser` role. Its password is stored in a `Secret` named `<name>-elastic-user`.

 [source,sh]
 ----
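
The shell block opened at the end of this hunk is truncated by the diff context. For reference, a typical way to read that `Secret` is sketched below; the secret name `quickstart-es-elastic-user` assumes a cluster named `quickstart`.

[source,sh]
----
# Decode the password of the default "elastic" user
# (assumes an Elasticsearch cluster named "quickstart")
PASSWORD=$(kubectl get secret quickstart-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')
echo "$PASSWORD"
----
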
@@ -33,13 +33,13 @@ To access Elasticsearch and Kibana, the operator manages a default user named `e
 [id="{p}-services"]
 === Services

-You can access Elasticsearch and Kibana by using native Kubernetes services that are not reachable from the public Internet by default.
+You can access Elasticsearch, Kibana or APM Server by using native Kubernetes services that are not reachable from the public Internet by default.

 [float]
 [id="{p}-kubernetes-service"]
 ==== Managing Kubernetes services

-For each resource, `Elasticsearch` or `Kibana`, the operator manages a Kubernetes service named `<name>-[es|kb]-http`, which is of type `ClusterIP` by default. `ClusterIP` exposes the service on a cluster-internal IP and makes the service only reachable from the cluster.
+For each resource, `Elasticsearch`, `Kibana` or `ApmServer`, the operator manages a Kubernetes service named `<name>-[es|kb|apm]-http`, which is of type `ClusterIP` by default. `ClusterIP` exposes the service on a cluster-internal IP and makes the service only reachable from the cluster.
 You can expose services in link:https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types[different ways] by specifying an `http.service.spec.type` in the `spec` of the resource manifest.
@@ ... @@
-On cloud providers which support external load balancers, you can set the `type` field to `LoadBalancer` to provision a load balancer for the `Service`, and populate the column `EXTERNAL-IP` after a short delay. Depending on the cloud provider, it may incur charges.
+On cloud providers which support external load balancers, you can set the `type` field to `LoadBalancer` to provision a load balancer for the `Service`, and populate the column `EXTERNAL-IP` after a short delay. Depending on the cloud provider, it may incur costs.
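
As an illustration of the `http.service.spec.type` setting mentioned above, a minimal manifest fragment requesting a `LoadBalancer` service could look like the following sketch; the resource name, version and apiVersion are placeholders that depend on the ECK release in use.

[source,yaml]
----
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1  # adjust to your ECK release
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.2.0
  http:
    service:
      spec:
        # Request an external load balancer from the cloud provider
        type: LoadBalancer
----
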
@@ ... @@
-This section only covers TLS certificates for the HTTP layer. Those for the transport layer used for internal communication between nodes in a cluster are managed by ECK and are not configurable.
+This section only covers TLS certificates for the HTTP layer. Those for the transport layer used for Elasticsearch internal communication between nodes in a cluster are managed by ECK and are not configurable.

 [float]
 [id="{p}-default-self-signed-certificate"]
 ==== Default self-signed certificate

-By default, the operator manages a self-signed certificate with a custom CA for Elasticsearch and Kibana.
-The CA, the certificate and the private key are each stored in a `Secret`.
+By default, the operator manages a self-signed certificate with a custom CA for Elasticsearch, Kibana and APM Server.
+The CA, the certificate and the private key are each stored in a separate `Secret`.

 [source,sh]
 ----
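
The shell block opened above is cut off by the diff context. A simple, non-authoritative way to see the Secrets holding the CA, certificate and private key, assuming a resource named `quickstart`:

[source,sh]
----
# List the Secrets created by the operator for the "quickstart" resources
kubectl get secrets | grep quickstart
----
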
@@ -144,8 +144,8 @@ You can bring your own certificate to configure TLS to ensure that communication

 Create a Kubernetes secret with:

-- tls.crt: the certificate (or a chain).
-- tls.key: the private key to the first certificate in the certificate chain.
+- `tls.crt`: the certificate (or a chain).
+- `tls.key`: the private key to the first certificate in the certificate chain.

 [source,sh]
 ----
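
The shell block above is likewise truncated. A sketch of the usual two steps, creating the TLS secret and then pointing the resource at it, is shown below; the secret name `my-cert` is a placeholder.

[source,sh]
----
# Create a TLS secret from an existing certificate chain and private key
kubectl create secret tls my-cert --cert tls.crt --key tls.key
----

The resource would then reference that secret, typically through `spec.http.tls.certificate.secretName: my-cert`; verify the exact field path against the ECK version in use.
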
@@ -167,7 +167,7 @@ spec:
 [id="{p}-disable-tls"]
 ==== Disable TLS

-You can explicitly disable TLS for Kibana or APM Server if you want to.
+You can explicitly disable TLS for Kibana or APM Server.

 [source,yaml]
 ----
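
The YAML block opened above is truncated here. A minimal sketch of disabling the self-signed certificate, assuming the `spec.http.tls.selfSignedCertificate.disabled` field of current ECK releases:

[source,yaml]
----
apiVersion: apm.k8s.elastic.co/v1alpha1  # adjust to your ECK release
kind: ApmServer
metadata:
  name: apm-server-quickstart
spec:
  version: 7.2.0
  http:
    tls:
      selfSignedCertificate:
        # Serve plain HTTP instead of self-signed HTTPS
        disabled: true
----
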
@@ -225,7 +225,7 @@ Now you should get this message:
 curl: (51) SSL: no alternative certificate subject name matches target host name '35.198.131.115'
 ----

-Add the external IP of the service to the SANs of the certificate in the same Elasticsearch resource YAML manifest used for creating the cluster and apply it again using `kubectl`.
+Add the external IP of the service to the SANs of the certificate in the same Elasticsearch resource YAML manifest used to create the cluster and apply it again using `kubectl`.
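
A sketch of adding the IP from the error message above as a subject alternative name, assuming the `subjectAltNames` field of current ECK releases (fragment of the Elasticsearch spec):

[source,yaml]
----
spec:
  http:
    tls:
      selfSignedCertificate:
        subjectAltNames:
        # External IP reported by the LoadBalancer service
        - ip: 35.198.131.115
----
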

docs/advanced-node-scheduling.asciidoc (+12 -12)

@@ -1,19 +1,19 @@
 [id="{p}-advanced-node-scheduling"]

-== Advanced Elasticsearch node scheduling
+=== Advanced Elasticsearch node scheduling

 Elastic Cloud on Kubernetes (ECK) offers full control over cluster nodes scheduling by combining Elasticsearch configuration with Kubernetes scheduling options:
 * <<{p}-affinity-options,Pod affinity and anti-affinity>>
+* <<{p}-availability-zone-awareness,Availability zone and rack awareness>>
+* <<{p}-hot-warm-topologies,Hot-warm topologies>>

 These features can be combined together, to deploy a production-grade Elasticsearch cluster.

 [float]
 [id="{p}-define-elasticsearch-nodes-roles"]
-=== Define Elasticsearch nodes roles
+==== Define Elasticsearch nodes roles

 You can configure Elasticsearch nodes with link:https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html[one or multiple roles]. This allows you to describe an Elasticsearch cluster with 3 dedicated master nodes, for example:
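
The manifest this sentence introduces lies outside the visible diff context. A minimal sketch of three dedicated master nodes, assuming the `spec.nodes`/`nodeCount` schema of this ECK generation (later releases use `nodeSets`/`count`):

[source,yaml]
----
spec:
  version: 7.2.0
  nodes:
  - nodeCount: 3
    config:
      # Master-only nodes: no data, no ingest
      node.master: true
      node.data: false
      node.ingest: false
----
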
@@ -43,12 +43,12 @@ spec:

 [float]
 [id="{p}-affinity-options"]
-=== Affinity options
+==== Affinity options

 You can setup various link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity[affinity and anti-affinity options] through the `podTemplate` section of the Elasticsearch resource specification.

 [float]
-==== A single Elasticsearch node per Kubernetes host (default)
+===== A single Elasticsearch node per Kubernetes host (default)

 To avoid scheduling several Elasticsearch nodes from the same cluster on the same host, use a `podAntiAffinity` rule based on the hostname and the cluster name label:
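
The rule itself is in the elided lines; its core is a standard Kubernetes `podAntiAffinity` term keyed on the cluster-name label and the hostname topology key, roughly as sketched here. The label key follows ECK conventions and the cluster name `quickstart` is assumed.

[source,yaml]
----
podTemplate:
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                elasticsearch.k8s.elastic.co/cluster-name: quickstart
            # Spread Elasticsearch Pods across Kubernetes hosts
            topologyKey: kubernetes.io/hostname
----
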
@@ -120,7 +120,7 @@ spec:
 ----

 [float]
-==== Local Persistent Volume constraints
+===== Local Persistent Volume constraints

 By default, volumes can be bound to a pod before the pod gets scheduled to a particular node. This can be a problem if the PersistentVolume can only be accessed from a particular host or set of hosts. Local persistent volumes are a good example: they are accessible from a single host. If the pod gets scheduled to a different host based on any affinity or anti-affinity rule, the volume may not be available.
@@ ... @@
 To restrict the scheduling to a particular set of nodes based on labels, use a link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector[NodeSelector].
 The following example schedules Elasticsearch pods on Kubernetes nodes tagged with both labels `diskType: ssd` and `environment: production`.
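
The example referenced here sits outside the diff context; its essence is a plain Kubernetes `nodeSelector` in the pod template, as in this sketch.

[source,yaml]
----
podTemplate:
  spec:
    nodeSelector:
      # Only schedule on nodes carrying both labels
      diskType: ssd
      environment: production
----
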
@@ -197,7 +197,7 @@ This example restricts Elasticsearch nodes to be scheduled on Kubernetes hosts t

 [float]
 [id="{p}-availability-zone-awareness"]
-=== Availability zone awareness
+==== Availability zone awareness

 By combining link:https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html#allocation-awareness[Elasticsearch shard allocation awareness] with link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature[Kubernetes node affinity], you can setup an availability zone-aware Elasticsearch cluster:
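
A compressed sketch of the two halves of such a setup: an Elasticsearch node attribute fed into allocation awareness, plus node affinity pinning the Pods to a zone. The zone name and the `failure-domain.beta.kubernetes.io/zone` label are assumptions tied to Kubernetes clusters of this era (newer clusters use `topology.kubernetes.io/zone`).

[source,yaml]
----
config:
  # Expose the zone as a node attribute and use it for shard allocation awareness
  node.attr.zone: europe-west3-a
  cluster.routing.allocation.awareness.attributes: zone
podTemplate:
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
              - europe-west3-a
----
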
@@ -265,7 +265,7 @@ This example relies on:

 [float]
 [id="{p}-hot-warm-topologies"]
-=== Hot-warm topologies
+==== Hot-warm topologies

 By combining link:https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html#allocation-awareness[Elasticsearch shard allocation awareness] with link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature[Kubernetes node affinity], you can setup an Elasticsearch cluster with hot-warm topology:
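
Along the same lines, a hot-warm sketch marks each node group with a `data` attribute and pins it to a matching hardware profile; the `beta.kubernetes.io/instance-type` label and the instance type values are placeholders.

[source,yaml]
----
nodes:
- nodeCount: 3
  config:
    # Hot tier on fast hardware
    node.attr.data: hot
  podTemplate:
    spec:
      nodeSelector:
        beta.kubernetes.io/instance-type: highio
- nodeCount: 3
  config:
    # Warm tier on denser, cheaper hardware
    node.attr.data: warm
  podTemplate:
    spec:
      nodeSelector:
        beta.kubernetes.io/instance-type: highstorage
----
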

docs/apm.asciidoc (+12 -11)

@@ -19,7 +19,7 @@ NOTE: The current Docker image of the APM Server must run as `root` or with the
 [id="{p}-apm-eck-managed-es"]
 === Use an Elasticsearch cluster managed by ECK

-When both the APM Server and Elasticsearch are managed by ECK it allows a smooth and secured integration between the two. The output configuration of the APM Server is setup automatically to establish a trust relationship with Elasticsearch.
+Managing both APM Server and Elasticsearch by ECK allows a smooth and secured integration between the two. The output configuration of the APM Server is setup automatically to establish a trust relationship with Elasticsearch.

 . To deploy an APM Server and connect it to the cluster `quickstart` created in the link:k8s-quickstart.html[quickstart], apply the following specification:
 +
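
The specification itself sits in the elided diff lines. For orientation, a minimal ApmServer manifest of this generation might look like the sketch below; the apiVersion and the `elasticsearchRef` field are assumptions to check against the ECK release in use.

[source,sh]
----
cat <<EOF | kubectl apply -f -
apiVersion: apm.k8s.elastic.co/v1alpha1
kind: ApmServer
metadata:
  name: apm-server-quickstart
spec:
  version: 7.2.0
  nodeCount: 1
  # Connect the APM Server to the ECK-managed cluster named "quickstart"
  elasticsearchRef:
    name: quickstart
EOF
----
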
@@ -40,28 +40,29 @@ EOF
 ----
 +
 NOTE: Deploying the APM Server and Elasticsearch in two different namespaces is currently not supported.
-+
-. Monitor APM Server deployment
+
+. Monitor APM Server deployment.
 +
 You can retrieve details about the APM Server instance:
-
++
 [source,sh]
 ----
 kubectl get apmservers
 ----
-
++
 [source,sh]
 ----
 NAME                    HEALTH  NODES  VERSION  AGE
 apm-server-quickstart   green   1      7.2.0    8m
 ----
++
 And you can list all the Pods belonging to a given deployment:
-
++
 [source,sh]
 ----
 kubectl get pods --selector='apm.k8s.elastic.co/name=apm-server-quickstart'
 ----
-
++
 [source,sh]
 ----
 NAME                                      READY   STATUS    RESTARTS   AGE
@@ -110,7 +111,7 @@ The APM Server keystore can be used to store sensitive settings in the APM Serve
-. In the specification of the APM Server add a reference to the previously created secret within a `spec.secureSettings` section. Then reference the key in the APM Server configuration as it is described in the https://www.elastic.co/guide/en/apm/server/current/keystore.html[following documentation].
+. In the specification of the APM Server add a reference to the previously created secret within a `spec.secureSettings` section. Then reference the key in the APM Server configuration as it is described in the https://www.elastic.co/guide/en/apm/server/current/keystore.html[Secrets keystore for secure settings].
 +
 [source,yaml]
 ----
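
The YAML block opened at the end of this hunk is truncated. The gist of the `spec.secureSettings` reference is sketched here; whether it takes a single `secretName` or a list of secret references varies between ECK releases, and the secret name is a placeholder.

[source,yaml]
----
spec:
  version: 7.2.0
  secureSettings:
    # Kubernetes Secret holding the keys to load into the APM Server keystore
    secretName: apm-secret-settings
----
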
@@ -136,7 +137,7 @@ spec:

 Now that you know how to use the APM keystore and customize the server configuration, you can manually configure a secured connection to an existing Elasticsearch cluster.

-. Create a secret with the Elasticsearch CA
+. Create a secret with the Elasticsearch CA.
 +
 First, you need to store the certificate authority of the Elasticsearch cluster in a secret:
 +
@@ -145,7 +146,7 @@ First, you need to store the certificate authority of the Elasticsearch cluster
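
The hunk announced by that last header is cut off entirely. Creating the CA secret generally comes down to a `kubectl create secret` call like this sketch, where the secret name, key name and local file path are all placeholders.

[source,sh]
----
# Store the Elasticsearch CA certificate in a Secret the APM Server can mount
kubectl create secret generic elasticsearch-ca --from-file=tls.crt=/path/to/elasticsearch-ca.crt
----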