Commit 8280d41

Backport the doc directory from master to 0.9 (#1424)
* Backport doc changes from master to 0.9
* Add relnotes
* Remove extra spaces
* Replace master with 0.9
1 parent 09e2417 commit 8280d41

9 files changed: +177 -138 lines

docs/accessing-services.asciidoc (+12 -12)
@@ -1,5 +1,5 @@
 [id="{p}-accessing-elastic-services"]
-== How to access Elastic Stack services
+== Accessing Elastic Stack services
 
 To access the Elastic Stack services, you will need to retrieve:
 
@@ -21,7 +21,7 @@ All Elastic Stack resources deployed by the ECK Operator are secured by default.
 [id="{p}-authentication"]
 ==== Authentication
 
-To access Elasticsearch and Kibana, the operator manages a default user named `elastic` with the `superuser` role. Its password is stored in a `Secret` named `<name>-elastic-user`.
+To access Elasticsearch, Kibana or APM Server, the operator manages a default user named `elastic` with the `superuser` role. Its password is stored in a `Secret` named `<name>-elastic-user`.
 
 [source,sh]
 ----
@@ -33,13 +33,13 @@ To access Elasticsearch and Kibana, the operator manages a default user named `e
 [id="{p}-services"]
 === Services
 
-You can access Elasticsearch and Kibana by using native Kubernetes services that are not reachable from the public Internet by default.
+You can access Elasticsearch, Kibana or APM Server by using native Kubernetes services that are not reachable from the public Internet by default.
 
 [float]
 [id="{p}-kubernetes-service"]
 ==== Managing Kubernetes services
 
-For each resource, `Elasticsearch` or `Kibana`, the operator manages a Kubernetes service named `<name>-[es|kb]-http`, which is of type `ClusterIP` by default. `ClusterIP` exposes the service on a cluster-internal IP and makes the service only reachable from the cluster.
+For each resource, `Elasticsearch`, `Kibana` or `ApmServer`, the operator manages a Kubernetes service named `<name>-[es|kb|apm]-http`, which is of type `ClusterIP` by default. `ClusterIP` exposes the service on a cluster-internal IP and makes the service only reachable from the cluster.
 
 [source,sh]
 ----
@@ -56,7 +56,7 @@ hulk-kb-http ClusterIP 10.19.247.151 <none> 5601:31380/T
 ==== Allowing public access
 
 You can expose services in link:https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types[different ways] by specifying an `http.service.spec.type` in the `spec` of the resource manifest.
-On cloud providers which support external load balancers, you can set the `type` field to `LoadBalancer` to provision a load balancer for the `Service`, and populate the column `EXTERNAL-IP` after a short delay. Depending on the cloud provider, it may incur charges.
+On cloud providers which support external load balancers, you can set the `type` field to `LoadBalancer` to provision a load balancer for the `Service`, and populate the column `EXTERNAL-IP` after a short delay. Depending on the cloud provider, it may incur costs.
 
 [source,yaml]
 ----
@@ -87,14 +87,14 @@ hulk-kb-http LoadBalancer 10.19.247.151 35.242.197.228 5601:31380/T
 [id="{p}-tls-certificates"]
 === TLS Certificates
 
-This section only covers TLS certificates for the HTTP layer. Those for the transport layer used for internal communication between nodes in a cluster are managed by ECK and are not configurable.
+This section only covers TLS certificates for the HTTP layer. Those for the transport layer used for Elasticsearch internal communication between nodes in a cluster are managed by ECK and are not configurable.
 
 [float]
 [id="{p}-default-self-signed-certificate"]
 ==== Default self-signed certificate
 
-By default, the operator manages a self-signed certificate with a custom CA for Elasticsearch and Kibana.
-The CA, the certificate and the private key are each stored in a `Secret`.
+By default, the operator manages a self-signed certificate with a custom CA for Elasticsearch, Kibana and APM Server.
+The CA, the certificate and the private key are each stored in a separate `Secret`.
 
 [source,sh]
 ----
@@ -144,8 +144,8 @@ You can bring your own certificate to configure TLS to ensure that communication
 
 Create a Kubernetes secret with:
 
-- tls.crt: the certificate (or a chain).
-- tls.key: the private key to the first certificate in the certificate chain.
+- `tls.crt`: the certificate (or a chain).
+- `tls.key`: the private key to the first certificate in the certificate chain.
 
 [source,sh]
 ----
@@ -167,7 +167,7 @@ spec:
 [id="{p}-disable-tls"]
 ==== Disable TLS
 
-You can explicitly disable TLS for Kibana or APM Server if you want to.
+You can explicitly disable TLS for Kibana or APM Server.
 
 [source,yaml]
 ----
@@ -225,7 +225,7 @@ Now you should get this message:
 curl: (51) SSL: no alternative certificate subject name matches target host name '35.198.131.115'
 ----
 
-Add the external IP of the service to the SANs of the certificate in the same Elasticsearch resource YAML manifest used for creating the cluster and apply it again using `kubectl`.
+Add the external IP of the service to the SANs of the certificate in the same Elasticsearch resource YAML manifest used to create the cluster and apply it again using `kubectl`.
 
 [source,yaml]
 ----
docs/advanced-node-scheduling.asciidoc (+12 -12)
@@ -1,19 +1,19 @@
 [id="{p}-advanced-node-scheduling"]
 
-== Advanced Elasticsearch node scheduling
+=== Advanced Elasticsearch node scheduling
 
 Elastic Cloud on Kubernetes (ECK) offers full control over cluster nodes scheduling by combining Elasticsearch configuration with Kubernetes scheduling options:
 
-* <<{p}-define-elasticsearch-nodes-roles,define Elasticsearch nodes roles>>
-* <<{p}-affinity-options,pod affinity and anti-affinity>>
-* <<{p}-availability-zone-awareness,availability zone and rack awareness>>
-* <<{p}-hot-warm-topologies,hot-warm topologies>>
+* <<{p}-define-elasticsearch-nodes-roles,Define Elasticsearch nodes roles>>
+* <<{p}-affinity-options,Pod affinity and anti-affinity>>
+* <<{p}-availability-zone-awareness,Availability zone and rack awareness>>
+* <<{p}-hot-warm-topologies,Hot-warm topologies>>
 
 These features can be combined together, to deploy a production-grade Elasticsearch cluster.
 
 [float]
 [id="{p}-define-elasticsearch-nodes-roles"]
-=== Define Elasticsearch nodes roles
+==== Define Elasticsearch nodes roles
 
 You can configure Elasticsearch nodes with link:https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html[one or multiple roles]. This allows you to describe an Elasticsearch cluster with 3 dedicated master nodes, for example:
 
@@ -43,12 +43,12 @@ spec:
 
 [float]
 [id="{p}-affinity-options"]
-=== Affinity options
+==== Affinity options
 
 You can setup various link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity[affinity and anti-affinity options] through the `podTemplate` section of the Elasticsearch resource specification.
 
 [float]
-==== A single Elasticsearch node per Kubernetes host (default)
+===== A single Elasticsearch node per Kubernetes host (default)
 
 To avoid scheduling several Elasticsearch nodes from the same cluster on the same host, use a `podAntiAffinity` rule based on the hostname and the cluster name label:
 
@@ -120,7 +120,7 @@ spec:
 ----
 
 [float]
-==== Local Persistent Volume constraints
+===== Local Persistent Volume constraints
 
 By default, volumes can be bound to a pod before the pod gets scheduled to a particular node. This can be a problem if the PersistentVolume can only be accessed from a particular host or set of hosts. Local persistent volumes are a good example: they are accessible from a single host. If the pod gets scheduled to a different host based on any affinity or anti-affinity rule, the volume may not be available.
 
@@ -137,7 +137,7 @@ volumeBindingMode: WaitForFirstConsumer
 ----
 
 [float]
-==== Node affinity
+===== Node affinity
 
 To restrict the scheduling to a particular set of nodes based on labels, use a link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector[NodeSelector].
 The following example schedules Elasticsearch pods on Kubernetes nodes tagged with both labels `diskType: ssd` and `environment: production`.
@@ -197,7 +197,7 @@ This example restricts Elasticsearch nodes to be scheduled on Kubernetes hosts t
 
 [float]
 [id="{p}-availability-zone-awareness"]
-=== Availability zone awareness
+==== Availability zone awareness
 
 By combining link:https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html#allocation-awareness[Elasticsearch shard allocation awareness] with link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature[Kubernetes node affinity], you can setup an availability zone-aware Elasticsearch cluster:
 
@@ -265,7 +265,7 @@ This example relies on:
 
 [float]
 [id="{p}-hot-warm-topologies"]
-=== Hot-warm topologies
+==== Hot-warm topologies
 
 By combining link:https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html#allocation-awareness[Elasticsearch shard allocation awareness] with link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature[Kubernetes node affinity], you can setup an Elasticsearch cluster with hot-warm topology:
 
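The `podAntiAffinity` rule mentioned in this file ("based on the hostname and the cluster name label") can be sketched as an Elasticsearch manifest fragment. This is illustrative, not part of the commit: the cluster name `quickstart` and the strict `requiredDuringSchedulingIgnoredDuringExecution` variant are assumptions, and the label key follows the operator's `elasticsearch.k8s.elastic.co/cluster-name` convention.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.2.0
  nodes:
  - nodeCount: 3
    podTemplate:
      spec:
        affinity:
          podAntiAffinity:
            # Never co-locate two Elasticsearch nodes of this cluster
            # on the same Kubernetes host (match on the cluster name
            # label, spread across hostnames).
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  elasticsearch.k8s.elastic.co/cluster-name: quickstart
              topologyKey: kubernetes.io/hostname
```

A `preferredDuringSchedulingIgnoredDuringExecution` rule would express the same intent as a soft preference instead of a hard constraint.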
docs/apm.asciidoc (+12 -11)
@@ -19,7 +19,7 @@ NOTE: The current Docker image of the APM Server must run as `root` or with the
 [id="{p}-apm-eck-managed-es"]
 === Use an Elasticsearch cluster managed by ECK
 
-When both the APM Server and Elasticsearch are managed by ECK it allows a smooth and secured integration between the two. The output configuration of the APM Server is setup automatically to establish a trust relationship with Elasticsearch.
+Managing both APM Server and Elasticsearch by ECK allows a smooth and secured integration between the two. The output configuration of the APM Server is setup automatically to establish a trust relationship with Elasticsearch.
 
 . To deploy an APM Server and connect it to the cluster `quickstart` created in the link:k8s-quickstart.html[quickstart], apply the following specification:
 +
@@ -40,28 +40,29 @@ EOF
 ----
 +
 NOTE: Deploying the APM Server and Elasticsearch in two different namespaces is currently not supported.
-+
-. Monitor APM Server deployment
+
+. Monitor APM Server deployment.
 +
 You can retrieve details about the APM Server instance:
-
++
 [source,sh]
 ----
 kubectl get apmservers
 ----
-
++
 [source,sh]
 ----
 NAME                    HEALTH  NODES  VERSION  AGE
 apm-server-quickstart   green   1      7.2.0    8m
 ----
++
 And you can list all the Pods belonging to a given deployment:
-
++
 [source,sh]
 ----
 kubectl get pods --selector='apm.k8s.elastic.co/name=apm-server-quickstart'
 ----
-
++
 [source,sh]
 ----
 NAME                             READY   STATUS    RESTARTS   AGE
@@ -110,7 +111,7 @@ The APM Server keystore can be used to store sensitive settings in the APM Serve
 kubectl create secret generic apm-secret-settings --from-literal=ES_PASSWORD=asecretpassword
 ----
 
-. In the specification of the APM Server add a reference to the previously created secret within a `spec.secureSettings` section. Then reference the key in the APM Server configuration as it is described in the https://www.elastic.co/guide/en/apm/server/current/keystore.html[following documentation].
+. In the specification of the APM Server add a reference to the previously created secret within a `spec.secureSettings` section. Then reference the key in the APM Server configuration as it is described in the https://www.elastic.co/guide/en/apm/server/current/keystore.html[Secrets keystore for secure settings].
 +
 [source,yaml]
 ----
@@ -136,7 +137,7 @@ spec:
 
 Now that you know how to use the APM keystore and customize the server configuration, you can manually configure a secured connection to an existing Elasticsearch cluster.
 
-. Create a secret with the Elasticsearch CA
+. Create a secret with the Elasticsearch CA.
 +
 First, you need to store the certificate authority of the Elasticsearch cluster in a secret:
 +
@@ -145,7 +146,7 @@ First, you need to store the certificate authority of the Elasticsearch cluster
 kubectl create secret generic es-ca --from-file=tls.crt=elasticsearch-ca.crt
 ----
 +
-Note: the file `elasticsearch-ca.crt` must contain the CA certificate of the Elasticsearch cluster you want to use with the APM Server.
+NOTE: the file `elasticsearch-ca.crt` must contain the CA certificate of the Elasticsearch cluster you want to use with the APM Server.
 
 . You can then mount this secret using the Pod template, and reference the file in the `config` of the APM Server.
 +
@@ -231,4 +232,4 @@ This token is stored in a secret named `{APM-server-name}-apm-token` and can be
 kubectl get secret/apm-server-quickstart-apm-token -o go-template='{{index .data "secret-token" | base64decode}}'
 ----
 
-For more information about the APM Server, see https://www.elastic.co/guide/en/apm/server/current/index.html[APM Server Reference].
+For more information, see https://www.elastic.co/guide/en/apm/server/current/index.html[APM Server Reference].

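The pieces covered in this file's diff (the `quickstart` deployment, the `spec.secureSettings` reference, and the automatic trust relationship with an ECK-managed cluster) can be combined into one sketch. This is illustrative, not from the commit; the `apiVersion`, `nodeCount`, and single-secret `secureSettings.secretName` layout reflect the 0.9-era API and may differ in other ECK versions.

```yaml
apiVersion: apm.k8s.elastic.co/v1alpha1
kind: ApmServer
metadata:
  name: apm-server-quickstart
spec:
  version: 7.2.0
  nodeCount: 1
  # Reference the ECK-managed Elasticsearch cluster from the quickstart;
  # the output configuration and TLS trust are then set up automatically.
  elasticsearchRef:
    name: quickstart
  # Keys in this secret (e.g. ES_PASSWORD, created with
  # `kubectl create secret generic apm-secret-settings ...`)
  # become entries in the APM Server keystore.
  secureSettings:
    secretName: apm-secret-settings
```

Note that, as the diff states, the `ApmServer` and `Elasticsearch` resources must live in the same namespace for the `elasticsearchRef` to work.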