
Commit 49f214a

committed
Fixed smaller typos
for better SiteImprove results
1 parent ac73a3b commit 49f214a

24 files changed: +138 -138 lines changed

adoc/CloudLS_Architecture.adoc

Lines changed: 2 additions & 2 deletions
@@ -428,7 +428,7 @@ namely:
 
 - *Load balancing*: All OpenStack services that users and
 other components themselves are using are HTTP(S) interfaces based
-on the ReST principle. In large environments, they are
+on the REST principle. In large environments, they are
 subject to a lot of load. In large-scale setups, it is required
 to use load balancers in front of the API instances to distribute the
 incoming requests evenly. This holds also true for MySQL (RabbitMQ however
@@ -600,7 +600,7 @@ installed and assigned to it. Four major roles exist:
 
 - *Management Nodes*: To run additional services such as Prometheus (a
 time-series database for monitoring, alerting and trending) and the ELK
-stack (ElasticSearch, Logstash, Kibana - a log collection and index
+stack (Elasticsearch, Logstash, Kibana - a log collection and index
 engine), further hardware is required. At least three machines per
 failure domain should be made available for this purpose.
 

adoc/CloudLS_Intro.adoc

Lines changed: 1 addition & 1 deletion
@@ -147,7 +147,7 @@ Some of the key features of the OpenStack cloud computing software are as follow
 
 The SUSE OpenStack Cloud product is based on the upstream OpenStack project. It enables the
 operator to smoothly deal with the complexity of the project and control the deployment, the daily operation and
-the maintenace of the platform. The integrated deployment tool allows for an easy setup and deployment of
+the maintenance of the platform. The integrated deployment tool allows for an easy setup and deployment of
 the complex infrastructure. The professional support provided by SUSE ensures the provision of a stable and available platform,
 turning an open source project in an enterprise grade software solution.
 

adoc/CloudLS_Operations.adoc

Lines changed: 4 additions & 4 deletions
@@ -337,12 +337,12 @@ anymore.
 ==== Variant 1: ELK
 
 A variant to create centralized logging based on open source
-software is the _ELK_ stack. ELK is an acronym for _ElasticSearch_,
+software is the _ELK_ stack. ELK is an acronym for _Elasticsearch_,
 _Logstash_ and _Kibana_ and refers to three components that are deployed
-together. ElasticSearch is the indexing and search engine that received log
+together. Elasticsearch is the indexing and search engine that received log
 entries from systems. Logstash collects the log files from the target systems
-and sends them to ElasticSearch. Kibana is a concise and easy-to-use interface
-to Logstash and ElasticSearch and allows for web-based access.
+and sends them to Elasticsearch. Kibana is a concise and easy-to-use interface
+to Logstash and Elasticsearch and allows for Web-based access.
 
 Although these three components are not always combined, the acronym _ELK_
 has become an established term for this solution. Sometimes, for example the

adoc/CloudLS_SDN.adoc

Lines changed: 4 additions & 4 deletions
@@ -201,7 +201,7 @@ At a certain point in time, even the traffic passing between virtual machines
 in virtual networks must cross the physical borders between two systems.
 Virtual traffic usually uses the management network, but to ensure that
 management traffic of the platform and traffic from virtual networks do not
-mix up, all available SDN solutions use some sort of encapsulation. VxLAN and
+mix up, all available SDN solutions use some sort of encapsulation. VXLAN and
 GRE tunnels are the most common choices (both terms refer to specific
 technologies). Both technologies allow for the assignment of certain IT tags
 to individual network packets. Traffic can easily be identified as
@@ -286,7 +286,7 @@ by _neutron_, the Networking service of OpenStack.
 
 ==== Neutron Primer
 
-Neutron is a service that offers a ReSTful API and a plugin mechanism that
+Neutron is a service that offers a RESTful API and a plugin mechanism that
 allows to load plugins for a large number of SDN implementations. In
 certain setups, SDN solutions can be combined. However, combining SDN
 solutions is a complex task and should be accompanied by expert support.
@@ -307,7 +307,7 @@ requires multiple components on different target systems to work together
 properly. As an example, when a host boots up, the virtual
 switch for SDN on it must be configured at boot time. When a new VM is
 started on said host, a virtual port on the local virtual switch must be
-created and tagged with the correct settings for VxLAN or GRE. The VM
+created and tagged with the correct settings for VXLAN or GRE. The VM
 needs the network information (IP, DNS, Routing) and additional metadata
 to configure itself.
 
@@ -320,7 +320,7 @@ agents are running on the network or compute nodes.
 Building SDN for OpenStack environments follows the basic design tenets
 laid out earlier in this chapter. A typical SDN environment deployed as
 part of SUSE OpenStack Cloud uses Open vSwitch to create the virtual
-or _overlay_ network segment and VxLAN or GRE encapsulation to encapsulate
+or _overlay_ network segment and VXLAN or GRE encapsulation to encapsulate
 traffic on the _underlay_ level of the physical network, acting as
 management network.
 

adoc/CloudLS_SDS.adoc

Lines changed: 4 additions & 4 deletions
@@ -157,9 +157,9 @@ need for ephemeral VM storage is served using the local space
 on the compute nodes.
 
 In addition to these two storage types, clouds offer an object storage
-service that allows for access via the ReSTful protocol, which is based
+service that allows for access via the RESTful protocol, which is based
 on HTTP(S). Amazon S3 is the best-known type of implementation for such
-a service. Users upload their asset data using the ReST protocol and can
+a service. Users upload their asset data using the REST protocol and can
 access them later from anywhere in the world.
 
 This kind of storage is often used as replacement for central asset
@@ -449,7 +449,7 @@ kernel driver. This allows for enhanced performance.
 ==== Ceph Front-Ends: Amazon S3 and OpenStack Swift
 
 The third Ceph front-end refers to the other type of storage that clouds
-are expected to provide, which is object storage via a ReSTful protocol.
+are expected to provide, which is object storage via a RESTful protocol.
 Amazon Simple Storage Service (Amazon S3) is the most common service
 of its kind. OpenStack also has a solution for storing objects and making
 them accessible via an HTTPs protocol named _OpenStack swift_.
@@ -500,7 +500,7 @@ CephFS, the POSIX-compatible file system in Ceph, can act here as back-end
 for manila.
 
 Finally, Ceph with the Ceph Object Gateway can act as a drop-in replacement
-for swift, the ReSTful object storage for asset data. The Ceph
+for swift, the RESTful object storage for asset data. The Ceph
 Object Gateway even supports authentication using
 the OpenStack Identity service _keystone_, so that administering the
 users allowed to access Ceph's Swift back-end happens using the OpenStack

adoc/SLES4SAP-HANAonKVM-15SP2.adoc

Lines changed: 1 addition & 1 deletion
@@ -494,7 +494,7 @@ To double check that only the desired C-states are actually available, the follo
 cpupower idle-info
 ----
 
-The idle state settings can be verified by looking at the line containing`Available idle states:`.
+The idle state settings can be verified by looking at the line containing `Available idle states:`.
 
 
 [[_sec_irqbalance]]
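
The hunk above is about pulling the `Available idle states:` line out of `cpupower idle-info` output. As a minimal sketch of that check (the sample output below is illustrative, not captured from a real system), the filtering step looks like this:

```shell
#!/bin/sh
# On a real system the check would be:
#   cpupower idle-info | grep 'Available idle states:'
# Illustrative sample output stands in for the real command here.
sample_output='CPUidle driver: intel_idle
CPUidle governor: menu
analyzing CPU 0:

Number of idle states: 4
Available idle states: POLL C1 C1E C6'

# Keep only the line listing the available idle states.
echo "$sample_output" | grep 'Available idle states:'
```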

adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc

Lines changed: 1 addition & 1 deletion
@@ -328,7 +328,7 @@ status* reflects this (return code 1). This happens when worker nodes are going
 down without any {HANA} standby nodes left. Standby nodes are designed to
 perform a host auto-failover for the worker functionality.
 Find more details on concept and implementation in manual page
-SAPHanaSR-ScaelOut(7).
+SAPHanaSR-ScaleOut(7).
 
 Without any additional intervention the resource agent will wait for the {sap}
 internal HA cluster to repair the situation locally. An additional intervention

adoc/SLES4SAP-sap-infra-monitoring-loki.adoc

Lines changed: 1 addition & 1 deletion
@@ -85,7 +85,7 @@ Depending on the given directory path in our example above, the rule file has to
 
 /etc/loki/rules/fake/rules.yml
 
-NOTE: We are using `auth_enabled: false` and therefor the default tenant ID is `fake` which needs to be add
+NOTE: We are using `auth_enabled: false` and therefore the default tenant ID is `fake` which needs to be add
 to the path the rules are stored.
 
 The example rule below will trigger a mail (vial alertmanager configuration) if the password failed after accessing via ssh.
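
The hunk above references a Loki ruler file at `/etc/loki/rules/fake/rules.yml`. A minimal sketch of such a rule file, in the Loki ruler's Prometheus-style format, might look like the following (the stream selector `{job="sshd"}`, the match string, and the thresholds are illustrative assumptions, not taken from the document):

```yaml
groups:
  - name: ssh_alerts
    rules:
      - alert: SshPasswordFailed
        # LogQL: count matching log lines over the last 5 minutes.
        expr: |
          count_over_time({job="sshd"} |= "Failed password" [5m]) > 0
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: Failed SSH password authentication detected
```

Because `auth_enabled: false` maps everything to the `fake` tenant, the ruler only picks this file up under the `fake/` subdirectory, which is the point the NOTE in the diff makes.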

adoc/SLES4SAP-sap-infra-monitoring-nodeexporter.adoc

Lines changed: 1 addition & 1 deletion
@@ -231,7 +231,7 @@ amcli_disk_information_summary{name="32/13", vendor="TOSHIBA", product="MBF2300R
 amcli_disk_information_summary{name="32/12", vendor="TOSHIBA", product="MBF2300RC", port_number="4", rotational_speed="10Krpm", power_status="Active", slot="7", status="Operational", ts="1646052400" } 1
 ----
 
-And the view from the node_exporter webui:
+And the view from the node_exporter WebUI:
 
 image::amcli-disk-info.png[amCLI disk information, collected from `amCLI` and sorted by `awk`,scaledwidth=100%,title="amCLI basic disk information"]

adoc/SLES4SAP-sap-infra-monitoring-prometheus.adoc

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ drwx------ 9 prometheus prometheus 199 Nov 18 18:00 /var/lib/prometheus
 ===== Prometheus configuration `prometheus.yml`
 Edit the Prometheus configuration file `/etc/prometheus/prometheus.yml` to include the scrape job configurations you want to add.
 In our example we have defined multiple job for different exporters. This would simplify the
-Grafana dashboard creation later and the Prometheus alertmanger rule definition.
+Grafana dashboard creation later and the Prometheus Alertmanager rule definition.
 
 [source]
 .Job definition for Node Exporter
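
The hunk above describes a `prometheus.yml` with one scrape job per exporter. A minimal sketch of that layout follows (host names are placeholders; 9100 is Node Exporter's default port, and the second job is purely illustrative of the "multiple jobs" pattern):

```yaml
scrape_configs:
  # One job per exporter keeps Grafana dashboards and
  # Alertmanager rules easy to scope by job label.
  - job_name: node_exporter
    static_configs:
      - targets: ['monitored-host.example.com:9100']  # placeholder host
  - job_name: ha_cluster_exporter                     # illustrative second job
    static_configs:
      - targets: ['monitored-host.example.com:9664']
```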
