
Commit 75699d4

committed
Implemented edits from doc review
According to style checker, fixed some wording and style, shortened sentences.
1 parent 56d7ee9 commit 75699d4

File tree

1 file changed

+16
-9
lines changed


adoc/SLES4SAP-HANAonKVM-15SP5.adoc

Lines changed: 16 additions & 9 deletions
@@ -519,7 +519,7 @@ cpupower -c all info
 Modern processors also attempt to save power when they are idle, by switching to a lower power state.
 Unfortunately, this incurs latency when switching in and out of these states.
-To avoid that, and to achieve better and more consistent performance, the CPUs should not be allowed to switch those power saving modes (known as *C-states*) and should stay in normal operation mode all the time.
+To avoid that, and to achieve better and more consistent performance, the CPUs should not be allowed to switch to those power saving modes (known as *C-states*). This means they should stay in normal operation mode all the time.
 Therefore, it is recommended to only use the state *C0*.
 
 This can be enforced by adding the following parameters to the kernel boot command line: `intel_idle.max_cstate=0`.
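On SLES, such a kernel parameter is usually added through GRUB; a minimal sketch, assuming the standard `/etc/default/grub` setup (the file path, variable name, and regeneration command are the usual SLES defaults, not taken from this commit):

```
# /etc/default/grub -- append the parameter to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet intel_idle.max_cstate=0"

# Then regenerate the GRUB configuration and reboot:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```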
@@ -712,7 +712,8 @@ This means that, in total, there needs to be the following number of huge pages:
 
 Such number must be passed to the host kernel command line parameter on boot (that is `hugepages=3758`, see <<_sec_technical_explanation_of_the_above_described_configuration_settings>>).
 
-Both the total amount of memory the guest VM should use and the fact that such memory must come from 1 GiB huge pages need to be specified in the guest VM configuration file. This means that the total available memory is the total of all configured 1GiB sized hugepages on the host (in KiB).
+The guest VM configuration file must specify both the total memory the VM will use and that the memory must come from 1 GiB huge pages.
+This means that the total available memory is the total of all configured 1 GiB huge pages on the host (in KiB).
 
 You must also ensure that the `memory` and the `currentMemory` element have the same value. This is to disable memory ballooning, which, if enabled, would cause unacceptable latency:
 
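The relationship between the host's `hugepages=` setting and the KiB value in the guest's `memory` element can be checked with a line of shell arithmetic; a sketch using the `hugepages=3758` figure from the text:

```shell
# Each 1 GiB huge page is 1024 * 1024 KiB; the guest's <memory unit='KiB'>
# value should equal the number of reserved pages times that amount.
hugepages=3758
kib_per_1g_page=$((1024 * 1024))
memory_kib=$((hugepages * kib_per_1g_page))
echo "$memory_kib"    # 3940548608 KiB for 3758 x 1 GiB pages
```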
@@ -734,7 +735,7 @@ You must also ensure that the `memory` and the `currentMemory` element have the
 .Memory Unit
 [NOTE]
 ====
-The memory unit can be set to GiB to ease the memory computations.
+The memory unit can be set to GiB to simplify memory calculations.
 ====
 
 [[_sec_vcpu_and_vnuma_topology]]
@@ -755,7 +756,7 @@ Also refer to <<_sec_memory_backing>> and <<_sec_memory_sizing>> of the document
 ** each NUMA cell of the guest VM has 56 vCPUs.
 ** the distances between the cells are identical to those of the physical hardware (as per the output of the command `numactl --hardware`).
 
-The examples below show configurationsnipets for full size single-vm layouts on a 4-node system containing {cascadelake} CPU's (first example) and on a 2-node system containing {sapphirerapids} CPU's (second example).
+The examples below show configuration snippets for full-size single-VM layouts on a 4-node system containing {cascadelake} CPUs (first example) and on a 2-node system containing {sapphirerapids} CPUs (second example).
 
 ----
 <domain type='kvm'>
@@ -839,7 +840,8 @@ For example, assuming that the first hyperthread sibling pair is CPU 0 and CPU 1
 
 It is recommended to pin both the various sibling pairs of vCPUs to (the corresponding) sibling pairs of host CPUs.
 For example, vCPU 0 should be pinned to pCPU 0 and 112, and the same applies to vCPU 1.
-As far as both the vCPUs always run on the same physical core, the host scheduler is allowed to execute them on either thread, for example in case only one is free while the other is busy executing host or hypervisor activities.
+As long as both vCPUs always run on the same physical core, the host scheduler can execute them on either thread, for instance,
+if one is free while the other is busy with host or hypervisor activities.
 
 Using the above information, the CPU and memory pinning section of the guest VM XML can be created.
 Below find a practical example based on the hypothetical example above.
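In libvirt, this sibling-pair pinning is expressed with the `<cputune>` element; a minimal sketch for the first pair only, using the pCPU numbering (0 and 112) from the text above:

```
<cputune>
  <!-- vCPUs 0 and 1 form a sibling pair; either may run on either
       thread (pCPU 0 or 112) of the same physical core -->
  <vcpupin vcpu='0' cpuset='0,112'/>
  <vcpupin vcpu='1' cpuset='0,112'/>
  <!-- continue analogously for the remaining sibling pairs -->
</cputune>
```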
@@ -852,7 +854,7 @@ Make sure to take note of the following configuration components:
 ** The `mode` attribute should be set to `strict`.
 ** The appropriate number of nodes should be entered in the `nodeset` and `memnode` attributes. In the first example, there are 4 sockets, therefore the values are `nodeset=0-3` and `cellid` 0 to 3.
 
-The examples below show configurationsnipets for full size single-vm layouts on a 4-node system containing {cascadelake} CPU's (first example) and on a 2-node system containing {sapphirerapids} CPU's (second example).
+The examples below show configuration snippets for full-size single-VM layouts on a 4-node system containing {cascadelake} CPUs (first example) and on a 2-node system containing {sapphirerapids} CPUs (second example).
 
 ----
 <domain type='kvm'>
@@ -1056,7 +1058,11 @@ More details about how to directly assign PCI devices to a guest VM are describe
 
 ===== Local storage
 
-To achieve the best possible performance, it is recommended to directly attach the block device(s) and/or raid controllers, which will be used as storage for the SAP HANA data files. If there is a dedicated raid controller available in the system that only manages devices and raid volumes that will be used in one single VM, the recommendation is to connect it via PCI passthrough as described in the section above. If single devices need to be used (for example NVMe devices), you can connect those to the VM by doing something similar to the below:
+To achieve the best possible performance, it is recommended to directly attach the block device(s) and/or RAID controllers that will be used as storage for the SAP HANA data files.
+If a dedicated RAID controller is available in the system that only manages devices and RAID volumes used in one single VM, the recommendation is to connect it via PCI passthrough.
+This is described in the section above.
+If single devices need to be used (for example NVMe devices), you can connect those to the VM by doing something similar to the below:
+
 // TODO: Untested code! Check this before publishing!!!
 
 ----
@@ -1295,7 +1301,8 @@ This overhead leads to an additional transactional throughput loss. However, it
 ** The measured performance deviation for OLAP workload is below 5%.
 ** During performance analysis with standard workload, most of the test cases stayed within the defined KPI of 10% performance degradation compared to bare metal.
 However, there are low-level performance tests in the test suite exercising various HANA kernel components that exhibit a performance degradation of more than 10%.
-This also indicates that there are particular scenarios which might not be suited for SAP HANA on SUSE KVM with kvm.nx_huge_pages = AUTO; especially those workloads generating high resource utilization, which must be considered when sizing SAP HANA instance in a SUSE KVM virtual machine.
+This also indicates that certain scenarios may not be suited for SAP HANA on SUSE KVM with kvm.nx_huge_pages = AUTO.
+This is especially true for workloads that generate high resource utilization, which must be considered when sizing the SAP HANA instance in a SUSE KVM virtual machine.
 Thorough tests of configurations for all workload conditions are highly recommended.
 
 
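The `kvm.nx_huge_pages` setting discussed here is a parameter of the in-kernel `kvm` module; its effective value can be read from sysfs (a diagnostic fragment, not part of this commit):

```
# Shows "auto", "never" or "always", depending on the kvm.nx_huge_pages=
# boot parameter and the CPU's iTLB multihit vulnerability status:
cat /sys/module/kvm/parameters/nx_huge_pages
```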
@@ -1385,7 +1392,7 @@ The XML file below is only an *example* showing the key configurations to assist
 The actual XML configuration must be based on your respective hardware configuration and VM requirements.
 ====
 
-Points of interest in this example (refer to the detailed sections of the *SUSE Best Practices for SAP HANA on KVM* ({sles4sap} {slesProdVersion}) document at hand for a full explanation):
+Points of interest in this example (refer to the detailed sections of this *SUSE Best Practices for SAP HANA on KVM* [{sles4sap} {slesProdVersion}] guide for a full explanation):
 
 * Memory
 ** The hypervisor has 4 TiB RAM (or 4096 GiB), of which 3698 GiB have been allocated as 1 GB huge pages and therefore 3698 GiB is the max VM size in this case
