adoc/SLES4SAP-HANAonKVM-15SP5.adoc (+16 −9 lines changed)
@@ -519,7 +519,7 @@ cpupower -c all info
Modern processors also attempt to save power when they are idle, by switching to a lower power state.
Unfortunately, this incurs latency when switching in and out of these states.

-To avoid that, and to achieve better and more consistent performance, the CPUs should not be allowed to switch those power saving modes (known as *C-states*) and should stay in normal operation mode all the time.
+To avoid that, and to achieve better and more consistent performance, the CPUs should not be allowed to switch between these power saving modes (known as *C-states*). This means they should stay in normal operation mode all the time.
Therefore, it is recommended to only use the state *C0*.

This can be enforced by adding the following parameter to the kernel boot command line: `intel_idle.max_cstate=0`.
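
This parameter can be made persistent via the bootloader configuration. Below is a minimal sketch, assuming GRUB 2 (the default on {sles4sap}); the exact file locations and existing kernel options may differ on your system:

[source,shell]
----
# Add intel_idle.max_cstate=0 to GRUB_CMDLINE_LINUX_DEFAULT in
# /etc/default/grub, then regenerate the GRUB configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg

# After a reboot, verify that the parameter is active:
grep -o 'intel_idle.max_cstate=0' /proc/cmdline
----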
@@ -712,7 +712,8 @@ This means that, in total, there needs to be the following number of huge pages:

This number must be passed to the host kernel as a command line parameter on boot (that is, `hugepages=3758`; see <<_sec_technical_explanation_of_the_above_described_configuration_settings>>).

-Both the total amount of memory the guest VM should use and the fact that such memory must come from 1 GiB huge pages need to be specified in the guest VM configuration file. This means that the total available memory is the total of all configured 1GiB sized hugepages on the host (in KiB).
+The guest VM configuration file must specify both the total memory the VM will use and that this memory must come from 1 GiB huge pages.
+This means that the total available memory is the sum of all 1 GiB huge pages configured on the host (in KiB).
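
As an illustration, using the `hugepages=3758` host configuration from above as a hypothetical example, the corresponding value (in KiB) for the guest memory elements can be computed as follows (a sketch; your page count will differ):

[source,python]
----
GIB_IN_KIB = 1024 * 1024   # 1 GiB expressed in KiB

huge_pages = 3758          # number of 1 GiB huge pages configured on the host

# Total memory available to the guest VM, in KiB
# (the value usable for the memory element of the VM configuration)
total_kib = huge_pages * GIB_IN_KIB
print(total_kib)           # 3940548608
----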

You must also ensure that the `memory` and the `currentMemory` elements have the same value. This is to disable memory ballooning, which, if enabled, would cause unacceptable latency:
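
The following is a minimal, hypothetical sketch of the relevant elements (the KiB value shown corresponds to 3758 huge pages of 1 GiB; adapt it to your own sizing):

[source,xml]
----
<domain type='kvm'>
  <!-- memory and currentMemory must be identical to disable ballooning -->
  <memory unit='KiB'>3940548608</memory>
  <currentMemory unit='KiB'>3940548608</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1' unit='GiB'/>
    </hugepages>
  </memoryBacking>
</domain>
----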
@@ -734,7 +735,7 @@ You must also ensure that the `memory` and the `currentMemory` element have the
.Memory Unit
[NOTE]
====
-The memory unit can be set to GiB to ease the memory computations.
+The memory unit can be set to GiB to simplify memory calculations.
====

[[_sec_vcpu_and_vnuma_topology]]
@@ -755,7 +756,7 @@ Also refer to <<_sec_memory_backing>> and <<_sec_memory_sizing>> of the document
** each NUMA cell of the guest VM has 56 vCPUs.
** the distances between the cells are identical to those of the physical hardware (as per the output of the command `numactl --hardware`).
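
For reference, the distances can be read from the node distance matrix printed by `numactl --hardware` on the host. Below is a hypothetical excerpt for a 4-node system; the actual values depend entirely on your hardware:

----
node distances:
node   0   1   2   3
  0:  10  21  21  21
  1:  21  10  21  21
  2:  21  21  10  21
  3:  21  21  21  10
----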

-The examples below show configurationsnipets for full size single-vm layouts on a 4-node system containing {cascadelake} CPU's (first example) and on a 2-node system containing {sapphirerapids} CPU's (second example).
+The examples below show configuration snippets for full-size single-VM layouts on a 4-node system containing {cascadelake} CPUs (first example) and on a 2-node system containing {sapphirerapids} CPUs (second example).

----
<domain type='kvm'>
@@ -839,7 +840,8 @@ For example, assuming that the first hyperthread sibling pair is CPU 0 and CPU 1

It is recommended to pin the various sibling pairs of vCPUs to the corresponding sibling pairs of host CPUs.
For example, vCPU 0 should be pinned to pCPU 0 and 112, and the same applies to vCPU 1.
-As far as both the vCPUs always run on the same physical core, the host scheduler is allowed to execute them on either thread, for example in case only one is free while the other is busy executing host or hypervisor activities.
+As long as both vCPUs always run on the same physical core, the host scheduler can execute them on either thread, for instance when one is free while the other is busy with host or hypervisor activities.

Using the above information, the CPU and memory pinning section of the guest VM XML can be created.
Below is a practical example based on the hypothetical scenario above.
@@ -852,7 +854,7 @@ Make sure to take note of the following configuration components:
** The `mode` attribute should be set to `strict`.
** The appropriate number of nodes should be entered in the `nodeset` and `memnode` attributes. In the first example, there are 4 sockets; therefore, the values are `nodeset=0-3` and `cellid` 0 to 3.

-The examples below show configurationsnipets for full size single-vm layouts on a 4-node system containing {cascadelake} CPU's (first example) and on a 2-node system containing {sapphirerapids} CPU's (second example).
+The examples below show configuration snippets for full-size single-VM layouts on a 4-node system containing {cascadelake} CPUs (first example) and on a 2-node system containing {sapphirerapids} CPUs (second example).

----
<domain type='kvm'>
@@ -1056,7 +1058,11 @@ More details about how to directly assign PCI devices to a guest VM are describe

===== Local storage

-To achieve the best possible performance, it is recommended to directly attach the block device(s) and/or raid controllers, which will be used as storage for the SAP HANA data files. If there is a dedicated raid controller available in the system that only manages devices and raid volumes that will be used in one single VM, the recommendation is to connect it via PCI passthrough as described in the section above. If single devices need to be used (for example NVMe devices), you can connect those to the VM by doing something similar to the below:
+To achieve the best possible performance, it is recommended to directly attach the block device(s) and/or RAID controllers that will be used as storage for the SAP HANA data files.
+If a dedicated RAID controller is available in the system that only manages devices and RAID volumes used in one single VM, the recommendation is to connect it via PCI passthrough, as described in the section above.
+If single devices need to be used (for example, NVMe devices), you can connect them to the VM with a configuration similar to the one below:
+
// TODO: Dry-coded (untested)! Check this before publishing!!!
----
@@ -1295,7 +1301,8 @@ This overhead leads to an additional transactional throughput loss. However, it
** The measured performance deviation for OLAP workload is below 5%.
** During performance analysis with standard workload, most of the test cases stayed within the defined KPI of 10% performance degradation compared to bare metal.
However, there are low-level performance tests in the test suite exercising various HANA kernel components that exhibit a performance degradation of more than 10%.
-This also indicates that there are particular scenarios which might not be suited for SAP HANA on SUSE KVM with kvm.nx_huge_pages = AUTO; especially those workloads generating high resource utilization, which must be considered when sizing SAP HANA instance in a SUSE KVM virtual machine.
+This also indicates that certain scenarios may not be suited for SAP HANA on SUSE KVM with `kvm.nx_huge_pages=AUTO`.
+This is especially true for workloads that generate high resource utilization, which must be considered when sizing the SAP HANA instance in a SUSE KVM virtual machine.
Thorough tests of configurations for all workload conditions are highly recommended.

@@ -1385,7 +1392,7 @@ The XML file below is only an *example* showing the key configurations to assist
The actual XML configuration must be based on your respective hardware configuration and VM requirements.
====

-Points of interest in this example (refer to the detailed sections of the *SUSE Best Practices for SAP HANA on KVM* ({sles4sap} {slesProdVersion}) document at hand for a full explanation):
+Points of interest in this example (refer to the detailed sections of this *SUSE Best Practices for SAP HANA on KVM* [{sles4sap} {slesProdVersion}] guide for a full explanation):

* Memory
** The hypervisor has 4 TiB of RAM (4096 GiB), of which 3698 GiB have been allocated as 1 GiB huge pages; therefore, 3698 GiB is the maximum VM size in this case.
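
The headroom left for the hypervisor itself in this example can be verified with a quick calculation (a sketch based only on the numbers above):

[source,python]
----
host_ram_gib = 4096   # total hypervisor RAM in GiB
vm_max_gib = 3698     # memory allocated as 1 GiB huge pages (maximum VM size)

# Memory remaining for the hypervisor and host-side processes
reserve_gib = host_ram_gib - vm_max_gib
print(reserve_gib)    # 398
----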