Commit faa5f2c

ui: update documentation about scheduling policy
- Resolved the inconsistency between 2 and 5 minutes of CPU overload to the correct value of 2 minutes.
- Updated information about the New Scheduling Policy and Edit Scheduling Policy settings windows.
- Added a lock icon, with additional information, for policies that are mandatory since engine 4.4.0.

Signed-off-by: Jasper Berton <jasper.berton@team.blue>
1 parent ea4a991 commit faa5f2c

File tree

5 files changed: 62 additions, 87 deletions


source/documentation/administration_guide/chap-Global_Configuration.adoc

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ include::common/admin/proc-Setting_Legacy_SPICE_Cipher.adoc[leveloffset=+3]
 
 A scheduling policy is a set of rules that defines the logic by which virtual machines are distributed amongst hosts in the cluster that scheduling policy is applied to. Scheduling policies determine this logic via a combination of filters, weightings, and a load balancing policy. The filter modules apply hard enforcement and filter out hosts that do not meet the conditions specified by that filter. The weights modules apply soft enforcement, and are used to control the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run.
 
-The {virt-product-fullname} {engine-name} provides five default scheduling policies: *Evenly_Distributed*, *Cluster_Maintenance*, *None*, *Power_Saving*, and *VM_Evenly_Distributed*. You can also define new scheduling policies that provide fine-grained control over the distribution of virtual machines. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See link:{URL_virt_product_docs}{URL_format}administration_guide/index#sect-Scheduling_Policies[Scheduling Policies] in the _Administration Guide_ for more information about the properties of each scheduling policy.
+The {virt-product-fullname} {engine-name} provides five default scheduling policies: *Evenly_Distributed*, *Cluster_Maintenance*, *None*, *Power_Saving*, and *VM_Evenly_Distributed*. You can also define new scheduling policies that provide fine-grained control over the distribution of virtual machines. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 2 minutes, but these values can be changed using scheduling policies. See link:{URL_virt_product_docs}{URL_format}administration_guide/index#sect-Scheduling_Policies[Scheduling Policies] in the _Administration Guide_ for more information about the properties of each scheduling policy.
 
 ifdef::rhv-doc[]
 For detailed information about how scheduling policies work, see link:https://access.redhat.com/solutions/17604[How does cluster scheduling policy work?].
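The filter/weight split described in the paragraph above (hard filters eliminate hosts, soft weights rank the survivors) can be sketched in a few lines of Python. This is an illustrative model only, not engine code; all names and the lower-score-wins convention are hypothetical.

```python
# Illustrative sketch of the filter/weight scheduling model described above.
# NOT ovirt-engine code; names and the scoring convention are assumptions.

def pick_host(vm, hosts, filters, weights):
    """Hard filters remove ineligible hosts; soft weights rank the rest."""
    # Filter modules apply hard enforcement: a host failing any filter is dropped.
    candidates = [h for h in hosts if all(f(vm, h) for f in filters)]
    if not candidates:
        return None  # no host satisfies every filter; the VM cannot start
    # Weight modules apply soft enforcement: lowest total score is preferred.
    return min(candidates, key=lambda h: sum(w(vm, h) for w in weights))

# Toy example: filter out overloaded hosts, prefer the lowest CPU load.
hosts = [{"name": "host1", "cpu": 90}, {"name": "host2", "cpu": 40}]
not_overloaded = lambda vm, h: h["cpu"] <= 80
cpu_weight = lambda vm, h: h["cpu"]
chosen = pick_host({"name": "vm1"}, hosts, [not_overloaded], [cpu_weight])
print(chosen["name"])  # host2
```

In this toy run, `host1` is removed by the hard filter (90% > 80%) before weighting even happens, which mirrors why no scheduling policy will start a virtual machine on an overloaded host.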

source/documentation/administration_guide/topics/Cluster_Scheduling_Policy_Settings.adoc

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 [id="Cluster_Scheduling_Policy_Settings"]
 = Scheduling Policy Settings Explained
 
-Scheduling policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See link:{URL_virt_product_docs}{URL_format}administration_guide/index#sect-Scheduling_Policies[Scheduling Policies] in the _Administration Guide_ for more information.
+Scheduling policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 2 minutes, but these values can be changed using scheduling policies. See link:{URL_virt_product_docs}{URL_format}administration_guide/index#sect-Scheduling_Policies[Scheduling Policies] in the _Administration Guide_ for more information.
 
 [id="Cluster-General"]
 .Scheduling Policy Tab Properties
Lines changed: 58 additions & 83 deletions
@@ -1,96 +1,71 @@
 :_content-type: PROCEDURE
 [id="Explanation_of_Settings_in_the_New_Scheduling_Policy_and_Edit_Scheduling_Policy_Window"]
+:mandatory-mark: icon:lock[title="Mandatory",role=mandatory]
 = Explanation of Settings in the New Scheduling Policy and Edit Scheduling Policy Window
 
-The following table details the options available in the *New Scheduling Policy* and *Edit Scheduling Policy* windows.
+The following details the different options available in the *New Scheduling Policy* and *Edit Scheduling Policy* windows.
 
-.New Scheduling Policy and Edit Scheduling Policy Settings
-[options="header"]
-|===
-|Field Name |Description
-|*Name* |The name of the scheduling policy. This is the name used to refer to the scheduling policy in the {virt-product-fullname} {engine-name}.
-|*Description* |A description of the scheduling policy. This field is recommended but not mandatory.
-|*Filter Modules* a|A set of filters for controlling the hosts on which a virtual machine in a cluster can run. Enabling a filter will filter out hosts that do not meet the conditions specified by that filter, as outlined below:
-
-* `ClusterInMaintenance`: Virtual machines being started on the host that are not configured for high availability filter out the host.
-
-* `CpuPinning`: Hosts which do not satisfy the CPU pinning definition.
-
-* `Migration`: Prevents migration to the same host.
-
-* `CPUOverloaded`: Hosts with CPU usage that is above the defined *HighUtilization* threshold for the interval defined by the *CpuOverCommitDurationMinutes*.
-
-* `PinToHost`: Hosts other than the host to which the virtual machine is pinned.
-
-* `CPU-Level`: Hosts that do not meet the CPU topology of the virtual machine.
-
-* `VmAffinityGroups`: Hosts that do not meet the affinity rules defined for the virtual machine.
-
-* `NUMA`: Hosts that do not have NUMA nodes that can accommodate the virtual machine vNUMA nodes in terms of resources.
-
-* `InClusterUpgrade`: Hosts that are running an earlier version of the operating system than the host that the virtual machine currently runs on.
-
-* `MDevice`: Hosts that do not provide the required mediated device (mDev).
-
-* `Memory`: Hosts that do not have sufficient memory to run the virtual machine.
-
-* `CPU`: Hosts with fewer CPUs than the number assigned to the virtual machine.
-
-* `HostedEnginesSpares`: Reserves space for the {engine-name} virtual machine on a specified number of self-hosted engine nodes.
-
-* `Swap`: Hosts that are not swapping within the threshold.
-
-* `VM leases ready`: Hosts that do not support virtual machines configured with storage leases.
-
-* `VmToHostsAffinityGroups`: Group of hosts that do not meet the conditions specified for a virtual machine that is a member of an affinity group. For example, that virtual machines in an affinity group must run on one of the hosts in a group or on a separate host that is excluded from the group.
-
-* `HostDevice`: Hosts that do not support host devices required by the virtual machine.
-
-* `HA`: Forces the {engine-name} virtual machine in a self-hosted engine environment to run only on hosts with a positive high availability score.
-
-* `Emulated-Machine`: Hosts which do not have proper emulated machine support.
+*Name:* The name of the scheduling policy. This is the name used to refer to the scheduling policy in the {virt-product-fullname} {engine-name}.
 
-* `HugePages`: Hosts that do not meet the required number of Huge Pages needed for the virtual machine's memory.
+*Description:* A description of the scheduling policy. This field is recommended but not mandatory.
 
-* `Migration-Tsc-Frequency`: Hosts that do not have virtual machines with the same TSC frequency as the host currently running the virtual machine.
+*Filter Modules:* A set of filters for controlling the hosts on which a virtual machine in a cluster can run. Enabling a filter will filter out hosts that do not meet the conditions specified by that filter, as outlined below:
 
-* `Network`: Hosts on which networks required by the network interface controller of a virtual machine are not installed, or on which the cluster's display network is not installed.
+[NOTE]
+Filter modules marked with {mandatory-mark} are mandatory since engine 4.4.0. From that version onward they are always enabled and are not displayed in the UI.
 
-* `Label`: Hosts that do not have the required affinity labels.
-
-* `Compatibility-Version`: Hosts that do not have the correct cluster compatibility version support.
-
-|*Weights Modules* a|A set of weightings for controlling the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run.
-
-* `VmAffinityGroups`: Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on the same host or on separate hosts in accordance with the parameters of that affinity group.
-
-* `InClusterUpgrade`: Weight hosts in accordance with their operating system version. The weight penalizes hosts with earlier operating systems more than hosts with the same operating system as the host that the virtual machine is currently running on. This ensures that priority is always given to hosts with later operating systems.
-
-* `OptimalForCpuEvenDistribution`: Weights hosts in accordance with their CPU usage, giving priority to hosts with lower CPU usage.
-
-* `CPU for high performance VMs`: Prefers hosts that have more or an equal number of sockets, cores and threads than the VM.
-
-* `HA`: Weights hosts in accordance with their high availability score.
-
-* `OptimalForCpuPowerSaving`: Weights hosts in accordance with their CPU usage, giving priority to hosts with higher CPU usage.
-
-* `OptimalForMemoryPowerSaving`: Weights hosts in accordance with their memory usage, giving priority to hosts with lower available memory.
-
-* `CPU and NUMA pinning compatibility`: Weights hosts in accordance to pinning compatibility. When a virtual machine has both vNUMA and pinning defined, this weight module gives preference to hosts whose CPU pinning does not clash with the vNUMA pinning.
-
-* `VmToHostsAffinityGroups`: Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on one of the hosts in a group or on a separate host that is excluded from the group.
-
-* `OptimalForEvenGuestDistribution`: Weights hosts in accordance with the number of virtual machines running on those hosts.
-
-* `OptimalForHaReservation`: Weights hosts in accordance with their high availability score.
+.Different Possible Filter Modules
+[options="header"]
+|===
+|Filter Name |Description
+| `ClusterInMaintenance` | Virtual machines being started on the host that are not configured for high availability filter out the host.
+| `CpuPinning` {mandatory-mark} | Hosts which do not satisfy the CPU pinning definition.
+| `Migration` | Prevents migration to the same host.
+| `CPUOverloaded` | Hosts with CPU usage that is above the defined *HighUtilization* threshold for the interval defined by the *CpuOverCommitDurationMinutes*.
+| `PinToHost` | Hosts other than the host to which the virtual machine is pinned.
+| `CPU-Level` | Hosts that do not meet the CPU topology of the virtual machine.
+| `VmAffinityGroups` | Hosts that do not meet the affinity rules defined for the virtual machine.
+| `NUMA` | Hosts that do not have NUMA nodes that can accommodate the virtual machine vNUMA nodes in terms of resources.
+| `InClusterUpgrade` | Hosts that are running an earlier version of the operating system than the host that the virtual machine currently runs on.
+| `MDevice` {mandatory-mark} | Hosts that do not provide the required mediated device (mDev).
+| `Memory` | Hosts that do not have sufficient memory to run the virtual machine.
+| `CPU` | Hosts with fewer CPUs than the number assigned to the virtual machine.
+| `HostedEnginesSpares` | Reserves space for the {engine-name} virtual machine on a specified number of self-hosted engine nodes.
+| `Swap` | Hosts that are not swapping within the threshold.
+| `VM leases ready` {mandatory-mark} | Hosts that do not support virtual machines configured with storage leases.
+| `VmToHostsAffinityGroups` | Group of hosts that do not meet the conditions specified for a virtual machine that is a member of an affinity group. For example, that virtual machines in an affinity group must run on one of the hosts in a group or on a separate host that is excluded from the group.
+| `HostDevice` | Hosts that do not support host devices required by the virtual machine.
+| `HA` | Forces the {engine-name} virtual machine in a self-hosted engine environment to run only on hosts with a positive high availability score.
+| `Emulated-Machine` | Hosts which do not have proper emulated machine support.
+| `HugePages` | Hosts that do not meet the required number of Huge Pages needed for the virtual machine's memory.
+| `Migration-Tsc-Frequency` | Hosts that do not have virtual machines with the same TSC frequency as the host currently running the virtual machine.
+| `Network` | Hosts on which networks required by the network interface controller of a virtual machine are not installed, or on which the cluster's display network is not installed.
+| `Compatibility-Version` {mandatory-mark} | Hosts that do not have the correct cluster compatibility version support.
+| `Host-hooks` | Runs virtual machines only on hosts that have the hooks required by the virtual machine's configuration.
+|===
 
-* `OptimalForMemoryEvenDistribution`: Weights hosts in accordance with their memory usage, giving priority to hosts with higher available memory.
-//* `None`: Weights hosts in accordance with the even distribution module.
+*Weight Modules:* A set of weightings for controlling the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run.
 
-* `Fit VM to single host NUMA node`: Weights hosts in accordance to whether a virtual machine fits into a single NUMA node. When a virtual machine does not have vNUMA defined, this weight module gives preference to hosts that can fit the virtual machine into a single physical NUMA.
+.Different Possible Weights Modules
+[options="header"]
+|===
+|Weight Name |Description
+| `VmAffinityGroups` | Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on the same host or on separate hosts in accordance with the parameters of that affinity group.
+| `InClusterUpgrade` | Weights hosts in accordance with their operating system version. The weight penalizes hosts with earlier operating systems more than hosts with the same operating system as the host that the virtual machine is currently running on. This ensures that priority is always given to hosts with later operating systems.
+| `OptimalForCpuEvenDistribution` | Weights hosts in accordance with their CPU usage, giving priority to hosts with lower CPU usage.
+| `CPU for high performance VMs` | Prefers hosts that have more or an equal number of sockets, cores and threads than the VM.
+| `HA` | Weights hosts in accordance with their high availability score.
+| `OptimalForCpuPowerSaving` | Weights hosts in accordance with their CPU usage, giving priority to hosts with higher CPU usage.
+| `CPU and NUMA pinning compatibility` | Weights hosts in accordance with pinning compatibility. When a virtual machine has both vNUMA and pinning defined, this weight module gives preference to hosts whose CPU pinning does not clash with the vNUMA pinning.
+| `VmToHostsAffinityGroups` | Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on one of the hosts in a group or on a separate host that is excluded from the group.
+| `OptimalForEvenGuestDistribution` | Weights hosts in accordance with the number of virtual machines running on those hosts.
+| `OptimalForHaReservation` | Weights hosts in accordance with their high availability score.
+| `OptimalForMemoryEvenDistribution` | Weights hosts in accordance with their memory usage, giving priority to hosts with higher available memory.
+| `Fit VM to single host NUMA node` | Weights hosts in accordance with whether a virtual machine fits into a single NUMA node. When a virtual machine does not have vNUMA defined, this weight module gives preference to hosts that can fit the virtual machine into a single physical NUMA.
+| `PreferredHosts` | Preferred hosts have priority during virtual machine setup.
+| `OptimalForMemoryPowerSaving` | Weights hosts in accordance with their memory usage, giving priority to hosts with higher memory usage.
+|===
 
-* `PreferredHosts`: Preferred hosts have priority during virtual machine setup.
+*Load Balancer:* This drop-down menu allows you to select a load balancing module to apply. Load balancing modules determine the logic used to migrate virtual machines from hosts experiencing high usage to hosts experiencing lower usage.
 
-|*Load Balancer* |This drop-down menu allows you to select a load balancing module to apply. Load balancing modules determine the logic used to migrate virtual machines from hosts experiencing high usage to hosts experiencing lower usage.
-|*Properties* |This drop-down menu allows you to add or remove properties for load balancing modules, and is only available when you have selected a load balancing module for the scheduling policy. No properties are defined by default, and the properties that are available are specific to the load balancing module that is selected. Use the *+* and *-* buttons to add or remove additional properties to or from the load balancing module.
-|===
+*Properties:* This drop-down menu allows you to add or remove properties for load balancing modules, and is only available when you have selected a load balancing module for the scheduling policy. No properties are defined by default, and the properties that are available are specific to the load balancing module that is selected. Use the *+* and *-* buttons to add or remove additional properties to or from the load balancing module.
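The *HighUtilization* / *CpuOverCommitDurationMinutes* pair named in the `CPUOverloaded` filter description can be illustrated with a minimal sketch. The property names come from the documentation above; the per-minute sampling model and function name here are assumptions for illustration, not engine code.

```python
# Simplified illustration of the CPUOverloaded filter's threshold logic.
# HighUtilization (default 80%) and CpuOverCommitDurationMinutes (default 2)
# are the documented policy properties; the sampling model is an assumption.

def is_cpu_overloaded(load_samples, high_utilization=80,
                      cpu_overcommit_duration_minutes=2):
    """Return True if every per-minute sample in the last N minutes
    exceeds the HighUtilization threshold."""
    recent = load_samples[-cpu_overcommit_duration_minutes:]
    return (len(recent) >= cpu_overcommit_duration_minutes
            and all(load > high_utilization for load in recent))

# Per-minute CPU load samples (percent). A host whose load has stayed
# above 80% for the last 2 minutes is filtered out by CPUOverloaded.
print(is_cpu_overloaded([70, 85, 90]))   # True  (last 2 samples > 80)
print(is_cpu_overloaded([90, 85, 70]))   # False (load dropped back under 80)
```

This also shows why the commit's 5-vs-2 minute fix matters: with the default `CpuOverCommitDurationMinutes` of 2, a host is marked overloaded after 2 minutes above the threshold, not 5.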
