---
sidebar_position: 2
sidebar_label: Calculation of Resource Metrics
title: "Calculation of Resource Metrics"
keywords:
  - cluster resource metrics
  - host resource metrics
  - reserved resource
  - calculation
---

<head>
  <link rel="canonical" href="https://docs.harvesterhci.io/v1.4/monitoring/calculation-resource-metrics"/>
</head>

Harvester calculates resource metrics using data that is dynamically collected from the system. Host-level resource metrics are calculated and then aggregated to obtain the cluster-level metrics.

You can view resource-related metrics on the Harvester UI.

- **Hosts** screen: Displays host-level metrics

- **Dashboard** screen: Displays cluster-level metrics

## CPU and Memory

The following sections describe the data sources and calculation methods for CPU and memory resources.

- Resource capacity: Baseline data
- Resource usage: Data source for the **Used** field on the **Hosts** screen
- Resource reservation: Data source for the **Reserved** field on the **Hosts** screen

### Resource Capacity

In Kubernetes, a `Node` object is created for each host. `.status.allocatable.cpu` and `.status.allocatable.memory` represent the available CPU and memory resources of a host.

Example:

```
# kubectl get nodes -A -oyaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
..
      management.cattle.io/pod-limits: '{"cpu":"12715m","devices.kubevirt.io/kvm":"1","devices.kubevirt.io/tun":"1","devices.kubevirt.io/vhost-net":"1","memory":"17104951040"}'
      management.cattle.io/pod-requests: '{"cpu":"5657m","devices.kubevirt.io/kvm":"1","devices.kubevirt.io/tun":"1","devices.kubevirt.io/vhost-net":"1","ephemeral-storage":"50M","memory":"9155862208","pods":"78"}'
      node.alpha.kubernetes.io/ttl: "0"
..
    name: harv41
    resourceVersion: "2170215"
    uid: b6f5850a-2fbc-4aef-8fbe-121dfb671b67
  spec:
    podCIDR: 10.52.0.0/24
    podCIDRs:
    - 10.52.0.0/24
    providerID: rke2://harv41
  status:
    addresses:
    - address: 192.168.122.141
      type: InternalIP
    - address: harv41
      type: Hostname
    allocatable:
      cpu: "10"
      devices.kubevirt.io/kvm: 1k
      devices.kubevirt.io/tun: 1k
      devices.kubevirt.io/vhost-net: 1k
      ephemeral-storage: "149527126718"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 20464216Ki
      pods: "200"
    capacity:
      cpu: "10"
      devices.kubevirt.io/kvm: 1k
      devices.kubevirt.io/tun: 1k
      devices.kubevirt.io/vhost-net: 1k
      ephemeral-storage: 153707984Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 20464216Ki
      pods: "200"
```
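
If you only need these two values, you can read them directly with a `kubectl` JSONPath query. This is just a convenience for inspection; the node name `harv41` comes from the example above.

```
# Print the allocatable CPU and memory of node harv41
kubectl get node harv41 -o jsonpath='{.status.allocatable.cpu}{"\n"}{.status.allocatable.memory}{"\n"}'
```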

### Resource Usage

CPU and memory usage data is continuously collected and stored in the `NodeMetrics` object. Harvester reads the data from `usage.cpu` and `usage.memory`.

Example:

```
# kubectl get NodeMetrics -A -oyaml
apiVersion: v1
items:
- apiVersion: metrics.k8s.io/v1beta1
  kind: NodeMetrics
  metadata:
...
    name: harv41
  timestamp: "2024-01-23T12:04:44Z"
  usage:
    cpu: 891736742n
    memory: 9845008Ki
  window: 10.149s
```
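
As a rough illustration of how these raw values relate to capacity, you can compare them with the `allocatable` values from the earlier `Node` example (`cpu: "10"` and `memory: 20464216Ki`). CPU usage is reported in nanocores (`n`), where 1 CPU core equals 1,000,000,000n, and memory is reported in the same `Ki` unit as the allocatable value. The figures shown on the UI may be rounded differently.

```
# CPU: 891736742n used out of 10 cores (10 * 10^9 nanocores) -> about 8.9%
awk 'BEGIN { printf "CPU used: %.1f%%\n", 891736742 / (10 * 1000000000) * 100 }'

# Memory: 9845008Ki used out of 20464216Ki allocatable -> about 48.1%
awk 'BEGIN { printf "Memory used: %.1f%%\n", 9845008 / 20464216 * 100 }'
```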

### Resource Reservation

Harvester dynamically calculates the resource limits and requests of all pods running on a host, and records the totals in the `management.cattle.io/pod-limits` and `management.cattle.io/pod-requests` annotations of the corresponding `Node` object (see the example in [Resource Capacity](#resource-capacity)).

Example:

```
      management.cattle.io/pod-limits: '{"cpu":"12715m",...,"memory":"17104951040"}'
      management.cattle.io/pod-requests: '{"cpu":"5657m",...,"memory":"9155862208"}'
```

For more information, see [Requests and Limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) in the Kubernetes documentation.
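
To inspect these values on a specific host, you can read the annotations from the `Node` object directly. The JSONPath expression uses bracket notation because the annotation keys contain dots; the node name `harv41` comes from the earlier example.

```
# Print the aggregated pod requests and limits recorded for node harv41
kubectl get node harv41 -o jsonpath="{.metadata.annotations['management\.cattle\.io/pod-requests']}"
kubectl get node harv41 -o jsonpath="{.metadata.annotations['management\.cattle\.io/pod-limits']}"
```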

## Storage

Longhorn, the default Container Storage Interface (CSI) driver of Harvester, provides storage management features such as distributed block storage and tiering.

### Reserved Storage in Longhorn

Longhorn allows you to specify the percentage of disk space that is not allocated to the default disk on each new Longhorn node. The default value is "30" (30% of the disk space is reserved). For more information, see [Storage Reserved Percentage For Default Disk](https://longhorn.io/docs/1.8.0/references/settings/#storage-reserved-percentage-for-default-disk) in the Longhorn documentation.

Depending on the disk size, you can modify the default value using the [embedded Longhorn UI](../troubleshooting/harvester.md#access-embedded-rancher-and-longhorn-dashboards).
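
To illustrate how the reservation plays out, with the default value of `30`, roughly 30% of each disk's `storageMaximum` is set aside. Using the disk from the example in the next section (`storageMaximum: 80733671424`), this works out to approximately the `storageReserved` value shown there (`24220101427`).

```
# 30% of storageMaximum (80733671424 bytes) ~ storageReserved (24220101427 bytes)
awk 'BEGIN { printf "%.0f\n", 80733671424 * 0.30 }'
```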

### Data Sources and Calculation

Harvester uses the following data to calculate metrics for storage resources.

- Sum of the `storageMaximum` values of all disks (`status.diskStatus.<disk-name>`): Total storage capacity
- Sum of the `storageAvailable` values of all disks (`status.diskStatus.<disk-name>`): Data source for the **Used** field on the **Hosts** screen
- Sum of the `storageReserved` values of all disks (`spec.disks`): Data source for the **Reserved** field on the **Hosts** screen

Example:

```
# kubectl get nodes.longhorn.io -n longhorn-system -oyaml
apiVersion: v1
items:
- apiVersion: longhorn.io/v1beta2
  kind: Node
  metadata:
..
    name: harv41
    namespace: longhorn-system
..
  spec:
    allowScheduling: true
    disks:
      default-disk-ef11a18c36b01132:
        allowScheduling: true
        diskType: filesystem
        evictionRequested: false
        path: /var/lib/harvester/defaultdisk
        storageReserved: 24220101427
        tags: []
..
  status:
..
    diskStatus:
      default-disk-ef11a18c36b01132:
..
        diskType: filesystem
        diskUUID: d2788933-8817-44c6-b688-dee414cc1f73
        scheduledReplica:
          pvc-95561210-c39c-4c2e-ac9a-4a9bd72b3100-r-20affeca: 2147483648
          pvc-9e83b2dc-6a4b-4499-ba70-70dc25b2d9aa-r-4ad05c86: 32212254720
          pvc-bc25be1e-ca4e-4818-a16d-48353a0f2f96-r-c7b88c60: 3221225472
          pvc-d9d3e54d-8d67-4740-861e-6373f670f1e4-r-f4c7c338: 2147483648
          pvc-e954b5fe-bbd7-4d44-9866-6ff6684d5708-r-ba6b87b6: 5368709120
        storageAvailable: 77699481600
        storageMaximum: 80733671424
        storageScheduled: 45097156608
    region: ""
    snapshotCheckStatus: {}
    zone: ""
```
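
As a rough sketch of how these values combine, the used space of a disk can be derived as `storageMaximum - storageAvailable`, and the reserved space is `storageReserved`; Harvester sums these values across all disks and hosts for the cluster-level view. Using the numbers from the example above:

```
# Used = storageMaximum - storageAvailable; Reserved = storageReserved
awk 'BEGIN {
  max = 80733671424; avail = 77699481600; reserved = 24220101427
  printf "Used:     %.0f bytes (%.1f%% of capacity)\n", max - avail, (max - avail) / max * 100
  printf "Reserved: %.0f bytes (%.1f%% of capacity)\n", reserved, reserved / max * 100
}'
```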