
kubectl describe hpa reports incorrect HPA memory metrics #1730

Open
@Zdekeipa

Description


What happened:
When using kubectl describe hpa, the reported HPA metrics are incorrect. There is an extra m suffix on the resource memory on pods value, and the number must be divided by 1000 to get the correct byte count.

How to reproduce it (as minimally and precisely as possible):

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/prod-app/pods/xxl-7cbf685b5b-7s7pb

{"kind":"PodMetrics","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"name":"xxl-7cbf685b5b-7s7pb","namespace":"prod-app","creationTimestamp":"2025-03-27T06:20:17Z","labels":{"admission.datadoghq.com/enabled":"true","app.kubernetes.io/instance":"xxl","app.kubernetes.io/name":"twwin","pod-template-hash":"7cbf685b5b","tags.datadoghq.com/service":"xxl"}},"timestamp":"2025-03-27T06:20:07Z","window":"17.104s","containers":[{"name":"xxl","usage":{"cpu":"1010123648n","memory":"1852124Ki"}}]}

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/prod-app/pods/xxl-7cbf685b5b-962gl

{"kind":"PodMetrics","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"name":"xxl-7cbf685b5b-962gl","namespace":"prod-app","creationTimestamp":"2025-03-27T06:20:18Z","labels":{"admission.datadoghq.com/enabled":"true","app.kubernetes.io/instance":"xxl","app.kubernetes.io/name":"twwin","pod-template-hash":"7cbf685b5b","tags.datadoghq.com/service":"xxl"}},"timestamp":"2025-03-27T06:20:04Z","window":"16.021s","containers":[{"name":"xxl","usage":{"cpu":"887341813n","memory":"1868996Ki"}}]}

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/prod-app/pods/xxl-7cbf685b5b-jsw4l

{"kind":"PodMetrics","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"name":"xxl-7cbf685b5b-jsw4l","namespace":"prod-app","creationTimestamp":"2025-03-27T06:20:18Z","labels":{"admission.datadoghq.com/enabled":"true","app.kubernetes.io/instance":"xxl","app.kubernetes.io/name":"twwin","pod-template-hash":"7cbf685b5b","tags.datadoghq.com/service":"xxl"}},"timestamp":"2025-03-27T06:20:08Z","window":"14.046s","containers":[{"name":"xxl","usage":{"cpu":"931186875n","memory":"1839636Ki"}}]}

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/prod-app/pods/xxl-7cbf685b5b-qsb7t

{"kind":"PodMetrics","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"name":"xxl-7cbf685b5b-qsb7t","namespace":"prod-app","creationTimestamp":"2025-03-27T06:20:19Z","labels":{"admission.datadoghq.com/enabled":"true","app.kubernetes.io/instance":"xxl","app.kubernetes.io/name":"twwin","pod-template-hash":"7cbf685b5b","tags.datadoghq.com/service":"xxl"}},"timestamp":"2025-03-27T06:20:13Z","window":"18.11s","containers":[{"name":"xxl","usage":{"cpu":"642321645n","memory":"1842416Ki"}}]}

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/prod-app/pods/xxl-7cbf685b5b-qww4r

{"kind":"PodMetrics","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"name":"xxl-7cbf685b5b-qww4r","namespace":"prod-app","creationTimestamp":"2025-03-27T06:20:19Z","labels":{"admission.datadoghq.com/enabled":"true","app.kubernetes.io/instance":"xxl","app.kubernetes.io/name":"twwin","pod-template-hash":"7cbf685b5b","tags.datadoghq.com/service":"xxl"}},"timestamp":"2025-03-27T06:20:07Z","window":"18.829s","containers":[{"name":"xxl","usage":{"cpu":"572363174n","memory":"1857472Ki"}}]}

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/prod-app/pods/xxl-7cbf685b5b-s82rz

{"kind":"PodMetrics","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"name":"xxl-7cbf685b5b-s82rz","namespace":"prod-app","creationTimestamp":"2025-03-27T06:20:20Z","labels":{"admission.datadoghq.com/enabled":"true","app.kubernetes.io/instance":"xxl","app.kubernetes.io/name":"twwin","pod-template-hash":"7cbf685b5b","tags.datadoghq.com/service":"xxl"}},"timestamp":"2025-03-27T06:20:02Z","window":"16.855s","containers":[{"name":"xxl","usage":{"cpu":"918341005n","memory":"1846032Ki"}}]}


kubectl describe hpa  xxl -n prod-app
Name:                       xxl
Namespace:                  prod-app
Labels:                     <none>
Annotations:                <none>
CreationTimestamp:          Wed, 26 Mar 2025 15:08:34 +0800
Reference:                  Deployment/xxl
Metrics:                    ( current / target )
  resource cpu on pods:     827m / 1600m
  resource memory on pods:  1895539370666m / 3277M ### the issue: 1895539370666m
Min replicas:               6
Max replicas:               8
Behavior:
  Scale Up:
    Stabilization Window: 60 seconds
    Select Policy: Max
    Policies:
      - Type: Percent  Value: 50  Period: 60 seconds
      - Type: Pods     Value: 1   Period: 60 seconds
  Scale Down:
    Stabilization Window: 300 seconds
    Select Policy: Min
    Policies:
      - Type: Percent  Value: 25  Period: 60 seconds
      - Type: Pods     Value: 1   Period: 60 seconds
Deployment pods:       6 current / 6 desired
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  recommended size matches current size
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from cpu resource
  ScalingLimited  True    TooFewReplicas    the desired replica count is less than the minimum replica count
Events:           <none>
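For what it's worth, the number itself appears to be the average of the six PodMetrics readings above, rendered in Kubernetes Quantity milli-units: averaging the per-pod memory usage yields a fractional byte count, and the quantity then seems to be scaled by 1000 and printed with an m suffix instead of being rounded to whole bytes. A minimal sketch reproducing the figure (the per-pod Ki values are copied from the raw metrics API output above):

```python
# Reproduce the HPA's "1895539370666m" from the raw PodMetrics responses.
# Assumption being illustrated: the average is kept in milli-bytes
# ("m" = 1/1000) because the per-pod mean is not a whole number of bytes.

pod_memory_ki = [1852124, 1868996, 1839636, 1842416, 1857472, 1846032]

total_bytes = sum(pod_memory_ki) * 1024                     # Ki -> bytes
avg_millibytes = total_bytes * 1000 // len(pod_memory_ki)   # bytes -> milli-bytes

print(f"{avg_millibytes}m")  # prints "1895539370666m"
```

Dividing 1895539370666 by 1000 gives roughly 1.9 GB per pod, which matches the ~1.85 GiB per-pod usage in the raw API responses, so the underlying value looks plausible; the confusing part is only the milli-unit rendering next to the 3277M target.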

Environment

macOS 14.6 (23G80)

kubectl version
Client Version: v1.32.0
Kustomize Version: v5.5.0
Server Version: v1.31.6-eks-bc803b4

Kubernetes version
Kubernetes gitVersion: v1.31.4-eks-2d5f260
Kubernetes buildDate: 2024-12-13 04:56:32
Kubernetes platform: linux/amd64

This looks like the same situation as #1250.

Metadata


    Labels

    kind/bug: Categorizes issue or PR as related to a bug.
    needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
