[prometheus-kube-stack] Wrong PVC Storage Class #5128

Open
@barthofu

Description

Describe the bug (a clear and concise description of what the bug is).

In my values.yaml, I've set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.storageClassName to longhorn-static (an existing storage class in my cluster), but the PVC generated by the prometheus-kube-prometheus-stack StatefulSet has longhorn as its storage class.

Here are all the storage classes present on my cluster:

NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  38d
nfs-csi                nfs.csi.k8s.io          Delete          Immediate              false                  38d
longhorn-static        driver.longhorn.io      Delete          Immediate              true                   38d
longhorn (default)     driver.longhorn.io      Delete          Immediate              true                   38d
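
To narrow down where the wrong class is introduced, it helps to compare the Prometheus custom resource rendered by the chart against the PVC the StatefulSet actually created. A minimal sketch, assuming the release lives in a namespace named monitoring (adjust to your install):

# Storage class requested in the Prometheus CR (rendered by the chart)
kubectl get prometheus -n monitoring \
  -o jsonpath='{.items[*].spec.storage.volumeClaimTemplate.spec.storageClassName}'

# Storage class on the PVC the StatefulSet created (STORAGECLASS column)
kubectl get pvc -n monitoring

If the CR already says longhorn, the value is being lost at templating time; if the CR says longhorn-static but the PVC says longhorn, the PVC likely predates the change (a StatefulSet's volumeClaimTemplates are immutable, so an existing PVC keeps whatever class it was created with).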

What's your helm version?

v3.16.3

What's your kubectl version?

Client Version: v1.31.2
Kustomize Version: v5.4.2
Server Version: v1.29.3+k3s1

Which chart?

kube-prometheus-stack

What's the chart version?

67.8.0

What happened?

No response

What did you expect to happen?

I expect the generated PVC to have the storageClassName I provided in my values.yaml.

How to reproduce it?

No response

Enter the changed values of values.yaml?

kube-prometheus-stack:
  crds:
    enabled: true

  cleanPrometheusOperatorObjectNames: true

  ###
  ### Component values
  ###
  alertmanager:
    enabled: false

  kubeApiServer:
    enabled: true
    serviceMonitor:
      metricRelabelings:
        # Drop high-cardinality metrics
        - action: drop
          sourceLabels: ["__name__"]
          regex: (apiserver|etcd|rest_client)_request(|_sli|_slo)_duration_seconds_bucket
        - action: drop
          sourceLabels: ["__name__"]
          regex: (apiserver_response_sizes_bucket|apiserver_watch_events_sizes_bucket)

  kubeControllerManager:
    enabled: false

  kubeEtcd:
    enabled: false

  kubelet:
    enabled: true
    serviceMonitor:
      metricRelabelings:
        # Drop high-cardinality labels and metrics
        - action: labeldrop
          regex: (uid)
        - action: labeldrop
          regex: (id|name)
        - action: drop
          sourceLabels: ["__name__"]
          regex: (rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count)

  kubeProxy:
    enabled: false

  kubeScheduler:
    enabled: false

  kubeStateMetrics:
    enabled: false

  nodeExporter:
    enabled: false

  grafana:
    enabled: false
    forceDeployDashboards: true
    sidecar:
      dashboards:
        annotations:
          grafana_folder: Kubernetes

  ###
  ### Prometheus operator values
  ###
  prometheusOperator:
    resources:
      requests:
        cpu: 35m
        memory: 273M
      limits:
        memory: 326M

    prometheusConfigReloader:
      # resource config for prometheusConfigReloader
      resources:
        requests:
          cpu: 5m
          memory: 32M
        limits:
          memory: 32M

  ###
  ### Prometheus instance values
  ###
  prometheus:
    ingress:
      enabled: true
      ingressClassName: traefik
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-issuer
      hosts:
        - &host prometheus.home.bartho.dev
      pathType: Prefix
      tls:
        - secretName: kube-prometheus-stack-tls
          hosts:
            - *host

    prometheusSpec:
      ruleSelectorNilUsesHelmValues: false
      serviceMonitorSelectorNilUsesHelmValues: false
      podMonitorSelectorNilUsesHelmValues: false
      probeSelectorNilUsesHelmValues: false
      scrapeConfigSelectorNilUsesHelmValues: false
      enableAdminAPI: true
      walCompression: true
      retentionSize: 8GB
      # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/platform/storage.md#manual-storage-provisioning
      storageSpec:
        volumeClaimTemplate:
          spec:
            storageClassName: longhorn-static
            selector:
              matchLabels:
                app.kubernetes.io/name: kube-prometheus-stack-prometheus-storage
            resources:
              requests:
                storage: 10Gi

Enter the command that you executed and that is failing/misfunctioning.

N/A (deployed using ArgoCD)
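
To reproduce the rendering outside ArgoCD, the chart can be templated locally. A minimal sketch, assuming the values above are saved to values.yaml with the top-level kube-prometheus-stack: key stripped (that key is only needed when the chart is pulled in as a dependency of an umbrella chart, as it is here under ArgoCD), and kps as an arbitrary release name:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm template kps prometheus-community/kube-prometheus-stack \
  --version 67.8.0 -f values.yaml \
  | grep -n 'storageClassName'

If longhorn-static shows up in the rendered Prometheus resource, the chart is passing the value through correctly and the problem is downstream of templating.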

Anything else we need to know?

I'm using k3s, ArgoCD, and Longhorn in my cluster.
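
One detail from the storage class listing above may be relevant: both local-path and longhorn are marked (default). Whenever a PVC is created without an explicit storageClassName, Kubernetes falls back to whichever class it treats as the default, so a value that gets dropped somewhere between values.yaml and the PVC could plausibly surface as longhorn. The default-class annotation can be inspected with (class names taken from the listing above):

kubectl get storageclass local-path longhorn \
  -o custom-columns='NAME:.metadata.name,DEFAULT:.metadata.annotations.storageclass\.kubernetes\.io/is-default-class'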
