
Grafana data source query builder fails to list fields #478

@7onn


Describe the bug

I see the feature is still in beta, but maybe I'm just doing something wrong.

After clearing most log fields via the OpenTelemetry Collector, the query builder fails to list the available fields to filter on.

[Screenshot: the query builder shows no fields to filter on]

These are the fields I have.

1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"stream"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"level"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"_stream_id"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"5010","name":"@module"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"_stream"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"instance"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"log.file.path"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"container_name"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"namespace"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"5010","name":"@message"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"logtag"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"severity"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"log"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"5010","name":"@timestamp"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"5010","name":"@level"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"pod_name"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"msg"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"_time"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"message"}
1764701658393	2025-12-02T18:54:18.393Z	{"hits":"119224","name":"_msg"}
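For reference, a list like the one above can be pulled straight from the VictoriaLogs HTTP API; a sketch, with endpoint and parameters per the VictoriaLogs querying docs and the host/port taken from the Helm values below:

```shell
# List field names seen over the last hour via the VictoriaLogs HTTP API.
# Host and port are assumed from the vmlogs-select service defined below.
curl -s http://vmlogs-select.victoriametrics:9471/select/logsql/field_names \
  -d 'query=*' -d 'start=1h'
```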

To Reproduce

Install VictoriaLogs similarly to this:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  # Auth
  - name: victoria-metrics-auth
    namespace: victoriametrics
    releaseName: vmmauth
    repo: https://victoriametrics.github.io/helm-charts/
    version: 0.19.1
    valuesFile: ./vmauth-values.yaml

  # Logs
  - name: victoria-logs-cluster
    namespace: victoriametrics
    releaseName: vmlogs
    repo: https://victoriametrics.github.io/helm-charts/
    version: 0.0.18
    valuesFile: vmlogs-values.yaml
# vmauth-values.yaml
replicaCount: 2
image:
  tag: "v1.123.0"
fullnameOverride: "vmauth"
deployment:
  spec:
    strategy:
      type: RollingUpdate
podLabels:
  app.kubernetes.io/owner: platform
podDisruptionBudget:
  enabled: true
  minAvailable: 1
podSecurityContext:
  enabled: true
  fsGroup: 65534
  seccompProfile:
    type: RuntimeDefault
securityContext:
  enabled: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 65534

ingress:
  enabled: true
  annotations:
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "true"
  ingressClassName: nginx
  hosts:
  - name: victoriametrics.example.com
    path:
      - /
resources: {}
serviceMonitor:
  enabled: true
env:
  - name: VM_TOKEN
    value: somethingsaferthanthis
config:
  users:
    - username: logs
      password: "%{VM_TOKEN}"
      url_prefix: "http://vmlogs-select:9471/"
# vmlogs-values.yaml
common:
  image:
    tag: "v1.38.0"

vlselect:
  fullnameOverride: "vmlogs-select"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  horizontalPodAutoscaler:
    enabled: true
    maxReplicas: 4
    minReplicas: 2
  podDisruptionBudget:
    enabled: true
    minAvailable: 1
  resources: {}
  podSecurityContext:
    enabled: true
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  securityContext:
    enabled: true
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true

vlinsert:
  fullnameOverride: "vmlogs-insert"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  horizontalPodAutoscaler:
    enabled: true
    maxReplicas: 4
    minReplicas: 2
  podDisruptionBudget:
    enabled: true
    minAvailable: 1
  resources: {}
  podSecurityContext:
    enabled: true
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  securityContext:
    enabled: true
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true

vlstorage:
  fullnameOverride: "vmlogs-storage"
  retentionPeriod: 1y
  replicaCount: 3
  podDisruptionBudget:
    enabled: true
    minAvailable: 1
  resources: {}
  persistentVolume:
    storageClassName: gp3
    size: 50Gi
  podSecurityContext:
    enabled: true
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  securityContext:
    enabled: true
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true

vmauth:
  enabled: false

Install it:

kubectl kustomize . --load-restrictor=LoadRestrictionsNone --enable-helm | kubectl apply -f -
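Before wiring up Grafana, it's worth confirming the components came up; a quick sanity check, with the service names following the fullnameOverride values above:

```shell
# Check that all VictoriaLogs cluster components are running; the
# service names come from the fullnameOverride values set above.
kubectl -n victoriametrics get pods
kubectl -n victoriametrics get svc vmlogs-select vmlogs-insert vmlogs-storage
```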

Then send logs via an OpenTelemetry Collector configured as:

---
  apiVersion: opentelemetry.io/v1beta1
  kind: OpenTelemetryCollector
  metadata:
    name: otel
    namespace: open-telemetry
  spec:
    mode: daemonset
    image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.140.1
    resources: {}
    observability:
      metrics:
        enableMetrics: true
    env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: K8S_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
  
    targetAllocator:
      enabled: true
      image: ghcr.io/open-telemetry/opentelemetry-operator/target-allocator:0.138.0
      resources: {}
      allocationStrategy: per-node
      prometheusCR:
        enabled: true
        scrapeInterval: 30s
        serviceMonitorSelector: {}
  
    volumes:
      - name: varlogpods
        hostPath:
          path: /var/log/pods
    volumeMounts:
      - name: varlogpods
        mountPath: /var/log/pods
    config:
      extensions:
        health_check:
          endpoint: ${env:POD_IP}:13133
  
      # https://opentelemetry.io/docs/collector/components/receiver/
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:43177
            http:
              endpoint: 0.0.0.0:43188
        filelog:
          include:
          - /var/log/pods/*/*/*.log
          include_file_name: false
          include_file_path: true
          retry_on_failure:
            enabled: true
          start_at: beginning
          operators:
          - id: parser-containerd
            type: regex_parser 
            regex: ^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$
            timestamp:
              layout: '%Y-%m-%dT%H:%M:%S.%LZ'
              parse_from: attributes.time
  
          - id: parser-pod-info
            parse_from: attributes["log.file.path"]
            regex: ^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]+)\/(?P<container_name>[^\._]+)\/(?P<restart_count>\d+)\.log$
            type: regex_parser
  
          - type: recombine
            is_last_entry: attributes.logtag == 'F'
            combine_field: attributes.log
            combine_with: ""
            max_batch_size: 1000
            max_log_size: 1048576
            output: handle_empty_log
            source_identifier: attributes["log.file.path"]
          - field: attributes.log
            id: handle_empty_log
            if: attributes.log == nil
            type: add
            value: ""
  
          - type: json_parser
            parse_from: attributes.log
            if: attributes.log matches "^\\{"
  
          - type: add
            field: attributes.instance
            value: ${env:K8S_NODE_NAME}
  
          - id: export
            type: noop
    
      # https://opentelemetry.io/docs/collector/components/processor/
      processors:
        memory_limiter:
          check_interval: 1s
          limit_percentage: 75
          spike_limit_percentage: 15
        batch:
          send_batch_max_size: 2048
          send_batch_size: 1024
          timeout: 1s
        transform/logs:
          error_mode: ignore
          log_statements:
            - statements:
              - set(log.attributes["namespace"], resource.attributes["namespace"])
              - keep_matching_keys(log.attributes, "^(_.*|@.*|filename|log|service|job|agent|k8s\\.|container_name|instance|level|msg|message|namespace|pod_name|severity|severity_text|stream)")
              - delete_matching_keys(log.attributes, "^(jobName|logger|loggerName|loggerClassName)$")
            - conditions:
                - IsMap(log.body)
              statements:
                - keep_matching_keys(log.body, "^(level|msg|message|namespace|severity|severity_text)$")

      # https://opentelemetry.io/docs/collector/components/exporter/
      exporters:
        debug: {}
        otlphttp/victoriametrics:
          compression: gzip
          encoding: proto
          logs_endpoint: http://vmlogs-insert.victoriametrics:9481/insert/opentelemetry/v1/logs
          tls:
            insecure: true
  
      # https://opentelemetry.io/docs/collector/configuration/#service
      service:
        telemetry:
          logs:
            encoding: json
            level: info
  
        extensions:
          - health_check
  
        # https://opentelemetry.io/docs/collector/configuration/#pipelines
        pipelines:
          logs:
            receivers: [filelog, otlp]
            processors:
              - memory_limiter
              - transform/logs
              - batch
            exporters: [otlphttp/victoriametrics]
          metrics:
            receivers: [otlp]
            processors: [memory_limiter, batch]
            exporters: [debug]
          traces:
            receivers: [otlp]
            processors: [memory_limiter, batch]
            exporters: [debug]
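As a quick local sanity check of the keep_matching_keys pattern above (plain grep standing in for the collector's Go regexp matcher, assuming start-anchored, non-full-match semantics), note that the regex is only anchored at the start, so e.g. logtag slips through via the bare log alternative:

```shell
# Which of the observed attribute keys survive the keep_matching_keys pattern?
# Plain grep stands in for the collector's matcher here.
pattern='^(_.*|@.*|filename|log|service|job|agent|k8s\.|container_name|instance|level|msg|message|namespace|pod_name|severity|severity_text|stream)'
printf '%s\n' stream level @module instance logtag pod_name uid restart_count \
  | grep -E "$pattern"
# logtag survives because the pattern is not anchored at the end,
# so the "log" alternative matches any key starting with "log";
# uid and restart_count are dropped.
```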

Versions of VictoriaLogs datasource and VictoriaLogs backend

Grafana v12.3.0 (20051fb1fc)
Datasource https://github.com/VictoriaMetrics/victorialogs-datasource/releases/tag/v0.22.3
VictoriaLogs v1.38.0

Link to dashboard in Victoria Metrics

No response

Please provide dashboard JSON if it is possible

No dashboard yet, I'm just using https://grafana.example.com/explore

Additional information

The error seemed intermittent before: the builder worked while I was still ingesting lots of fields, but I figured that would hinder performance and storage, so I started sanitizing things, as the transform/logs processor in the OpenTelemetry Collector config above shows.

For me, the plain query always works! But my dev colleagues would certainly prefer the dropdown, so having it functional would be amazing, even though it is still in beta. I'd love to bring my settings to something that works.
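For reference, a plain query of the kind that works, issued directly against the select API; the filter values here are hypothetical, with host/port as in the Helm values above:

```shell
# A plain LogsQL query that works even when the builder's dropdown is empty.
# Filter values are hypothetical examples; adjacent filters are AND-ed in LogsQL.
curl -s http://vmlogs-select.victoriametrics:9471/select/logsql/query \
  -d 'query=namespace:open-telemetry level:error' -d 'limit=10'
```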


Labels

bug (Something isn't working), need more info (Further information is needed from the author), vl-datasource
