
[BUG] Kafka backup fails after creating a topic and consuming a message #9999

@JashBook

Description


Describe the bug
After creating a topic and consuming one message, a second `kbcli cluster backup --method topics` on the same Kafka cluster fails: the `backupdata` container of the backup pod exits with code 1 immediately after printing `getting topics...`. The first backup, taken before any topics existed, completed successfully.

Versions
Kubernetes: v1.30.4-vke.10
KubeBlocks: 1.1.0-beta.1
kbcli: 1.0.2-beta.0

helm get notes -n kb-system kb-addon-kafka   
NOTES:
CHART NAME: kafka
CHART VERSION: 1.1.0-alpha.0
APP VERSION: 3.3.2

KubeBlocks Kafka server cluster definition, start create your Kafka Server Cluster with following command:

    kbcli cluster create kafka


Release Information:
  Commit ID: "a47f050055f51388303e58997ce4acef392a6ac3"
  Commit Time: "2025-12-26 11:30:22 +0800"
  Release Branch: "v1.1.0-beta.1"
  Release Time:  "2026-01-08 10:26:04 +0800"
  Enterprise: "false"

To Reproduce
Steps to reproduce the behavior:

  1. Create the cluster:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: kafka-qtudyw
  namespace: default
  annotations:
    "kubeblocks.io/extra-env": '{"KB_KAFKA_ENABLE_SASL":"false","KB_KAFKA_BROKER_HEAP":"-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64","KB_KAFKA_CONTROLLER_HEAP":"-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64","KB_KAFKA_PUBLIC_ACCESS":"false"}'
spec:
  clusterDef: kafka
  topology: combined_monitor
  terminationPolicy: WipeOut
  componentSpecs:
    - name: kafka-combine
      tls: false
      disableExporter: true
      replicas: 1
      serviceVersion: 3.3.2
      services:
        - name: advertised-listener
          serviceType: ClusterIP
          podService: true
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: 500m
          memory: 1Gi
      env:
        - name: KB_BROKER_DIRECT_POD_ACCESS
          value: "false"
        - name: KB_KAFKA_ENABLE_SASL_SCRAM
          value: "false"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: 
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
        - name: metadata
          spec:
            storageClassName: 
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - name: kafka-exporter
      replicas: 1
      disableExporter: true
      env:
        - name: KB_KAFKA_ENABLE_SASL_SCRAM
          value: "false"
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: 500m
          memory: 1Gi
  2. Back up topics (succeeds):
kbcli cluster backup kafka-qtudyw --method topics 
Backup backup-default-kafka-qtudyw-20260113153258 created successfully, you can view the progress:
	kbcli cluster list-backups --names=backup-default-kafka-qtudyw-20260113153258 -n default

kubectl get backup
NAME                                         POLICY                                     METHOD   REPO                    STATUS      TOTAL-SIZE   DURATION   DELETION-POLICY   CREATION-TIME          COMPLETION-TIME        EXPIRATION-TIME
backup-default-kafka-qtudyw-20260113153258   kafka-qtudyw-kafka-combine-backup-policy   topics   backuprepo-kbcli-test   Completed   0            5s         Delete            2026-01-13T07:32:58Z   2026-01-13T07:33:03Z   
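The first backup completing with TOTAL-SIZE 0 is consistent with the "nothing to backup" branch of the backup script (quoted in full in the pod description further down): with no topics yet, it writes only a size field to the status-info file. The helper below is copied from that script, with hypothetical scaffolding (`mktemp` for `DP_BACKUP_INFO_FILE`) added so it runs standalone:

```shell
# DP_save_backup_status_info, copied from the backupdata script, run in
# isolation to show what the "nothing to backup" path records.
DP_BACKUP_INFO_FILE=$(mktemp)   # stand-in for /dp-manager/backup.info

function DP_save_backup_status_info() {
  local totalSize=$1
  local startTime=$2
  local stopTime=$3
  local timeZone=$4
  local extras=$5
  local timeZoneStr=""
  if [ -n "${timeZone}" ]; then
    timeZoneStr=$(printf ',"timeZone":"%s"' "${timeZone}")
  fi
  if [ -z "${stopTime}" ]; then
    printf '{"totalSize":"%s"}' "${totalSize}" > "${DP_BACKUP_INFO_FILE}"
  elif [ -z "${startTime}" ]; then
    printf '{"totalSize":"%s","extras":[%s],"timeRange":{"end":"%s"%s}}' "${totalSize}" "${extras}" "${stopTime}" "${timeZoneStr}" > "${DP_BACKUP_INFO_FILE}"
  else
    printf '{"totalSize":"%s","extras":[%s],"timeRange":{"start":"%s","end":"%s"%s}}' "${totalSize}" "${extras}" "${startTime}" "${stopTime}" "${timeZoneStr}" > "${DP_BACKUP_INFO_FILE}"
  fi
}

# The empty-topic-list path calls it with only a size of 0:
DP_save_backup_status_info 0
cat "${DP_BACKUP_INFO_FILE}"
```

The manager sidecar patches this JSON into the Backup status, which is why the completed backup lists TOTAL-SIZE 0.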
  3. Create producer and consumer pods:
kubectl run --namespace default kafka-qtudyw-kafka-producer --restart='Never' --image apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kafka:3.3.2-debian-11-r54 --command -- sleep infinity
pod/kafka-qtudyw-kafka-producer created

kubectl run --namespace default kafka-qtudyw-kafka-consumer --restart='Never' --image apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kafka:3.3.2-debian-11-r54 --command -- sleep infinity
pod/kafka-qtudyw-kafka-consumer created
  4. From the producer, create a topic and produce a message:
kubectl exec -it kafka-qtudyw-kafka-producer -- bash
I have no name!@kafka-qtudyw-kafka-producer:/$ kafka-topics.sh --create --topic topic-xallj --bootstrap-server kafka-qtudyw-kafka-combine-advertised-listener-0.default.svc.cluster.local:9092
Created topic topic-xallj.
I have no name!@kafka-qtudyw-kafka-producer:/$ echo 'kbcli test msg:xallj' | kafka-console-producer.sh --topic topic-xallj --bootstrap-server kafka-qtudyw-kafka-combine-advertised-listener-0.default.svc.cluster.local:9092
I have no name!@kafka-qtudyw-kafka-producer:/$ 
  5. From the consumer, read the message:
kubectl exec -it kafka-qtudyw-kafka-consumer -- bash
I have no name!@kafka-qtudyw-kafka-consumer:/$ kafka-console-consumer.sh --from-beginning --timeout-ms 10000 --max-messages 1 --topic topic-xallj --bootstrap-server kafka-qtudyw-kafka-combine-advertised-listener-0.default.svc.cluster.local:9092
kbcli test msg:xallj
Processed a total of 1 messages
  6. Back up topics again (fails):
kbcli cluster backup kafka-qtudyw --method topics
Backup backup-default-kafka-qtudyw-20260113153628 created successfully, you can view the progress:
	kbcli cluster list-backups --names=backup-default-kafka-qtudyw-20260113153628 -n default
  7. Inspect the error:
kubectl get backup 
NAME                                         POLICY                                     METHOD   REPO                    STATUS      TOTAL-SIZE   DURATION   DELETION-POLICY   CREATION-TIME          COMPLETION-TIME        EXPIRATION-TIME
backup-default-kafka-qtudyw-20260113153258   kafka-qtudyw-kafka-combine-backup-policy   topics   backuprepo-kbcli-test   Completed   0            5s         Delete            2026-01-13T07:32:58Z   2026-01-13T07:33:03Z   
backup-default-kafka-qtudyw-20260113153628   kafka-qtudyw-kafka-combine-backup-policy   topics   backuprepo-kbcli-test   Failed                              Delete            2026-01-13T07:36:28Z                          
➜  ~ kubectl get pod 
NAME                                                              READY   STATUS    RESTARTS   AGE
dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7abhgx6   0/2     Error     0          6m16s
dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7aj7zm5   0/2     Error     0          6m49s
dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7av6npt   0/2     Error     0          6m37s
kafka-qtudyw-kafka-combine-0                                      2/2     Running   0          11m
kafka-qtudyw-kafka-consumer                                       1/1     Running   0          9m26s
kafka-qtudyw-kafka-exporter-0                                     1/1     Running   0          11m
kafka-qtudyw-kafka-producer                                       1/1     Running   0          9m38s

Describe the failed pod:

kubectl describe pod dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7aj7zm5
Name:             dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7aj7zm5
Namespace:        default
Priority:         0
Service Account:  kubeblocks-dataprotection-worker
Node:             192.168.0.219/192.168.0.219
Start Time:       Tue, 13 Jan 2026 15:36:28 +0800
Labels:           app.kubernetes.io/instance=kafka-qtudyw
                  app.kubernetes.io/managed-by=kubeblocks-dataprotection
                  batch.kubernetes.io/controller-uid=192d9b87-c3ab-4da6-98b6-99b88e82b70e
                  batch.kubernetes.io/job-name=dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7a25738
                  controller-uid=192d9b87-c3ab-4da6-98b6-99b88e82b70e
                  dataprotection.kubeblocks.io/backup-name=backup-default-kafka-qtudyw-20260113153628
                  dataprotection.kubeblocks.io/backup-policy=kafka-qtudyw-kafka-combine-backup-policy
                  dataprotection.kubeblocks.io/backup-repo-name=backuprepo-kbcli-test
                  dataprotection.kubeblocks.io/backup-type=Full
                  dataprotection.kubeblocks.io/cluster-uid=9f9002ab-0128-4b34-9c9b-42030a976a04
                  job-name=dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7a25738
                  operations.kubeblocks.io/ops-name=backup-default-kafka-qtudyw-20260113153628
                  operations.kubeblocks.io/ops-type=Backup
Annotations:      vke.volcengine.com/cello-pod-evict-policy: allow
Status:           Failed
IP:               192.168.0.220
IPs:
  IP:           192.168.0.220
Controlled By:  Job/dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7a25738
Init Containers:
  dp-copy-datasafed:
    Container ID:  containerd://3a484412984b3ade248bb2dc412785e3fef466f7c674246fac9f26beafd02e62
    Image:         apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/datasafed:0.2.3
    Image ID:      apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/datasafed@sha256:7775e8184fbc833ee089b33427c4981bd7cd7d98cce5aeff1a9856b5de966b0f
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      /scripts/install-datasafed.sh /bin/datasafed
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 13 Jan 2026 15:36:29 +0800
      Finished:     Tue, 13 Jan 2026 15:36:29 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     0
      memory:  0
    Requests:
      cpu:        0
      memory:     0
    Environment:  <none>
    Mounts:
      /bin/datasafed from dp-datasafed-bin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6t5k8 (ro)
Containers:
  backupdata:
    Container ID:  containerd://b89b7275132c45f3aff9ed583e4268d70f146ff53764346c072833aad8dd439e
    Image:         apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kafkactl:v5.15.0
    Image ID:      apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kafkactl@sha256:074305bab77f757d6e14555baa473982d16e0235c1fed6596aa51192c8ee0f73
    Port:          <none>
    Host Port:     <none>
    Command:
      bash
      -c
      #!/bin/bash
      
      set -eo pipefail
      
      # Save backup status info file for syncing progress.
      # timeFormat: %Y-%m-%dT%H:%M:%SZ
      function DP_save_backup_status_info() {
        local totalSize=$1
        local startTime=$2
        local stopTime=$3
        local timeZone=$4
        local extras=$5
        local timeZoneStr=""
        if [ -n "${timeZone}" ]; then
          timeZoneStr=$(printf ',"timeZone":"%s"' "${timeZone}")
        fi
        if [ -z "${stopTime}" ]; then
          printf '{"totalSize":"%s"}' "${totalSize}" > "${DP_BACKUP_INFO_FILE}"
        elif [ -z "${startTime}" ]; then
          printf '{"totalSize":"%s","extras":[%s],"timeRange":{"end":"%s"%s}}' "${totalSize}" "${extras}" "${stopTime}" "${timeZoneStr}" > "${DP_BACKUP_INFO_FILE}"
        else
          printf '{"totalSize":"%s","extras":[%s],"timeRange":{"start":"%s","end":"%s"%s}}' "${totalSize}" "${extras}" "${startTime}" "${stopTime}" "${timeZoneStr}" > "${DP_BACKUP_INFO_FILE}"
        fi
      }
      
      # don't let kb's env affect kafkactl's config
      export TLS_ENABLED="false"
      # we'll use the internal listener to avoid using ssl
      export BROKERS="$DP_DB_HOST:9094"
      export PATH="$PATH:$DP_DATASAFED_BIN_PATH"
      export DATASAFED_BACKEND_BASE_PATH=${DP_BACKUP_BASE_PATH}
      
      if [[ $KB_KAFKA_SASL_ENABLE == "true" ]]; then
        echo "using sasl auth.."
        if [[ $KB_KAFKA_SASL_MECHANISMS != *"PLAIN"* ]]; then
          echo "unsupported KB_KAFKA_SASL_MECHANISMS: $KB_KAFKA_SASL_MECHANISMS"
          exit 1
        fi
        export SASL_ENABLED="true"
        export SASL_MECHANISM="plaintext"
        export SASL_USERNAME=$KAFKA_ADMIN_USER
        export SASL_PASSWORD=$KAFKA_ADMIN_PASSWORD
      fi
      
      #!/bin/bash
      
      # if the script exits with a non-zero exit code, touch a file to indicate that the backup failed,
      # the sync progress container will check this file and exit if it exists
      function handle_exit() {
        exit_code=$?
        if [ $exit_code -ne 0 ]; then
          echo "failed with exit code $exit_code"
          touch "${DP_BACKUP_INFO_FILE}.exit"
          exit $exit_code
        fi
      }
      
      trap handle_exit EXIT
      
      # topics.txt format is like:
      # (topic name)             (partitions)   (replication factor)
      # topic1                   1              1
      # topic2                   1              1
      #
      # We also ignore the __consumer_offsets topic, as offsets won't be backed up.
      echo "getting topics..."
      topic_list=$(kafkactl get topics | tail -n +2)
      if [[ -z $topic_list ]]; then
        echo "nothing to backup"
        DP_save_backup_status_info 0
        exit 0
      fi
      echo $topic_list | grep -v __consumer_offsets | datasafed push - topics.txt
      readarray -t topics < <(kafkactl get topics -o compact | grep -v  __consumer_offsets)
      
      for topic in "${topics[@]}"; do
        echo "backing up ${topic}..."
        kafkactl consume "${topic}" --from-beginning --print-keys --print-timestamps --exit --print-headers -o json-raw | datasafed push - "data/${topic}.json"
      done
      
      # use datasafed to get backup size
      # if we do not write into $DP_BACKUP_INFO_FILE, the backup job will stuck
      TOTAL_SIZE=$(datasafed stat / | grep TotalSize | awk '{print $2}')
      DP_save_backup_status_info "$TOTAL_SIZE"
      
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 13 Jan 2026 15:36:30 +0800
      Finished:     Tue, 13 Jan 2026 15:36:30 +0800
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:                        0
      memory:                     0
      vke.volcengine.com/eni-ip:  1
    Requests:
      cpu:                        0
      memory:                     0
      vke.volcengine.com/eni-ip:  1
    Environment Variables from:
      kafka-qtudyw-kafka-combine-env  ConfigMap  Optional: false
    Environment:
      KAFKA_ADMIN_USER:                                  <set to the key 'username' in secret 'kafka-qtudyw-kafka-combine-account-admin'>   Optional: false
      KAFKA_ADMIN_PASSWORD:                              <set to the key 'password' in secret 'kafka-qtudyw-kafka-combine-account-admin'>   Optional: false
      KAFKA_CLIENT_USER:                                 <set to the key 'username' in secret 'kafka-qtudyw-kafka-combine-account-client'>  Optional: false
      KAFKA_CLIENT_PASSWORD:                             <set to the key 'password' in secret 'kafka-qtudyw-kafka-combine-account-client'>  Optional: false
      BITNAMI_DEBUG:                                     true
      MY_POD_IP:                                          (v1:status.podIP)
      MY_POD_NAME:                                       dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7aj7zm5 (v1:metadata.name)
      MY_POD_HOST_IP:                                     (v1:status.hostIP)
      KAFKA_ENABLE_KRAFT:                                yes
      KAFKA_CFG_PROCESS_ROLES:                           broker,controller
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES:               CONTROLLER
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME:              INTERNAL
      KAFKA_CFG_LISTENERS:                               CONTROLLER://:9093,INTERNAL://:9094,CLIENT://:9092
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP:          CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT
      KAFKA_CFG_ADVERTISED_LISTENERS:                    INTERNAL://$(MY_POD_IP):9094,CLIENT://$(MY_POD_IP):9092
      KAFKA_CFG_INITIAL_BROKER_REGISTRATION_TIMEOUT_MS:  240000
      ALLOW_PLAINTEXT_LISTENER:                          yes
      JMX_PORT:                                          5555
      KAFKA_VOLUME_DIR:                                  /bitnami/kafka
      KAFKA_CFG_METADATA_LOG_DIR:                        /bitnami/kafka/metadata
      KAFKA_LOG_DIR:                                     /bitnami/kafka/data
      KAFKA_HEAP_OPTS:                                   -XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64
      SERVER_PROP_FILE:                                  /scripts/server.properties
      KAFKA_KRAFT_CLUSTER_ID:                            $(CLUSTER_UID)
      KAFKA_CFG_SUPER_USERS:                             User:$(KAFKA_ADMIN_USER)
      KAFKA_DYNAMIC_CREDENTIAL_FILE:                     /accounts/accounts-mount/accounts
      KB_BROKER_DIRECT_POD_ACCESS:                       false
      KB_KAFKA_ENABLE_SASL_SCRAM:                        false
      DP_BACKUP_NAME:                                    backup-default-kafka-qtudyw-20260113153628
      DP_TARGET_POD_NAME:                                kafka-qtudyw-kafka-combine-0
      DP_TARGET_POD_ROLE:                                
      DP_BACKUP_BASE_PATH:                               /default/kafka-qtudyw-9f9002ab-0128-4b34-9c9b-42030a976a04/kafka-combine/backup-default-kafka-qtudyw-20260113153628
      DP_BACKUP_INFO_FILE:                               /dp-manager/backup.info
      DP_TTL:                                            
      DP_DB_HOST:                                        kafka-qtudyw-kafka-combine-0.kafka-qtudyw-kafka-combine-headless
      DP_DB_PORT:                                        9092
      KB_CLUSTER_UID:                                    9f9002ab-0128-4b34-9c9b-42030a976a04
      KB_CLUSTER_NAME:                                   kafka-qtudyw
      KB_COMP_NAME:                                      kafka-combine
      KB_NAMESPACE:                                      default
      DP_DATASAFED_BIN_PATH:                             /bin/datasafed
    Mounts:
      /bin/datasafed from dp-datasafed-bin (rw)
      /dp-manager from manager-shared-volume (rw)
      /etc/datasafed from dp-datasafed-config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6t5k8 (ro)
  manager:
    Container ID:  containerd://cf6a5b6327f47c67c211ee1e019d21f4393da1942a5e3d66eb0bbbcb87b430ad
    Image:         apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools:1.1.0-beta.1
    Image ID:      apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools@sha256:5c9b8df31ded4c1b7f4a70d9f5edc7351e69322cd6afb1253a542c1647fed305
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
    Args:
      
      set -o errexit
      set -o nounset
      
      export PATH="$PATH:$DP_DATASAFED_BIN_PATH"
      export DATASAFED_BACKEND_BASE_PATH="$DP_BACKUP_BASE_PATH"
      
      backup_info_file="${DP_BACKUP_INFO_FILE}"
      sleep_seconds="${DP_CHECK_INTERVAL}"
      namespace="default"
      backup_name="backup-default-kafka-qtudyw-20260113153628"
      
      if [ "$sleep_seconds" -le 0 ]; then
        sleep_seconds=30
      fi
      
      exit_file="${backup_info_file}.exit"
      while true; do
        if [ -f "$exit_file" ]; then
          echo "exit file $exit_file exists, exit"
          exit 1
        fi
        if [ -f "$backup_info_file" ]; then
          break
        fi
        echo "backup info file not exists, wait for ${sleep_seconds}s"
        sleep "$sleep_seconds"
      done
      
      backup_info=$(cat "$backup_info_file")
      echo "backupInfo:${backup_info}"
      
      status="{\"status\":${backup_info}}"
      kubectl -n "$namespace" patch backups.dataprotection.kubeblocks.io "$backup_name" --subresource=status --type=merge --patch "${status}"
      
      # save the backup CR object to the backup repo
      kubectl -n "$namespace" get backups.dataprotection.kubeblocks.io "$backup_name" -o json | datasafed push - "/kubeblocks-backup.json"
      
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 13 Jan 2026 15:36:30 +0800
      Finished:     Tue, 13 Jan 2026 15:36:30 +0800
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     0
      memory:  0
    Requests:
      cpu:     0
      memory:  0
    Environment Variables from:
      kafka-qtudyw-kafka-combine-env  ConfigMap  Optional: false
    Environment:
      KAFKA_ADMIN_USER:                                  <set to the key 'username' in secret 'kafka-qtudyw-kafka-combine-account-admin'>   Optional: false
      KAFKA_ADMIN_PASSWORD:                              <set to the key 'password' in secret 'kafka-qtudyw-kafka-combine-account-admin'>   Optional: false
      KAFKA_CLIENT_USER:                                 <set to the key 'username' in secret 'kafka-qtudyw-kafka-combine-account-client'>  Optional: false
      KAFKA_CLIENT_PASSWORD:                             <set to the key 'password' in secret 'kafka-qtudyw-kafka-combine-account-client'>  Optional: false
      BITNAMI_DEBUG:                                     true
      MY_POD_IP:                                          (v1:status.podIP)
      MY_POD_NAME:                                       dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7aj7zm5 (v1:metadata.name)
      MY_POD_HOST_IP:                                     (v1:status.hostIP)
      KAFKA_ENABLE_KRAFT:                                yes
      KAFKA_CFG_PROCESS_ROLES:                           broker,controller
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES:               CONTROLLER
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME:              INTERNAL
      KAFKA_CFG_LISTENERS:                               CONTROLLER://:9093,INTERNAL://:9094,CLIENT://:9092
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP:          CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT
      KAFKA_CFG_ADVERTISED_LISTENERS:                    INTERNAL://$(MY_POD_IP):9094,CLIENT://$(MY_POD_IP):9092
      KAFKA_CFG_INITIAL_BROKER_REGISTRATION_TIMEOUT_MS:  240000
      ALLOW_PLAINTEXT_LISTENER:                          yes
      JMX_PORT:                                          5555
      KAFKA_VOLUME_DIR:                                  /bitnami/kafka
      KAFKA_CFG_METADATA_LOG_DIR:                        /bitnami/kafka/metadata
      KAFKA_LOG_DIR:                                     /bitnami/kafka/data
      KAFKA_HEAP_OPTS:                                   -XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64
      SERVER_PROP_FILE:                                  /scripts/server.properties
      KAFKA_KRAFT_CLUSTER_ID:                            $(CLUSTER_UID)
      KAFKA_CFG_SUPER_USERS:                             User:$(KAFKA_ADMIN_USER)
      KAFKA_DYNAMIC_CREDENTIAL_FILE:                     /accounts/accounts-mount/accounts
      KB_BROKER_DIRECT_POD_ACCESS:                       false
      KB_KAFKA_ENABLE_SASL_SCRAM:                        false
      DP_BACKUP_NAME:                                    backup-default-kafka-qtudyw-20260113153628
      DP_TARGET_POD_NAME:                                kafka-qtudyw-kafka-combine-0
      DP_TARGET_POD_ROLE:                                
      DP_BACKUP_BASE_PATH:                               /default/kafka-qtudyw-9f9002ab-0128-4b34-9c9b-42030a976a04/kafka-combine/backup-default-kafka-qtudyw-20260113153628
      DP_BACKUP_INFO_FILE:                               /dp-manager/backup.info
      DP_TTL:                                            
      DP_DB_HOST:                                        kafka-qtudyw-kafka-combine-0.kafka-qtudyw-kafka-combine-headless
      DP_DB_PORT:                                        9092
      KB_CLUSTER_UID:                                    9f9002ab-0128-4b34-9c9b-42030a976a04
      KB_CLUSTER_NAME:                                   kafka-qtudyw
      KB_COMP_NAME:                                      kafka-combine
      KB_NAMESPACE:                                      default
      DP_DATASAFED_BIN_PATH:                             /bin/datasafed
      DP_CHECK_INTERVAL:                                 5
    Mounts:
      /bin/datasafed from dp-datasafed-bin (rw)
      /dp-manager from manager-shared-volume (rw)
      /etc/datasafed from dp-datasafed-config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6t5k8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  manager-shared-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  dp-datasafed-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tool-config-backuprepo-kbcli-test-q46rcm
    Optional:    false
  dp-datasafed-bin:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-6t5k8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 kb-controller=true:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                           Age                    From               Message
  ----     ------                           ----                   ----               -------
  Normal   Scheduled                        7m24s                  default-scheduler  Successfully assigned default/dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7aj7zm5 to 192.168.0.219
  Normal   Pulled                           7m23s                  kubelet            Container image "apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/datasafed:0.2.3" already present on machine
  Normal   Created                          7m23s                  kubelet            Created container dp-copy-datasafed
  Normal   Started                          7m23s                  kubelet            Started container dp-copy-datasafed
  Warning  FailedToRetrieveImagePullSecret  7m22s (x2 over 7m23s)  kubelet            Unable to retrieve some image pull secrets (kbcli-test-registry-key); attempting to pull the image may not succeed.
  Normal   Pulled                           7m22s                  kubelet            Container image "apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kafkactl:v5.15.0" already present on machine
  Normal   Created                          7m22s                  kubelet            Created container backupdata
  Normal   Started                          7m22s                  kubelet            Started container backupdata
  Normal   Pulled                           7m22s                  kubelet            Container image "apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools:1.1.0-beta.1" already present on machine
  Normal   Created                          7m22s                  kubelet            Created container manager
  Normal   Started                          7m22s                  kubelet            Started container manager

Logs from the failed pod:

kubectl logs dp-backup-0-backup-default-kafka-qtudyw-20260113153628-d7aj7zm5
Defaulted container "backupdata" out of: backupdata, manager, dp-copy-datasafed (init)
getting topics...
failed with exit code 1
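A guess at the root cause, from reading the backupdata script above (not verified against the live cluster): the script runs with `set -eo pipefail`, and `echo $topic_list` (unquoted) flattens the multi-line `kafkactl get topics` output onto a single line. Once the internal `__consumer_offsets` topic exists, which happens after the console consumer commits offsets, that single flattened line contains `__consumer_offsets`, so `grep -v __consumer_offsets` emits nothing and exits 1, and pipefail aborts the script right after `getting topics...`. A minimal sketch of the quoting difference (with a simulated topic list, since no broker is involved):

```shell
set -o pipefail

# Simulated `kafkactl get topics | tail -n +2` output once the internal
# __consumer_offsets topic exists alongside the user topic.
topic_list=$'__consumer_offsets 50 1\ntopic-xallj 1 1'

# Unquoted expansion: word splitting collapses both rows onto ONE line,
# so grep -v filters the whole thing away and exits 1. Under the backup
# script's `set -eo pipefail`, that pipeline failure aborts the job.
unquoted=$(echo $topic_list | grep -v __consumer_offsets)
unquoted_status=$?

# Quoted expansion keeps the line boundaries; only the internal topic is
# filtered and the user topic survives.
quoted=$(echo "$topic_list" | grep -v __consumer_offsets)
quoted_status=$?

echo "unquoted: status=$unquoted_status output='$unquoted'"
echo "quoted:   status=$quoted_status output='$quoted'"
```

If this reading is right, quoting the expansion (`echo "$topic_list"`) or piping `kafkactl get topics` straight into `grep -v` would avoid the abort. It would also explain why the first backup (no topics, no consumer yet) succeeded while this one failed.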

Expected behavior
The second backup should complete successfully, like the first; creating a topic and consuming a message should not cause subsequent `topics` backups to fail.




Labels

kind/bug (Something isn't working)
