msg="Error getting volume snapshotter for volume snapshot location" #183

@jadsy2107

Description
I've set up MinIO and it's working correctly for backups, just not for the volumes: the PVC data isn't getting backed up.

time="2022-11-16T01:41:39Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/npm error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:192" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:470" name=pvc-bb7f6461-315f-4976-8fdd-100e5139fb21 namespace= persistentVolume=pvc-bb7f6461-315f-4976-8fdd-100e5139fb21 resource=persistentvolumes volumeSnapshotLocation=default
time="2022-11-16T01:41:39Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/npm error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:192" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:470" name=pvc-e383c200-6da8-416b-95c0-a6365eaa7961 namespace= persistentVolume=pvc-e383c200-6da8-416b-95c0-a6365eaa7961 resource=persistentvolumes volumeSnapshotLocation=default
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.2.1,openebs/velero-plugin:1.9.0 \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=true \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<IP OF MINIO>:9000
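
After installing, it is worth confirming that the Velero server actually loaded both plugins before debugging further. A quick check (run against the cluster; output will vary, and the `deploy=velero` label is the chart's usual convention):

```shell
# List the plugins registered with the Velero server. Both the AWS
# object-store plugin and the OpenEBS cstor volume snapshotter should
# appear here; if one is missing, the install step failed silently.
velero plugin get

# Confirm the Velero pod is running after the plugin init containers ran.
kubectl -n velero get pods -l deploy=velero
```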

My snapshot location follows the example 06-volumesnapshotlocation.yaml, so I applied the following to the cluster:

---
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: velero
    provider: aws
    region: minio
    namespace: openebs
    restoreAllIncrementalSnapshots: "false"
    autoSetTargetIP: "true"
    restApiTimeout: 1m
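
The "Error fetching OpenEBS rest client address" in the log suggests the cstor plugin could not discover an OpenEBS REST endpoint in the namespace given in the snapshot-location config. A first diagnostic step (the label selector below is an assumption based on the legacy maya-apiserver service; what matters is whether any reachable OpenEBS API service exists in the `openebs` namespace):

```shell
# List services in the namespace the VolumeSnapshotLocation points at.
kubectl -n openebs get svc

# If the plugin expects the legacy maya-apiserver REST API (an assumption
# here), its absence under the CSI-based cstor-operators chart would
# explain the error.
kubectl -n openebs get svc -l openebs.io/component-name=maya-apiserver-svc
```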

Then I try to take a backup:

velero create backup npm --include-namespaces npm --snapshot-volumes
velero backup logs npm|grep error
time="2022-11-16T02:12:43Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/npm error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:192" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:470" name=pvc-bb7f6461-315f-4976-8fdd-100e5139fb21 namespace= persistentVolume=pvc-bb7f6461-315f-4976-8fdd-100e5139fb21 resource=persistentvolumes volumeSnapshotLocation=default
time="2022-11-16T02:12:43Z" level=error msg="Error getting volume snapshotter for volume snapshot location" backup=velero/npm error="rpc error: code = Unknown desc = Error fetching OpenEBS rest client address" error.file="/home/travis/gopath/src/github.com/openebs/velero-plugin/pkg/cstor/cstor.go:192" error.function="github.com/openebs/velero-plugin/pkg/cstor.(*Plugin).Init" logSource="pkg/backup/item_backupper.go:470" name=pvc-e383c200-6da8-416b-95c0-a6365eaa7961 namespace= persistentVolume=pvc-e383c200-6da8-416b-95c0-a6365eaa7961 resource=persistentvolumes volumeSnapshotLocation=default

Only the metadata is backed up, which is great, but I need the volume data too!
(Screenshot: Screen Shot 2022-11-16 at 1 15 03 pm)
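
To see whether Velero attempted volume snapshots at all for this backup, rather than skipping them, the backup itself can be inspected (standard Velero commands; output depends on the cluster):

```shell
# Per-resource detail, including the volume snapshot section at the end.
velero backup describe npm --details

# Inspect the Backup custom resource directly for any snapshot status.
kubectl -n velero get backups.velero.io npm -o yaml
```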

I installed OpenEBS cStor via Helm with these values:

# Default values for cstor-operators.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

release:
  version: "3.4.0"

# If false, openebs NDM sub-chart will not be installed
openebsNDM:
  enabled: true

rbac:
  # rbac.create: `true` if rbac resources should be created
  create: true
  # rbac.pspEnabled: `true` if PodSecurityPolicy resources should be created
  pspEnabled: false

imagePullSecrets:
# - name: "image-pull-secret"

cspcOperator:
  componentName: cspc-operator
  poolManager:
    image:
      registry:
      repository: openebs/cstor-pool-manager
      tag: 3.4.0
  cstorPool:
    image:
      registry:
      repository: openebs/cstor-pool
      tag: 3.4.0
  cstorPoolExporter:
    image:
      registry:
      repository: openebs/m-exporter
      tag: 3.4.0
  image:
    # Make sure that the registry name ends with a '/'.
    # For example: quay.io/ is a correct value here and quay.io is incorrect.
    registry:
    repository: openebs/cspc-operator
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 3.4.0
  annotations: {}
  resyncInterval: "30"
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  resources: {}
  securityContext: {}
  baseDir: "/var/openebs"
  sparseDir: "/var/openebs/sparse"

cvcOperator:
  componentName: cvc-operator
  target:
    image:
      registry:
      repository: openebs/cstor-istgt
      tag: 3.4.0
  volumeMgmt:
    image:
      registry:
      repository: openebs/cstor-volume-manager
      tag: 3.4.0
  volumeExporter:
    image:
      registry:
      repository: openebs/m-exporter
      tag: 3.4.0
  image:
    # Make sure that the registry name ends with a '/'.
    # For example: quay.io/ is a correct value here and quay.io is incorrect.
    registry:
    repository: openebs/cvc-operator
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 3.4.0
  annotations: {}
  resyncInterval: "30"
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  resources: {}
  securityContext: {}
  baseDir: "/var/openebs"
  logLevel: "2"

csiController:
  priorityClass:
    create: true
    name: cstor-csi-controller-critical
    value: 900000000
  componentName: "openebs-cstor-csi-controller"
  logLevel: "5"
  resizer:
    name: "csi-resizer"
    image:
      # Make sure that the registry name ends with a '/'.
      # For example: quay.io/ is a correct value here and quay.io is incorrect.
      registry: k8s.gcr.io/
      repository: sig-storage/csi-resizer
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v1.2.0
  snapshotter:
    name: "csi-snapshotter"
    image:
      # Make sure that the registry name ends with a '/'.
      # For example: quay.io/ is a correct value here and quay.io is incorrect.
      registry: k8s.gcr.io/
      repository: sig-storage/csi-snapshotter
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v3.0.3
  snapshotController:
    name: "snapshot-controller"
    image:
      # Make sure that the registry name ends with a '/'.
      # For example: quay.io/ is a correct value here and quay.io is incorrect.
      registry: k8s.gcr.io/
      repository: sig-storage/snapshot-controller
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v3.0.3
  attacher:
    name: "csi-attacher"
    image:
      # Make sure that the registry name ends with a '/'.
      # For example: quay.io/ is a correct value here and quay.io is incorrect.
      registry: k8s.gcr.io/
      repository: sig-storage/csi-attacher
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v3.1.0
  provisioner:
    name: "csi-provisioner"
    image:
      # Make sure that the registry name ends with a '/'.
      # For example: quay.io/ is a correct value here and quay.io is incorrect.
      registry: k8s.gcr.io/
      repository: sig-storage/csi-provisioner
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v3.0.0
  annotations: {}
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  resources: {}
  securityContext: {}

cstorCSIPlugin:
  name: cstor-csi-plugin
  image:
    # Make sure that the registry name ends with a '/'.
    # For example: quay.io/ is a correct value here and quay.io is incorrect.
    registry:
    repository: openebs/cstor-csi-driver
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 3.4.0
  remount: "true"

csiNode:
  priorityClass:
    create: true
    name: cstor-csi-node-critical
    value: 900001000
  componentName: "openebs-cstor-csi-node"
  driverRegistrar:
    name: "csi-node-driver-registrar"
    image:
      registry: k8s.gcr.io/
      repository: sig-storage/csi-node-driver-registrar
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: v2.3.0
  logLevel: "5"
  updateStrategy:
    type: RollingUpdate
  annotations: {}
  podAnnotations: {}
  resources: {}
  # limits:
  #   cpu: 10m
  #   memory: 32Mi
  # requests:
  #   cpu: 10m
  #   memory: 32Mi
  ## Labels to be added to openebs-cstor-csi-node pods
  podLabels: {}
  # kubeletDir can be configured for k8s distributions where the kubelet
  # root dir is not /var/lib/kubelet/. For example, on MicroK8s set it to
  # /var/snap/microk8s/common/var/lib/kubelet/.
  kubeletDir: "/var/lib/kubelet/"
  nodeSelector: {}
  tolerations: []
  securityContext: {}

csiDriver:
  create: true
  podInfoOnMount: true
  attachRequired: false

admissionServer:
  componentName: cstor-admission-webhook
  image:
    # Make sure that the registry name ends with a '/'.
    # For example: quay.io/ is a correct value here and quay.io is incorrect.
    registry:
    repository: openebs/cstor-webhook
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 3.4.0
  failurePolicy: "Fail"
  annotations: {}
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  resources: {}
  securityContext: {}

serviceAccount:
  # Annotations to add to the service account
  annotations: {}
  cstorOperator:
    create: true
    name: openebs-cstor-operator
  csiController:
    # Specifies whether a service account should be created
    create: true
    name: openebs-cstor-csi-controller-sa
  csiNode:
    # Specifies whether a service account should be created
    create: true
    name: openebs-cstor-csi-node-sa

analytics:
  enabled: true
  # Specify in hours the duration after which a ping event needs to be sent.
  pingInterval: "24h"

cleanup:
  image:
    # Make sure that the registry name ends with a '/'.
    # For example: quay.io/ is a correct value here and quay.io is incorrect.
    registry:
    repository: bitnami/kubectl
    tag:
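
For reference, these values would typically be applied with something like the following (repo URL and chart name taken from the standard cstor-operators chart; the release name `openebs-cstor` matches the Helm annotations seen below and is otherwise an assumption):

```shell
# Add the OpenEBS cstor-operators chart repo and install with the
# values file above.
helm repo add openebs-cstor https://openebs.github.io/cstor-operators
helm repo update
helm install openebs-cstor openebs-cstor/cstor \
  --namespace openebs --create-namespace \
  -f values.yaml
```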

kubectl get VolumeSnapshotClass -o yaml

apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1
  deletionPolicy: Delete
  driver: cstor.csi.openebs.io
  kind: VolumeSnapshotClass
  metadata:
    annotations:
      meta.helm.sh/release-name: openebs-cstor
      meta.helm.sh/release-namespace: openebs
      snapshot.storage.kubernetes.io/is-default-class: "true"
    creationTimestamp: "2022-11-15T11:53:35Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: csi-cstor-snapshotclass
    resourceVersion: "4242"
    uid: 57d21003-068b-4fe5-87bc-a3b4f4118db0
kind: List
metadata:
  resourceVersion: ""
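
A csi-cstor-snapshotclass exists, but the openebs/velero-plugin path used above does not go through CSI. If the goal is CSI-based snapshots instead, Velero of this era required its separate CSI plugin and a feature flag; a hedged sketch reusing the install command from above (the v0.3.0 plugin tag is an assumption, pick the version matching the Velero server):

```shell
# Alternative route: let Velero drive snapshots through the CSI
# snapshotter and csi-cstor-snapshotclass rather than the OpenEBS
# cstor plugin's REST API.
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.2.1,velero/velero-plugin-for-csi:v0.3.0 \
    --bucket velero \
    --secret-file ./credentials-velero \
    --features=EnableCSI \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<IP OF MINIO>:9000
```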

Labels: Bug (issue/pr is a bug/fix to existing feature)