
[Bug]: Error deleting StatefulSet during Strimzi and Kafka upgrades. Conversion from StatefulSet to StrimziPodSet #8351

@charris-ca

Description

Bug Description

Upgrading Strimzi from 0.28 to 0.33.2 fails while converting the cluster from StatefulSets to StrimziPodSets. I first hit the error on the ZooKeeper StatefulSet and worked past it by deleting the StatefulSet manually. The StrimziPodSet took over after that, but all ZooKeeper pods went down and then restarted.

I am worried that deleting the Kafka StatefulSet will do the same thing and cause an outage for our Kafka instances. How do we get around the operator being unable to delete the StatefulSet? I have tried issuing a StatefulSet delete with --cascade=orphan, but that command hangs and never finishes.
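A delete that hangs like this usually means a finalizer is still set on the StatefulSet, so the API server keeps the object in Terminating until the finalizer is cleared. A minimal sketch for inspecting that, using the StatefulSet name from the status message below (namespace and resource names are taken from this report; adjust for your cluster):

```shell
# Check whether a finalizer is blocking deletion of the StatefulSet:
kubectl -n kafka get statefulset default-kafka-cluster-1-kafka \
  -o jsonpath='{.metadata.finalizers}'

# Issue the orphan delete without blocking on completion; the pods are
# left running so the StrimziPodSet can adopt them:
kubectl -n kafka delete statefulset default-kafka-cluster-1-kafka \
  --cascade=orphan --wait=false
```

With `--wait=false`, kubectl returns immediately instead of hanging, and you can then watch the StatefulSet object until it is actually gone.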

status:
  conditions:
    - lastTransitionTime: '2023-04-05T03:01:36.365369407Z'
      message: >-
        "observe deletion of StatefulSet kafka/default-kafka-cluster-1-kafka"
        timed out after 300000ms
      reason: TimeoutException
      status: 'True'
      type: NotReady
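To watch this condition during the upgrade, the Kafka custom resource can be queried directly. This is a sketch; the cluster name `default-kafka-cluster-1` is an assumption inferred from the StatefulSet name `default-kafka-cluster-1-kafka` in the message above:

```shell
# Print the message of the NotReady condition on the Kafka resource
# (cluster name is assumed; substitute your own):
kubectl -n kafka get kafka default-kafka-cluster-1 \
  -o jsonpath='{.status.conditions[?(@.type=="NotReady")].message}'
```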

Steps to reproduce

  1. Upgrade the Strimzi Operator
  2. Watch the Kafka reconciliation
  3. The upgrade starts
  4. The ZooKeeper StatefulSet deletion times out
  5. The Kafka StatefulSet deletion times out

Expected behavior

No response

Strimzi version

0.33.2

Kubernetes version

Kubernetes 1.22

Installation method

No response

Infrastructure

No response

Configuration files and logs

No response

Additional context

No response
