Bug Description
Upgrading Strimzi from 0.28 to 0.33.2 errors when the operator tries to convert pods from StatefulSet to StrimziPodSet. I also received this error on the ZooKeeper pod set and worked past it by manually deleting the StatefulSet. The StrimziPodSet took over after that, but all ZooKeeper pods went down and then restarted.
I am worried that deleting the Kafka StatefulSet will do the same and cause an outage for our Kafka instances. How do we get around the operator not being able to delete the StatefulSet? I have tried issuing a StatefulSet delete with --cascade=orphan, but that command hangs and never finishes.
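For reference, this is roughly the orphan-cascade delete I attempted (the StatefulSet name and `kafka` namespace are taken from the status below; adjust for your cluster). This is a sketch of the workaround, not a recommended procedure:

```shell
# Delete the StatefulSet object but leave its pods running (orphaned),
# so the StrimziPodSet controller can adopt them without downtime.
kubectl delete statefulset default-kafka-cluster-1-kafka \
  -n kafka --cascade=orphan

# If the delete hangs, check for finalizers blocking removal:
kubectl get statefulset default-kafka-cluster-1-kafka \
  -n kafka -o jsonpath='{.metadata.finalizers}'
```

In my case the delete command itself hangs and never finishes, which is part of what I am asking about.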
status:
  conditions:
    - lastTransitionTime: '2023-04-05T03:01:36.365369407Z'
      message: >-
        "observe deletion of StatefulSet kafka/default-kafka-cluster-1-kafka"
        timed out after 300000ms
      reason: TimeoutException
      status: 'True'
      type: NotReady
Steps to reproduce
- Upgrade Strimzi Operator
- View Kafka Reconcile
- Upgrade starts
- ZooKeeper StatefulSet delete times out
- Kafka StatefulSet delete times out
Expected behavior
No response
Strimzi version
0.33.2
Kubernetes version
Kubernetes 1.22
Installation method
No response
Infrastructure
No response
Configuration files and logs
No response
Additional context
No response