KRaft Support
We would like to add KRaft support to Koperator to adapt to this important change in Kafka itself. There are quite a few notable differences in the KRaft world compared to the ZooKeeper world:
| What | Why | Impact on Koperator |
|---|---|---|
| There are three potential roles that a Kafka node can take: broker, controller, or both (e.g. broker+controller, which is not recommended for production per Kafka's own guidance) | The controller processes replace the ZooKeeper nodes in managing the cluster metadata | "broker" is no longer generic enough to represent any Kafka node in the Kafka cluster, so the KafkaCluster API will need to be updated to reflect this fact. |
| The DescribeClusterRequest API no longer exposes the active controller (in fact, any of the controller nodes); source code reference: click me. | Kafka tries to isolate controller access from the admin client in the KRaft world. Old admin clients that send requests directly to the controller are given a random broker ID and rely on that random broker to forward the original requests. | The determineControllerId logic is essentially deprecated in the KRaft world, and therefore the reorderBrokers logic can no longer take the controller ID into consideration (see the sketch below the table). In fact, in the KRaft world, re-electing the active controller is not as expensive as it was in the ZooKeeper world, because the non-active controllers keep up-to-date metadata in memory and on disk (these are called "hot stand-by" controllers). |
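As a rough, hypothetical sketch of that impact (the `Broker` struct and the `reorderBrokers` signature below are illustrative stand-ins, not Koperator's actual implementation), a KRaft-aware reordering could simply skip the controller-last special case:

```go
package main

import "fmt"

// Broker is a minimal stand-in for the broker descriptor used by the
// reordering logic; the field names here are illustrative only.
type Broker struct {
	ID int32
}

// reorderBrokers sketches how the restart/reconcile ordering could change in
// KRaft mode: in ZooKeeper mode the active controller (controllerID) is moved
// to the end of the slice, while in KRaft mode the controller ID reported by
// the admin API is not meaningful, so the original order is kept.
func reorderBrokers(brokers []Broker, kraftMode bool, controllerID int32) []Broker {
	if kraftMode {
		// Re-electing the active controller is cheap in KRaft (the hot
		// stand-by controllers already hold up-to-date metadata), so no
		// special ordering is applied here.
		return brokers
	}

	reordered := make([]Broker, 0, len(brokers))
	var controller *Broker
	for i := range brokers {
		if brokers[i].ID == controllerID {
			controller = &brokers[i]
			continue
		}
		reordered = append(reordered, brokers[i])
	}
	if controller != nil {
		reordered = append(reordered, *controller)
	}
	return reordered
}

func main() {
	brokers := []Broker{{ID: 0}, {ID: 1}, {ID: 2}}
	fmt.Println(reorderBrokers(brokers, false, 1)) // ZooKeeper mode: broker 1 (controller) goes last
	fmt.Println(reorderBrokers(brokers, true, -1)) // KRaft mode: order unchanged
}
```

The point of the sketch is only the branching: whatever the real reordering does today around the controller ID becomes a no-op once the cluster runs in KRaft mode.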
How should the KafkaCluster API be updated to reflect the changes in the KRaft world?
1. Leave the `Broker` struct as-is and add a couple of structs named `Controllers` and `CombinedNodes` to the `KafkaClusterSpec` to represent the corresponding nodes:

```go
type KafkaClusterSpec struct {
	// ControllerMode specifies the Kafka cluster in either ZooKeeper or KRaft mode.
	// +kubebuilder:validation:Enum=kraft;zookeeper
	// +optional
	ControllerMode ControllerMode `json:"controllerMode,omitempty"`

	// other existing fields are intentionally ignored here

	Brokers       []Broker       `json:"brokers"`
	Controllers   []Controller   `json:"controllers,omitempty"`
	CombinedNodes []CombinedNode `json:"combinedNodes,omitempty"`
}

// Controller defines basic configurations for controllers (in KRaft)
type Controller struct {
	Id               int32             `json:"id"`
	ReadOnlyConfig   string            `json:"readOnlyConfig,omitempty"`
	ControllerConfig *ControllerConfig `json:"controllerConfig"`
}

type ControllerConfig struct {
	// Use the existing BrokerConfig as a blueprint to add/remove corresponding fields from the BrokerConfig
	// reference of BrokerConfig: https://github.com/banzaicloud/koperator/blob/master/api/v1beta1/kafkacluster_types.go#L19
}

// Note: need to find a way to merge the BrokerConfig and ControllerConfig nicely
type CombinedNode struct {
	Id               int32             `json:"id"`
	ReadOnlyConfig   string            `json:"readOnlyConfig,omitempty"`
	BrokerConfig     *BrokerConfig     `json:"brokerConfig,omitempty"`
	ControllerConfig *ControllerConfig `json:"controllerConfig,omitempty"`
}
```
2. Extract the common configurations that are applicable to both brokers and controllers from the current `BrokerConfig`, and allow users to mark a broker node as a combined-role node (mainly for development usage):

```go
type KafkaClusterSpec struct {
	// ControllerMode specifies the Kafka cluster in either ZooKeeper or KRaft mode.
	// +kubebuilder:validation:Enum=kraft;zookeeper
	// +optional
	ControllerMode ControllerMode `json:"controllerMode,omitempty"`

	// other existing fields are intentionally ignored here

	Brokers     []Broker     `json:"brokers"`
	Controllers []Controller `json:"controllers,omitempty"`
}

type Broker struct {
	Id                int32         `json:"id"`
	BrokerConfigGroup string        `json:"brokerConfigGroup,omitempty"`
	ReadOnlyConfig    string        `json:"readOnlyConfig,omitempty"`
	BrokerConfig      *BrokerConfig `json:"brokerConfig,omitempty"`

	// CombinedNode indicates if this broker node is a combined (broker + controller) node in KRaft mode. If set to true,
	// Koperator assumes the ReadOnlyConfig includes the read-only configurations for both the controller and broker processes.
	// This defaults to false; if set to true in ZooKeeper mode, Koperator will ignore this configuration.
	// +optional
	CombinedNode bool `json:"combinedNode,omitempty"`
}

// BrokerConfig defines the broker configurations
type BrokerConfig struct {
	CommonConfig         `json:",inline"`
	BrokerSpecificConfig `json:",inline"`
}

// BrokerSpecificConfig defines the configurations that are only applicable to brokers
type BrokerSpecificConfig struct {
	BrokerIngressMapping    []string               `json:"brokerIngressMapping,omitempty"`
	Config                  string                 `json:"config,omitempty"`
	MetricsReporterImage    string                 `json:"metricsReporterImage,omitempty"`
	NetworkConfig           *NetworkConfig         `json:"networkConfig,omitempty"`
	NodePortExternalIP      map[string]string      `json:"nodePortExternalIP,omitempty"`
	NodePortNodeAddressType corev1.NodeAddressType `json:"nodePortNodeAddressType,omitempty"`
}

// Controller represents "controller" nodes in KRaft. This is not applicable to ZooKeeper mode
type Controller struct {
	Id               int32             `json:"id"`
	ReadOnlyConfig   string            `json:"readOnlyConfig,omitempty"`
	ControllerConfig *ControllerConfig `json:"controllerConfig,omitempty"`
}

// ControllerConfig defines the controller configurations in KRaft. This section is ignored in ZooKeeper mode.
type ControllerConfig struct {
	CommonConfig             `json:",inline"`
	ControllerSpecificConfig `json:",inline"`
}

// ControllerSpecificConfig defines the controller-specific configurations in KRaft
type ControllerSpecificConfig struct {
}

// CommonConfig holds the common configurations that are applicable to both the "brokers" and "controllers" (in KRaft terms).
// In ZooKeeper mode, this is just a subset of the old BrokerConfig
type CommonConfig struct {
	Affinity               *corev1.Affinity              `json:"affinity,omitempty"`
	Annotations            map[string]string             `json:"annotations,omitempty"`
	Containers             []corev1.Container            `json:"containers,omitempty"`
	Envs                   []corev1.EnvVar               `json:"envs,omitempty"`
	Image                  string                        `json:"image,omitempty"`
	ImagePullSecrets       []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
	InitContainers         []corev1.Container            `json:"initContainers,omitempty"`
	KafkaHeapOpts          string                        `json:"kafkaHeapOpts,omitempty"`
	KafkaJVMPerfOpts       string                        `json:"kafkaJvmPerfOpts,omitempty"`
	Labels                 map[string]string             `json:"labels,omitempty"`
	Log4jConfig            string                        `json:"log4jConfig,omitempty"`
	NodeSelector           map[string]string             `json:"nodeSelector,omitempty"`
	PodSecurityContext     *corev1.PodSecurityContext    `json:"podSecurityContext,omitempty"`
	PriorityClassName      string                        `json:"priorityClassName,omitempty"`
	Resources              *corev1.ResourceRequirements  `json:"resourceRequirements,omitempty"`
	ServiceAccountName     string                        `json:"serviceAccountName,omitempty"`
	SecurityContext        *corev1.SecurityContext       `json:"securityContext,omitempty"`
	StorageConfigs         []StorageConfig               `json:"storageConfigs,omitempty"`
	TerminationGracePeriod *int64                        `json:"terminationGracePeriodSeconds,omitempty"`
	Tolerations            []corev1.Toleration           `json:"tolerations,omitempty"`
	VolumeMounts           []corev1.VolumeMount          `json:"volumeMounts,omitempty"`
	Volumes                []corev1.Volume               `json:"volumes,omitempty"`
}
```
Chosen option: 2
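For illustration only, assuming the option-2 types sketched above (and assuming `ControllerMode` is a string-based type whose KRaft value is `"kraft"`, per the kubebuilder enum marker), a KRaft-mode spec with a plain broker, a combined broker+controller node, and one dedicated controller could be declared roughly like this; the IDs and the config-group name are made up:

```go
spec := KafkaClusterSpec{
	// Assumed enum value, following the +kubebuilder:validation:Enum=kraft;zookeeper marker above.
	ControllerMode: "kraft",
	Brokers: []Broker{
		// Plain broker node.
		{Id: 0, BrokerConfigGroup: "default"},
		// Combined broker+controller node; its ReadOnlyConfig is expected to carry
		// the read-only settings for both the broker and the controller process.
		{Id: 1, BrokerConfigGroup: "default", CombinedNode: true},
	},
	Controllers: []Controller{
		// Dedicated controller node.
		{Id: 100, ControllerConfig: &ControllerConfig{}},
	},
}
_ = spec
```

In ZooKeeper mode the same spec shape still works: the `Controllers` list stays empty and, per the field comments above, a `CombinedNode: true` flag would simply be ignored.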
How should the non-broker nodes (e.g. controller or combined nodes) be deployed and managed by Koperator?