@redscholar redscholar commented Sep 23, 2025

What type of PR is this?

/kind feature

What this PR does / why we need it:

This PR adds support for etcd scaling:

  • scaling up etcd:
  1. Define the etcd nodes in inventory.yaml:
apiVersion: kubekey.kubesphere.io/v1
kind: Inventory
metadata:
  name: default
spec:
  hosts: # you can set all nodes here, or assign nodes to specific groups.
    node1:
      connector:
        host: 172.16.66.7
      internal_ipv4: 172.16.66.7
    node2:
      connector:
        host: 172.16.66.3
      internal_ipv4: 172.16.66.3
    node3:
      connector:
        host: 172.16.66.4
      internal_ipv4: 172.16.66.4
  groups:
    # all kubernetes nodes.
    k8s_cluster:
      groups:
        - kube_control_plane
        - kube_worker
    # control_plane nodes
    kube_control_plane:
      hosts:
        - node1
    # worker nodes
    kube_worker:
      hosts:
        - node1
    # etcd nodes when etcd_deployment_type is external
    etcd:
      hosts:
        - node1
  2. Create the cluster from this inventory.yaml:
    kk create cluster -i inventory.yaml
  3. Add the new etcd nodes in inventory.yaml:
apiVersion: kubekey.kubesphere.io/v1
kind: Inventory
metadata:
  name: default
spec:
  hosts: # you can set all nodes here, or assign nodes to specific groups.
    node1:
      connector:
        host: 172.16.66.7
      internal_ipv4: 172.16.66.7
    node2:
      connector:
        host: 172.16.66.3
      internal_ipv4: 172.16.66.3
    node3:
      connector:
        host: 172.16.66.4
      internal_ipv4: 172.16.66.4
  groups:
    # all kubernetes nodes.
    k8s_cluster:
      groups:
        - kube_control_plane
        - kube_worker
    # control_plane nodes
    kube_control_plane:
      hosts:
        - node1
        - node2
        - node3
    # worker nodes
    kube_worker:
      hosts:
        - node1
        - node2
        - node3
    # etcd nodes when etcd_deployment_type is external
    etcd:
      hosts:
        - node1
        - node2
        - node3
  4. Scale up etcd:
    kk add nodes -i inventory.yaml
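Conceptually, the scale-up step boils down to diffing the etcd group between the old and new inventory and joining the hosts that are new. The sketch below is a hypothetical illustration of that diff, not KubeKey's actual implementation; the inventories are inlined as dicts rather than parsed from inventory.yaml.

```python
# Hypothetical sketch: which hosts would "kk add nodes" treat as new etcd
# members? Diff the etcd group of the old and new Inventory specs.

def new_etcd_members(old_inventory: dict, new_inventory: dict) -> list:
    """Return etcd hosts present in the new inventory but not the old one."""
    old = set(old_inventory["spec"]["groups"]["etcd"]["hosts"])
    new = new_inventory["spec"]["groups"]["etcd"]["hosts"]
    # Preserve inventory order so members are joined deterministically.
    return [h for h in new if h not in old]

# The two inventories from the walkthrough above, reduced to the etcd group:
old_inv = {"spec": {"groups": {"etcd": {"hosts": ["node1"]}}}}
new_inv = {"spec": {"groups": {"etcd": {"hosts": ["node1", "node2", "node3"]}}}}

print(new_etcd_members(old_inv, new_inv))  # ['node2', 'node3']
```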
  • scaling down etcd:
  1. Define the etcd nodes in inventory.yaml:
apiVersion: kubekey.kubesphere.io/v1
kind: Inventory
metadata:
  name: default
spec:
  hosts: # you can set all nodes here, or assign nodes to specific groups.
    node1:
      connector:
        host: 172.16.66.7
      internal_ipv4: 172.16.66.7
    node2:
      connector:
        host: 172.16.66.3
      internal_ipv4: 172.16.66.3
    node3:
      connector:
        host: 172.16.66.4
      internal_ipv4: 172.16.66.4
  groups:
    # all kubernetes nodes.
    k8s_cluster:
      groups:
        - kube_control_plane
        - kube_worker
    # control_plane nodes
    kube_control_plane:
      hosts:
        - node1
        - node2
        - node3
    # worker nodes
    kube_worker:
      hosts:
        - node1
        - node2
        - node3
    # etcd nodes when etcd_deployment_type is external
    etcd:
      hosts:
        - node1
        - node2
        - node3
  2. Create the cluster from this inventory.yaml:
    kk create cluster -i inventory.yaml
  3. Scale down etcd:
    kk delete nodes -i inventory.yaml node2 node3
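When scaling down, etcd quorum must hold throughout, so members are best removed one at a time. The sketch below is a hypothetical planning helper (quorum arithmetic only, no etcd calls, and not KubeKey's code) showing why removing node2 and node3 sequentially keeps the cluster healthy.

```python
# Hypothetical sketch: plan a one-at-a-time removal order for
# "kk delete nodes node2 node3" and report, after each step, the remaining
# cluster size and the quorum it needs (n // 2 + 1).

def removal_plan(members: list, to_remove: list) -> list:
    """Return (removed_member, remaining_size, quorum_needed) per step."""
    remaining = list(members)
    plan = []
    for victim in to_remove:
        if victim not in remaining:
            raise ValueError(f"{victim} is not an etcd member")
        remaining.remove(victim)
        if not remaining:
            raise ValueError("cannot remove the last etcd member")
        plan.append((victim, len(remaining), len(remaining) // 2 + 1))
    return plan

# Scaling the example cluster from 3 members down to 1:
print(removal_plan(["node1", "node2", "node3"], ["node2", "node3"]))
# [('node2', 2, 2), ('node3', 1, 1)]
```

After removing node2, the 2-member cluster still needs both members for quorum, which is why removals are sequenced rather than done in bulk.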

Test results

Which issue(s) this PR fixes:

Fixes #2728
Fixes #2742

Special notes for reviewers:

Does this PR introduce a user-facing change?

fix: support scaling down etcd

Additional documentation, usage docs, etc.:


@kubesphere-prow kubesphere-prow bot added the release-note, kind/feature, and approved labels Sep 23, 2025
@redscholar redscholar changed the title from "fix: scaling down etcd" to "fix: support scaling down etcd" Sep 23, 2025
@kubesphere-prow kubesphere-prow bot added the size/L label (100-499 changed lines, ignoring generated files) Sep 23, 2025
@redscholar redscholar force-pushed the feature branch 8 times, most recently from 31f9dd6 to 4a52909 on September 25, 2025 08:30
@redscholar redscholar changed the title from "fix: support scaling down etcd" to "fix: support scaling up/down etcd" Sep 25, 2025
@redscholar redscholar force-pushed the feature branch 6 times, most recently from 1f37627 to d9ed341 on September 26, 2025 08:53
@redscholar redscholar force-pushed the feature branch 5 times, most recently from 8bed8b8 to d4b84be on September 26, 2025 12:03
@redscholar redscholar force-pushed the feature branch 8 times, most recently from 799e9ce to 44b6b00 on November 18, 2025 02:37
@redscholar redscholar force-pushed the feature branch 3 times, most recently from 7fe1b8c to bf2eeee on November 19, 2025 09:02
@kubesphere-prow kubesphere-prow bot added the size/XL label (500-999 changed lines, ignoring generated files) and removed the size/L label Nov 19, 2025
@redscholar redscholar force-pushed the feature branch 8 times, most recently from a99b509 to 5763df7 on November 21, 2025 09:57
Signed-off-by: redscholar <[email protected]>
sonarqubecloud bot commented Dec 4, 2025

Quality Gate failed

Failed conditions
1 Security Hotspot

See analysis details on SonarQube Cloud


Labels

approved · do-not-merge/hold · kind/feature · release-note · size/XL