kubeone doesn't update NO_PROXY and no_proxy in kube-proxy DaemonSet and static pods #3310

Closed
@JKBGIT1

Description

What happened?

KubeOne didn't update the NO_PROXY and no_proxy environment variables in the kube-proxy DaemonSet and the static pods.

I built a cluster with 1 master and 1 worker node, using an HTTP proxy in the process. The cluster was built as expected. I then added another worker node to the staticWorkers.hosts array and appended its public and private IPs to the proxy.noProxy attribute. The new node was added to the Kubernetes cluster as expected; however, its public and private IPs weren't added to the NO_PROXY and no_proxy env variables in the kube-proxy DaemonSet or in the cluster's static pods.
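
For reference, here is one way to inspect the value the kube-proxy DaemonSet actually carries; a minimal check, assuming the proxy settings are injected as container env vars named NO_PROXY/no_proxy (as they appear in the describe output further down):

kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="NO_PROXY")].value}'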

Expected behavior

The NO_PROXY and no_proxy env variables should be updated every time the user changes the proxy.noProxy configuration in the KubeOne YAML manifest and runs kubeone apply.

How to reproduce the issue?

Create 3 VMs with any cloud provider. They must be connected through a private network and have public IPs.

Replace all the <> placeholders with real values. Run kubeone apply -m <path-to-the-below-config> to build a cluster that uses your HTTP proxy server.

apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
name: proxy-test
versions:
  kubernetes: 'v1.27.1'
features:
  coreDNS:
    replicas: 2
    deployPodDisruptionBudget: true
  nodeLocalDNS:
    deploy: false
clusterNetwork:
  cni:
    cilium:
      enableHubble: true
cloudProvider:
  none: {}
  external: false
apiEndpoint:
  host: '<master-public-IP>'
  port: 6443
controlPlane:
  hosts:
  - publicAddress: '<master-public-IP>'
    privateAddress: '<master-private-IP>'
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: master
    isLeader: true
    taints:
    - key: "node-role.kubernetes.io/control-plane"
      effect: "NoSchedule"
staticWorkers:
  hosts:
  - publicAddress: '<worker1-public-IP>'
    privateAddress: '<worker1-private-IP>'
    sshPort: 22
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: worker1
proxy:
  http: "http://<proxy-url>:<proxy-port>"
  https: "http://<proxy-url>:<proxy-port>"
  noProxy: "svc,<master-private-IP>,<worker1-private-IP>,<master-public-IP>,<worker1-public-IP>"
machineController:
  deploy: false

When the previous command finishes, run kubectl describe daemonsets.apps -n kube-system kube-proxy and check the NO_PROXY and no_proxy values in the Environment section. They will have <master-private-IP>,<worker1-private-IP>,<master-public-IP>,<worker1-public-IP> at the end, as expected. The same goes for all the static pods (kube-apiserver, kube-controller-manager, kube-scheduler).
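
To double-check the static pods, you can also grep the manifests directly on the control-plane node; a quick sketch, assuming the default kubeadm manifest path /etc/kubernetes/manifests:

ssh -i ./key.pem root@<master-public-IP> 'grep -i no_proxy /etc/kubernetes/manifests/*.yaml'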

Next, take the configuration below, which adds a new static worker node (worker2), and again replace the <> placeholders with real values. Then run kubeone apply -m <path-to-the-below-config>.

apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
name: proxy-test
versions:
  kubernetes: 'v1.27.1'
features:
  coreDNS:
    replicas: 2
    deployPodDisruptionBudget: true
  nodeLocalDNS:
    deploy: false
clusterNetwork:
  cni:
    cilium:
      enableHubble: true
cloudProvider:
  none: {}
  external: false
apiEndpoint:
  host: '<master-public-IP>'
  port: 6443
controlPlane:
  hosts:
  - publicAddress: '<master-public-IP>'
    privateAddress: '<master-private-IP>'
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: master
    isLeader: true
    taints:
    - key: "node-role.kubernetes.io/control-plane"
      effect: "NoSchedule"
staticWorkers:
  hosts:
  - publicAddress: '<worker1-public-IP>'
    privateAddress: '<worker1-private-IP>'
    sshPort: 22
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: worker1
  - publicAddress: '<worker2-public-IP>'
    privateAddress: '<worker2-private-IP>'
    sshPort: 22
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: worker2
proxy:
  http: "http://<proxy-url>:<proxy-port>"
  https: "http://<proxy-url>:<proxy-port>"
  noProxy: "svc,<master-private-IP>,<worker1-private-IP>,<master-public-IP>,<worker1-public-IP>,<worker2-private-IP>,<worker2-public-IP>"
machineController:
  deploy: false

When kubeone finishes, run kubectl describe daemonsets.apps -n kube-system kube-proxy again. You should see that <worker2-private-IP> and <worker2-public-IP> aren't in the NO_PROXY and no_proxy values.
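
The same check can be done non-interactively; a small sketch that counts the NO_PROXY/no_proxy lines mentioning the new node (it prints 0 here, while a correctly refreshed DaemonSet would match both lines):

kubectl -n kube-system describe daemonsets.apps kube-proxy \
  | grep -i no_proxy | grep -c '<worker2-private-IP>'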

What KubeOne version are you using?

$ kubeone version
{
  "kubeone": {
    "major": "1",
    "minor": "8",
    "gitVersion": "1.8.0",
    "gitCommit": "c280d14d95ac92a27576851cc058fc84562fcc55",
    "gitTreeState": "",
    "buildDate": "2024-05-14T15:41:44Z",
    "goVersion": "go1.22.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "machine_controller": {
    "major": "1",
    "minor": "59",
    "gitVersion": "v1.59.1",
    "gitCommit": "",
    "gitTreeState": "",
    "buildDate": "",
    "goVersion": "",
    "compiler": "",
    "platform": "linux/amd64"
  }
}

What cloud provider are you running on?

In this example I spawned the VMs in Azure, but the same happens on Hetzner and AWS; the issue doesn't seem to depend on the cloud provider.

What operating system are you running in your cluster?

Ubuntu 22.04

Additional information

I use Squid as the HTTP proxy while building the Kubernetes cluster.
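
For anyone reproducing this, a minimal permissive Squid setup should be enough; a squid.conf sketch, not my exact config, with <vm-network-cidr> as a placeholder for the VMs' private network (testing only, not production-safe):

http_port 3128
acl localnet src <vm-network-cidr>
http_access allow localnet
http_access deny all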

Metadata

Labels

kind/discussion
lifecycle/rotten (denotes an issue or PR that has aged beyond stale)
sig/cluster-management (denotes a PR or issue as being assigned to SIG Cluster Management)
