-
I have a k3s cluster deployed with k3d, and here is my config:

apiVersion: k3d.io/v1alpha3
kind: Simple
name: fredcorp
servers: 2
agents: 0
image: k3s:v1.21.5-k3s2-alpine314
volumes:
  - volume: /home/fred/k3s-config/:/var/lib/rancher/k3s/server/manifests/
    nodeFilters:
      - server:*
ports:
  - port: 5080:80
    nodeFilters:
      - loadbalancer
  - port: 5443:443
    nodeFilters:
      - loadbalancer
options:
  k3d:
    wait: true
    timeout: 2m0s
  k3s:
    extraArgs:
      - arg: --tls-san=192.168.0.150
        nodeFilters:
          - server:*
      - arg: --no-deploy=traefik
        nodeFilters:
          - server:*
  kubeconfig:
    updateDefaultKubeconfig: true
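For reference, a config like this is applied when creating the cluster, e.g. (assuming the file is saved as cluster.yaml):

k3d cluster create --config cluster.yaml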
Cluster is working fine, but I need my pods to access external services hosted on my private local network, so I modified the coredns ConfigMap:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . 192.168.0.200
        cache 30
        loop
        reload
        loadbalance
    }
  NodeHosts: |
    172.20.0.1 host.k3d.internal
    172.20.0.4 k3d-fredcorp-serverlb
    172.20.0.2 k3d-fredcorp-server-1
    172.20.0.3 k3d-fredcorp-server-0
kind: ConfigMap
metadata:
  annotations:
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: coredns
    objectset.rio.cattle.io/owner-namespace: kube-system
  labels:
    objectset.rio.cattle.io/hash: bce283298811743a0386ab510f2f67ef74240c57
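Note that the Corefile above already has the reload plugin enabled, so CoreDNS should pick up the edited ConfigMap on its own; something like the following can be used to force it and to verify it (k8s-app=kube-dns is the label used by the stock k3s CoreDNS deployment):

kubectl -n kube-system edit configmap coredns               # apply the Corefile change
kubectl -n kube-system rollout restart deployment coredns   # force CoreDNS to reload it
kubectl -n kube-system logs -l k8s-app=kube-dns             # confirm the new Corefile loaded without errors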
But for some unknown reason, pods are not resolving these entries:

[root@workstation ~ ]$ k exec -it -n all-in-one all-in-one-all-in-one-alpine-d965d8454-flqbv -- bash
bash-5.1# ping vault.fredcorp.com
ping: bad address 'vault.fredcorp.com'
bash-5.1#
bash-5.1# ping adguard.fredcorp.com
PING adguard.fredcorp.com (192.168.0.200): 56 data bytes
64 bytes from 192.168.0.200: seq=0 ttl=62 time=0.381 ms
64 bytes from 192.168.0.200: seq=1 ttl=62 time=0.377 ms
^C
--- adguard.fredcorp.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.377/0.379/0.381 ms
bash-5.1#
bash-5.1# ping google.fr
PING google.fr (216.58.213.163): 56 data bytes
64 bytes from 216.58.213.163: seq=0 ttl=115 time=13.593 ms
64 bytes from 216.58.213.163: seq=1 ttl=115 time=12.760 ms
^C
--- google.fr ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 12.760/13.176/13.593 ms
bash-5.1#
bash-5.1# nslookup
> adguard.fredcorp.com
Server: 10.43.0.10
Address: 10.43.0.10#53
Non-authoritative answer:
Name: adguard.fredcorp.com
Address: 192.168.0.200
> qnap.fredcorp.com
Server: 10.43.0.10
Address: 10.43.0.10#53
Non-authoritative answer:
Name: qnap.fredcorp.com
Address: 192.168.0.250
** server can't find qnap.fredcorp.com: NXDOMAIN
> vault.fredcorp.com
Server: 10.43.0.10
Address: 10.43.0.10#53
Non-authoritative answer:
Name: vault.fredcorp.com
Address: 192.168.0.201
** server can't find vault.fredcorp.com: NXDOMAIN

Here, only the adguard entry resolves correctly from inside the pod. Any idea?
-
Hi @ixxeL2097, thanks for starting this discussion!
-
Hi @iwilltry42, you're welcome!
I have finally found the root cause of my issue: the problem seems to be the Alpine Docker images. Everything else in my setup is working well, and I have been able to get it working with Ubuntu/Debian images, but Alpine images apparently have a known issue with DNS resolution inside Kubernetes (https://stackoverflow.com/questions/65181012/does-alpine-have-known-dns-issue-within-kubernetes).
I reproduced this issue on a vanilla Kubernetes cluster (not k3s) and saw the same symptoms. Anyway, I just wanted to let people know about this issue: be careful with Alpine images inside a Kubernetes cluster for the moment. Hopefully a fix will land soon.
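For anyone who wants to check this on their own cluster, a quick comparison against a glibc-based throwaway pod is enough to see the difference (the image tag and pod name below are just examples):

kubectl run dns-test --rm -it --image=debian:bullseye-slim -- bash
# then, inside the pod (getent ships with glibc):
getent hosts vault.fredcorp.com   # should resolve here, while the Alpine-based pod fails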
EDIT:
I managed to…