|
---
id: automate-dns-records-creation-from-cce-ingresses-with-externaldns
title: Automate DNS Records Creation from CCE Ingresses with ExternalDNS
tags: [cce, dns, elb, externaldns]
---

# Automate DNS Records Creation from CCE Ingresses with ExternalDNS

[ExternalDNS](https://github.com/kubernetes-sigs/external-dns) is a Kubernetes component used to manage DNS records for services and applications running in a Kubernetes cluster. It automates the creation, update, and deletion of DNS records based on the state of resources within the cluster. ExternalDNS is typically employed in scenarios where you need to expose services running inside a Kubernetes cluster to the outside world with fully qualified domain names (FQDNs), ensuring they are accessible by external users.

## Common Scenarios

The most common use cases where ExternalDNS comes into play are the following:

| Scenario | Description | Use Case |
|---------|-------------|------------------|
| Exposing Services via Custom DNS Names | ExternalDNS automates DNS record creation for Kubernetes services, removing the need to manually manage DNS entries in providers like Open Telekom Cloud DNS, AWS Route 53, Google Cloud DNS, or Azure DNS. | You deploy an app and want it reachable at `app.example.com`. ExternalDNS automatically points the domain to the service’s Elastic IP. |
| Automating DNS for Load Balancers | When using LoadBalancer-type services, the cloud provider assigns an Elastic IP. ExternalDNS creates DNS records that map your chosen FQDN to that IP. | A CCE LoadBalancer service is created, and ExternalDNS generates a DNS record mapping `api.example.com` to the public Elastic IP. |
| Multi-Cluster or Multi-Region Deployments | ExternalDNS manages DNS records across clusters and regions, enabling routing strategies like geo-routing or latency-based routing. | An app runs in both Open Telekom Cloud regions (eu-de & eu-nl), and DNS automatically directs users to the closest cluster. |
| Managing Dynamic or Short-Lived Services | In environments with frequent scaling or service churn (e.g., microservices or CI/CD), ExternalDNS keeps DNS records up to date. | As microservices scale or new versions roll out, ExternalDNS updates DNS records to reflect the current state. |
| Integrating with Ingress Controllers | ExternalDNS manages DNS for hostnames defined in Ingress resources, ensuring DNS points to the correct Ingress endpoints. | An Ingress exposes `blog.example.com`, and ExternalDNS creates or updates the DNS record automatically. |
| Cloud-Native DNS Management | Provides automated DNS management integrated with Open Telekom Cloud DNS for scalable, cloud-native Kubernetes workloads. | DNS entries for applications are automatically kept in sync with cluster state. |
| Managing Wildcard DNS Records | ExternalDNS can handle wildcard DNS entries useful for multi-tenant or subdomain-based routing scenarios. | A wildcard DNS entry like `*.tenant.example.com` routes different tenants based on subdomains, with ExternalDNS maintaining required records. |
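
For illustration, the first two scenarios usually boil down to a single annotation on the exposed workload. The sketch below is not part of this guide's deployment; the namespace, selector, and hostname are placeholders, and any CCE/ELB-specific annotations are omitted, but `external-dns.alpha.kubernetes.io/hostname` is the standard annotation ExternalDNS watches for on `Service` resources:

```yaml title="app-service.yaml"
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: demo
  annotations:
    # FQDN ExternalDNS should publish for this Service (placeholder domain)
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: app
  ports:
    - port: 80
      targetPort: 8080
```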

## Configuring your registrar

We have to transfer the management of the NS records of your domain to the Domain Name Service of Open Telekom
Cloud. Go to the site of your registrar and make sure you configure the following:

- Turn off any Dynamic DNS service for the domain or the subdomains you are going to bind with Open Telekom Cloud DNS.
- Change the NS records of your domain or the subdomains to point to `ns1.open-telekom-cloud.com` **and** `ns2.open-telekom-cloud.com` (you can verify the delegation as shown below).
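
A quick way to confirm that the delegation has actually propagated is to query the NS records from your workstation (replace `example.com` with your own domain):

```bash
# Should return ns1.open-telekom-cloud.com. and ns2.open-telekom-cloud.com.
dig NS example.com +short

# Query a public resolver directly to rule out local caching
dig @8.8.8.8 NS example.com +short
```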

If those two prerequisites are met, you are ready to configure a new public DNS zone and record sets for your domain in Open Telekom
Cloud. There are two mutually exclusive options to do that:

- Manually create, from the Open Telekom Cloud Console, a new Public DNS Zone that binds to your domain and an A record in that zone that
  points to the EIP of the Elastic Load Balancer.
- Automate everything using
  [ExternalDNS](https://github.com/kubernetes-sigs/external-dns).

## Creating a dedicated DNS Service Account

Go to the *IAM management console* and create a new user that permits
**programmatic access** to Open Telekom Cloud resources:



Grant this user the following permissions, or add it directly to the user
group `dns-admins` (if it exists; otherwise create it for stricter permissions management, although that is completely optional):


## Deploying ExternalDNS on CCE

We are going to deploy ExternalDNS with Helm, specifying [OpenStack's Designate](https://www.openstack.org/software/releases/dalmatian/components/designate) as the ExternalDNS provider via the [out-of-tree webhook](https://github.com/inovex/external-dns-openstack-webhook).

1. Create **clouds.yaml** in your working directory:

```yaml title="clouds.yaml"
clouds:
  openstack:
    auth:
      auth_url: https://iam.eu-de.otc.t-systems.com:443/v3
      username: "OTCAC_DNS_ServiceAccount"
      password: <OTCAC_DNS_ServiceAccount_PASSWORD>
      user_domain_name: "OTCXXXXXXXXXXXXXXXXXXXX"
      project_name: "eu-de_XXXXXXXXXXX"
      region_name: "eu-de"
    interface: "public"
    auth_type: "password"
```

:::warning
Special attention is required here: although DNS is a global service, all changes must be made in the **eu-de** region.
:::
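
Optionally, you can sanity-check these credentials from your workstation before handing them to the cluster. This assumes the `python-openstackclient` and `python-designateclient` packages are installed locally; the cloud name `openstack` matches the entry in **clouds.yaml** above:

```bash
# Point the OpenStack CLI at the clouds.yaml in the current directory
export OS_CLIENT_CONFIG_FILE=$PWD/clouds.yaml

# List the DNS zones visible to the service account; an empty list is fine,
# an authentication error means the credentials or domain/project are wrong
openstack --os-cloud openstack zone list
```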
|
2. Create a namespace to isolate the installation (if it doesn't already exist) and deploy **clouds.yaml** as a `Secret`:

```bash
kubectl create namespace external-dns

kubectl create secret generic oscloudsyaml \
  --namespace external-dns --from-file=clouds.yaml
```
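
If you want to double-check that the secret landed in the right namespace before moving on:

```bash
kubectl --namespace external-dns get secret oscloudsyaml
```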

3. Create **overrides.yaml** in your working directory:

```yaml title="overrides.yaml"
policy: sync
registry: txt
txtOwnerId: "cce-blueprints"

ignoreIngressTLSSpec: true

sources:
  - crd
  - service
  - ingress

provider:
  name: webhook
  webhook:
    image:
      repository: ghcr.io/inovex/external-dns-openstack-webhook
      tag: 1.1.0
    extraVolumeMounts:
      - name: oscloudsyaml
        mountPath: /etc/openstack/
    resources: {}

extraVolumes:
  - name: oscloudsyaml
    secret:
      secretName: oscloudsyaml
```

:::danger very important
By specifying:

- `sources`, we instruct the ExternalDNS controller which resources it should watch and for which it should automatically create or update the corresponding A records.
- `txtOwnerId`, we tell ExternalDNS to only touch records with the matching TXT record, and if that TXT record is missing, it knows to recreate both the A record AND the TXT record as a pair. `txtOwnerId` is **extremely important** because it prevents ExternalDNS from managing DNS records created by other tools or processes, and from having records deleted or overwritten by other ExternalDNS instances that might be running in other clusters. **Use a different value for each ExternalDNS instance**.

:::
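
For reference, once ExternalDNS is running you can inspect the ownership record it maintains next to each managed A record. With the default TXT registry settings, its payload typically resembles the string shown below (the hostname is a placeholder; query one that this instance actually manages):

```bash
dig TXT app.example.com +short
# Expected output resembles:
# "heritage=external-dns,external-dns/owner=cce-blueprints,external-dns/resource=ingress/demo/whoami-ingress"
```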

4. Deploy the Helm chart using the overrides defined above:

```shell
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update

helm upgrade --install external-dns external-dns/external-dns \
  --namespace external-dns \
  --create-namespace \
  --values overrides.yaml
```
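
Once the release is installed, confirm that the controller and the webhook provider start cleanly. The deployment name below follows from the Helm release name `external-dns` used above; adjust it if you picked a different release name:

```bash
# The pod should be Running with both containers ready
kubectl --namespace external-dns get pods

# Follow the controller and webhook logs to watch the reconciliation loop
kubectl --namespace external-dns logs deployment/external-dns --all-containers --follow
```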

## Verification

:::important
If you completed all these steps on a cluster that already exposes services through [NGINX Ingress Controllers](#option-2-configuring-an-ingress), and all components were configured correctly, ExternalDNS **will automatically create the corresponding A records** in the Open Telekom Cloud DNS service.
:::

### Option 1: Creating a DNSEndpoint

We have now prepared everything needed to automatically provision a public DNS zone and a dedicated A record that links the Elastic IP of our Elastic Load Balancer to the FQDN of the subdomain configured earlier. To achieve this, we need to create a custom resource, based on the CRD installed by ExternalDNS, called `DNSEndpoint`.

```yaml title="dns-endpoint.yaml"
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: keycloak
  namespace: keycloak
spec:
  endpoints:
    - dnsName: keycloak.example.de
      recordTTL: 300
      recordType: A
      targets:
        - XXX.XXX.XXX.XXX
```

:::note
Replace the placeholder value `XXX.XXX.XXX.XXX` of `targets` with the Elastic IP Address that is
assigned to your Elastic Load Balancer. Additionally, replace the value of `dnsName` with the FQDN of your (sub)domain.
:::
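
The manifest still has to be applied to the cluster, and you can then watch the record appear. The commands below assume the `keycloak` namespace from the manifest and use its placeholder FQDN:

```bash
kubectl apply -f dns-endpoint.yaml

# Confirm the custom resource was accepted by the CRD installed with ExternalDNS
kubectl --namespace keycloak get dnsendpoints

# After reconciliation, the FQDN should resolve to the Elastic IP
dig +short keycloak.example.de
```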

Wait a couple of seconds until the reconciliation loop of the
ExternalDNS controller completes; if all went well, you should now see
the Record Sets of your Public Zone populated with various entries:


### Option 2: Configuring an Ingress

1. First, let's create the manifests to deploy a demo workload based on [traefik/whoami](https://github.com/traefik/whoami):

```yaml title="whoami.yaml"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
  namespace: demo
spec:
  selector:
    app: whoami
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
```

:::info
**traefik/whoami** is a minimal Go webserver that prints OS information and HTTP request details. It’s often used to quickly inspect requests, debug routing, test load balancers, or expose services in containerized environments.
:::

and deploy it using **kubectl**:

```bash
kubectl create namespace demo
kubectl apply -f whoami.yaml
```

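Before moving on, it is worth confirming that the demo workload is actually up:

```bash
# Expect three whoami replicas and a NodePort service in the demo namespace
kubectl --namespace demo get deployments,services,pods
```
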
2. Before proceeding, ensure that the [ACME DNS‑01 solver](../cloud-container-engine/issue-an-acme-certificate-with-dns01-solver-in-cce/#installing-the-acme-dns01-solver) for Open Telekom Cloud is installed, along with the required [ClusterIssuer for Let's Encrypt](../cloud-container-engine/issue-an-acme-certificate-with-dns01-solver-in-cce/#installing-cluster-issuers).

3. Next, we’ll expose this workload using an `Ingress`:

```yaml title="whoami-ingress.yaml"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: demo
  annotations:
    cert-manager.io/cluster-issuer: opentelekomcloud-letsencrypt
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - whoami.example.de
    secretName: whoami-example-de-tls
  rules:
  - host: "whoami.example.de"
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: whoami-service
            port:
              number: 80
```
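
As before, the manifest needs to be applied. Once the certificate has been issued and ExternalDNS has reconciled, a quick end-to-end check could look like this (using the placeholder hostname from the manifest):

```bash
kubectl apply -f whoami-ingress.yaml

# The certificate should reach READY=True once the DNS-01 challenge succeeds
kubectl --namespace demo get certificate

# DNS, TLS, and routing in one go: whoami echoes the request details back
curl https://whoami.example.de
```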

:::note
Replace the placeholder `whoami.example.de` with your own FQDN. After completing all steps, you should have the following resources:

:white_check_mark: A **whoami** `Deployment` and `Service`
:white_check_mark: A **whoami** `Ingress` served by the Ingress Controller with class name `nginx`
:white_check_mark: A `whoami-example-de-tls` certificate automatically created by the Open Telekom Cloud ACME DNS-01 solver
:white_check_mark: An A record and a TXT record in your `example.de` public zone, binding the EIP to `whoami.example.de`
:::