-
AWS:
-
We will need DNS outside of K8s to route to applications running on k8s. For instance, a user accessing jupyter.k8tre.org should be routed to the correct load balancer IP address, from where the on-cluster ingress takes over.

I agree that this will need to be delegated to the organisations. But if we can all use the same internal-domain private DNS zone names and entries for K8TRE, we can have different public FQDNs pointing to our individual organisations' IP address(es), resolving to the same private DNS entries, and ingress should still work. I am not sure about on-prem yet. I have previously set up DataSHIELD on microk8s behind nginx with a custom host name that was set by modifying the configuration.

TL;DR: K8TRE should recommend options, but I agree that this shouldn't be part of the core implementation.
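To make that split concrete, here is a minimal sketch (the hostname jupyter.k8tre.internal and the service details are assumptions, not an agreed convention): each organisation's cluster registers the same private DNS entry, while the organisation-specific public FQDN (e.g. jupyter.k8tre.org) is pointed at that organisation's own load balancer or gateway IP outside the cluster.

```yaml
# Hedged sketch: a Service that registers a shared *private* DNS name
# (e.g. via ExternalDNS, discussed later in this thread). Each organisation
# maps its own public FQDN to this load balancer's IP in public DNS, outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: jupyterhub-proxy
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "jupyter.k8tre.internal"  # shared private zone entry (assumed name)
spec:
  type: LoadBalancer
  selector:
    app: jupyterhub
  ports:
    - port: 80
      targetPort: 8000
```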
-
```mermaid
graph TD
subgraph Public Internet
A[Public Internet] --> B(Public DNS Resolver);
end
subgraph Azure VNet Private
subgraph DNS Zone
D(Private DNS Zone)
end
subgraph "Ingress & Load Balancing"
E(Application Gateway) --> F{Azure Firewall}
F --> G(Load Balancer)
G --> H(Nginx Ingress Controller)
H --> I(Kubernetes Service)
I --> J(Kubernetes Pods)
J --> K[Microservice]
end
subgraph Kubernetes Cluster
I -- "Kubernetes Service" --> J
end
B --> D
D --> E
end
```
Explanation and Key Components:
Traffic Flow:
Key Considerations and Potential Enhancements:
An equivalent cloud-agnostic version would look something like the below:

```mermaid
graph TD
%% External Components
Client[External User] -->|Internet| PublicDNS[Public DNS]
PublicDNS -->|DNS Resolution| PublicIP[Public IP Address]
PublicIP -->|HTTPS| AppGW[API Gateway/Reverse Proxy]
%% Network Security
AppGW -->|Traffic Filtering| Firewall[Network Firewall]
subgraph "Private Virtual Network"
%% Private DNS
subgraph "Private DNS Zone"
PrivDNS[Private DNS Records]
end
%% Access & Security Layer
Firewall -->|Allowed Traffic| LB[Load Balancer]
%% Kubernetes Cluster
subgraph "Kubernetes Cluster"
%% Ingress Layer
LB -->|Traffic Routing| Ingress[Ingress Controller]
%% Kubernetes Services
Ingress -->|Service Discovery| K8sService[Kubernetes Service]
%% Microservice Pods
K8sService -->|Load Balancing| Pod1[Microservice Pod 1]
K8sService -->|Load Balancing| Pod2[Microservice Pod 2]
K8sService -->|Load Balancing| Pod3[Microservice Pod 3]
end
%% Internal Service Discovery
PrivDNS -.->|Service Resolution| K8sService
end
```
-
Building on this earlier discussion, below I've outlined the current implementation we've taken for public HTTP/S ingress into a K8TRE cluster environment (where we have clusters for prod, stg and dev). This includes the prerequisites on the cluster configuration, the ingress route from an external/shared public gateway (defined in the underlying infrastructure) to an internal cluster gateway (managed by the Cilium Gateway operator), plus the public/private DNS resolution at each step.

Cluster/Argo Prerequisites

Cilium CNI Plugin

The k8s cluster, whether AKS, EKS or K3s etc., must utilise Cilium as the primary CNI for L4/7 routing and, in future, for managing routes with Cilium network policies. In particular, the Gateway API must be enabled in Cilium, with Cilium set to replace kube-proxy. If the K8TRE ArgoCD skip-cilium label is false, ArgoCD must also install the core Gateway API CRDs before installing Cilium, as outlined here.
ExternalDNS

ExternalDNS should also be installed via K8TRE ArgoCD. In the case of K8TRE on Azure, ExternalDNS is configured with federated identity management, with the underlying infrastructure providing a private DNS zone per cluster that the ExternalDNS service can talk to in order to manage DNS dynamically.
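As a rough sketch of that per-cluster configuration (the resource group, zone name and owner id below are placeholders, and the federated/workload-identity wiring is omitted):

```yaml
# Hedged sketch: ExternalDNS container args for managing records in an Azure private DNS zone.
args:
  - --provider=azure-private-dns
  - --azure-resource-group=rg-k8tre-dev-dns   # assumed resource group holding the cluster's private zone
  - --domain-filter=k8tre.internal            # only manage records in the cluster's private zone
  - --source=service
  - --source=gateway-httproute                # pick up hostnames from Services and HTTPRoutes
  - --txt-owner-id=k8tre-dev                  # assumed per-cluster ownership id for the registry TXT records
```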
Internal K8TRE Gateway

The K8TRE ArgoCD must initialise a 'Gateway' instance of the cilium GatewayClass, e.g.:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: internal-gateway
  namespace: gateway-system
  labels:
    app.kubernetes.io/name: cluster-ingress
    app.kubernetes.io/component: gateway
spec:
  gatewayClassName: cilium
  listeners:
    - name: http
      port: 80
      protocol: HTTP
    - name: https
      port: 443
      protocol: HTTPS
```

This will create the internal Gateway along with a service object. The service creation is automated and the service will be named after the Gateway. So that the service is fronted by an internal/private load balancer and registered in the private DNS zone, it carries the following annotations. For Azure:
```yaml
annotations:
  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "snet-clusteringressservices"
  service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz"
  external-dns.alpha.kubernetes.io/hostname: "gw.k8tre.internal"
```

For AWS:
```yaml
annotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-0123456789abcdef0"
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/healthz"
  external-dns.alpha.kubernetes.io/hostname: "gw.k8tre.internal"
```

HTTP Routes

Routing to internal services that require public access can be defined with HTTPRoute definitions that associate the route with a defined Gateway via parentRefs, e.g.:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: jupyter-route
  namespace: gateway-system
spec:
  parentRefs:
    - name: internal-gateway
      namespace: gateway-system
  hostnames:
    - "jupyter.k8tre.internal"
  rules:
    - backendRefs:
        - name: jupyterhub
          port: 80
```

For K8TRE this means we use the Cilium Gateway API as a cloud-agnostic implementation of a cluster Gateway, with the constraint that the service attached to the defined gateway must have an ingress/egress route out to the internet using an internal/private load balancer provided by the underlying infrastructure. From there, it is up to the operator of K8TRE to decide how they plug in an external gateway to route traffic to the internal gateway (internal/private load balancer). Certain external cloud gateways (e.g. Azure App Gateway) that will likely integrate with K8TRE's internal gateway require a path-oriented route that gives health probes a means to check the health of the gateway as a backend target. Therefore, a route definition similar to the following will be required:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: gw-health-probe-route
  namespace: gateway-system
spec:
  parentRefs:
    - name: internal-gateway
      namespace: gateway-system
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: gw-echo
          port: 80
```

Gateway Ingress on Azure-Provisioned K8TRE

For an Azure deployment of K8TRE we use Azure Application Gateway v2 (AppGW) as the external gateway to route HTTP/S traffic into a hub-and-spoke based infrastructure, where the AppGW sits on the hub as a shared resource and routes traffic to each internal/private K8TRE cluster (prod, stg, dev) deployed to its own spoke. Our approach to AppGW firewall integration follows the architecture outlined here. Through the use of public and private DNS zones (managed by ExternalDNS running in the clusters), traffic is routed to the internal gateway. Note, LTH brands K8TRE as KARECTL. Operators will need to establish their own public-facing domain and map hostnames for K8TRE-defined services (i.e. gw.k8tre.internal for the Internal Gateway).
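As a purely illustrative sketch of that mapping (the public domain jupyter.tre.example.org is a placeholder, and this assumes the external gateway forwards the original public Host header rather than rewriting it to gw.k8tre.internal):

```yaml
# Hedged sketch: an operator exposing a K8TRE service under their own public FQDN.
# If the external gateway rewrites the Host header to gw.k8tre.internal instead,
# the internal hostname alone is sufficient.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: jupyter-route
  namespace: gateway-system
spec:
  parentRefs:
    - name: internal-gateway
      namespace: gateway-system
  hostnames:
    - "jupyter.k8tre.internal"    # shared internal name in the private DNS zone
    - "jupyter.tre.example.org"   # operator's public FQDN, resolving to the external gateway's public IP
  rules:
    - backendRefs:
        - name: jupyterhub
          port: 80
```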
-
There are two DNS providers to potentially discuss: in-cluster, and external to the cluster. Can we limit the discussion to just in-cluster DNS, or do we need to consider external DNS too?
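To make that distinction concrete (a hedged sketch; the stub domain and resolver IP are assumptions): in-cluster DNS here means CoreDNS resolving service names inside the cluster and optionally forwarding a private zone, while external DNS means the public/private zones served by resolvers outside the cluster (e.g. the zones managed by ExternalDNS in the comments above). On AKS, for example, a stub domain can be added to CoreDNS like this:

```yaml
# Hedged sketch: forwarding an assumed private zone from in-cluster CoreDNS
# (using the AKS coredns-custom ConfigMap mechanism).
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  k8tre.server: |
    k8tre.internal:53 {
        forward . 10.0.0.10   # assumed address of the resolver serving the private zone
    }
```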