ingress.class not propagated #277
Description
First off, I'm not sure which of these things are cause and effect, or if they are unrelated, so let me know if I should split this issue or rephrase it. Also, this is a (somewhat rambling) summary of a discussion in #kube-lego on the k8s Slack; I hope I didn't miss any context.
Edit: I solved the latter half of this issue by fixing my ingress configuration, but I'll keep the original description for now.
I ran into problems when trying out kube-lego on my test case today: replacing the self-signed certs on our monitoring (Grafana/Kibana/etc.) ingress point. For testing, I have a fairly empty cluster. Here's my setup:
- Kubernetes 1.7.7 (Azure ACS)
- Nginx Ingress Controller (0.9.0-beta.17, installed via Helm), with a custom ingress class, `kubernetes.io/ingress.class: monitoring`, deployed to the `monitoring` namespace (see the values sketch after the ingress manifest below).
- Four services in the `monitoring` namespace, and an ingress resource:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-web
  annotations:
    kubernetes.io/ingress.class: "monitoring"
    kubernetes.io/ingress.provider: "nginx"
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/rewrite-target: "/"
    ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    ingress.kubernetes.io/auth-signin: "https://$host/oauth2/sign_in"
spec:
  rules:
  - http:
      paths:
      - path: /prometheus
        backend:
          serviceName: prometheus-server
          servicePort: 9090
  - http:
      paths:
      - path: /alertmanager
        backend:
          serviceName: prometheus-alertmanager
          servicePort: 80
  - http:
      paths:
      - path: /grafana
        backend:
          serviceName: grafana-grafana
          servicePort: 3000
  - http:
      paths:
      - path: /kibana
        backend:
          serviceName: kibana
          servicePort: 5601
  tls:
  - hosts:
    - monitoring.example.com
    secretName: monitoring-ingress-tls
```
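For completeness, the controller itself is installed with values roughly like these (a sketch from memory; the key name assumes the stable/nginx-ingress chart of that era, where it ends up as the controller's `--ingress-class` flag):

```yaml
# nginx-ingress controller values (sketch): serve the custom class "monitoring"
# instead of the chart's default class "nginx".
controller:
  ingressClass: monitoring
```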
So, up to this point everything works fine with the self-signed setup. I then deployed kube-lego (to the `kube-system` namespace) using Helm with the stable chart. However, it does not accept my "monitoring" class, so my first step was to upgrade to the `canary` tag (after reading some issue or post on Slack).
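In chart terms, that upgrade amounts to roughly this in my kube-lego values.yaml (a sketch; the `image` keys follow the usual chart convention, so treat the exact names as an assumption):

```yaml
# kube-lego values.yaml (sketch): pin the canary image, since the stable tag
# did not accept a custom ingress class for me.
image:
  repository: jetstack/kube-lego
  tag: canary
```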
Adding `LEGO_SUPPORTED_INGRESS_CLASS: "monitoring"` to values.yaml, I now get 404s in the reachability tests. Looking at the ingress resource kube-lego generates, it does not set the correct ingress.class; it sets it to `nginx` instead. Bug or misconfiguration?
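For reference, the addition to values.yaml looks roughly like this (a sketch; I'm assuming the chart passes the `config:` map through to the kube-lego pod as environment variables):

```yaml
# kube-lego values.yaml (sketch): tell kube-lego to act on the custom class.
config:
  LEGO_SUPPORTED_INGRESS_CLASS: "monitoring"
```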
(Setting `LEGO_DEFAULT_INGRESS_CLASS: "monitoring"` and `LEGO_DEFAULT_INGRESS_PROVIDER: "nginx"` "works", but from the docs I would have expected the default values to be used only when I hadn't annotated the target ingresses myself? As an additional confusion, having the default provider set to the default class strikes me as backwards; shouldn't it be the other way around?)
Moving on: once I get the correct class and provider, my certificates do get generated. I can read the certificates with kubectl and verify that they are indeed correctly issued by Let's Encrypt. However, my ingresses don't use them. Connecting to the ingress, I still get the Kubernetes Ingress Controller Fake Certificate. I've tried recreating the ingress, restarting the controller, recreating and restarting kube-lego... I can't figure out why it uses the wrong cert. I've also tried setting the `defaultSSLCertificate` on the nginx-controller chart to point to the cert generated by kube-lego; it doesn't seem to make any difference.
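For the record, what I tried there looks roughly like this (a sketch; the exact key and nesting depend on the chart version, but it should end up as the controller's `--default-ssl-certificate=namespace/secret` flag):

```yaml
# nginx-ingress controller values (sketch): point the catch-all certificate at
# the secret kube-lego writes for my ingress.
controller:
  extraArgs:
    default-ssl-certificate: "monitoring/monitoring-ingress-tls"
```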
There are no suspicious entries in the controller logs, and kube-lego also seems happy.