What happened?
I updated some annotations on an Ingress, such as `ingress.pomerium.io/policy` or `ingress.pomerium.io/name`, but the Pomerium all-in-one setup didn't propagate the changes to all of its components.
- ✅ The following record appears in the pomerium pod's logs:

  ```json
  {"level":"info","ts":"2025-11-10T11:44:59Z","msg":"new pomerium config applied","controller":"pomerium-ingress","controllerGroup":"networking.k8s.io","controllerKind":"Ingress","Ingress":{"name":"test-nginx-pomerium-ingress","namespace":"default"},"namespace":"default","name":"test-nginx-pomerium-ingress","reconcileID":"1fc471ab-a527-4653-8e9e-f028ca39b6e7"}
  ```

- ✅ The new config is written to the `pomerium.record_changes` table in the Postgres storage (see the query sketch after this list)
- ❌ The name change is not visible at `{authenticate.url}/.pomerium/routes#subpage=Routes`
- ❌ The policy change is not effective when I access the resource
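The record-changes check above was done directly against the storage; a minimal sketch of that query, assuming the connection details from `pomerium/postgres-secret` (the `version` column and its ordering are assumptions based on what I saw in my own dump):

```shell
# Hypothetical spot check: list the latest databroker record changes.
# The connection string comes from the pomerium/postgres-secret secret;
# ordering by "version" is an assumption based on my own dump.
psql "$POSTGRES_CONNECTION_STRING" \
  -c 'SELECT * FROM pomerium.record_changes ORDER BY version DESC LIMIT 10;'
```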
If I restart the deployment, the changes get applied and everything works correctly.
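The restart I use as a workaround is a plain rollout restart; a sketch, assuming the all-in-one deployment is named `pomerium` in the `pomerium` namespace (names are from my setup):

```shell
# Workaround: restart the all-in-one pod so it picks up the latest config.
# deploy/pomerium in namespace pomerium is my deployment; adjust to yours.
kubectl -n pomerium rollout restart deploy/pomerium
kubectl -n pomerium rollout status deploy/pomerium
```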
What did you expect to happen?
The changes are applied without having to restart the pomerium deployment.
How'd it happen?
- Installed an EKS cluster v1.33 (I have not tried to reproduce it with other k8s installations)
- Installed cert-manager and external-dns
- Installed pomerium-ingress v0.31.1
- Created a test Ingress resource
- Checked whether the ingress worked. Didn't work ❌
- Restarted the pomerium deployment (the initial creation of an ingress is not propagated automatically either)
- Checked whether the ingress worked. Worked ✅
- Updated the `ingress.pomerium.io/policy` and `ingress.pomerium.io/name` annotations on the ingress (see the command sketch after this list)
- Checked whether the ingress updates applied. Didn't work ❌
- Restarted the pomerium deployment
- Checked whether the ingress updates applied. Worked ✅
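For completeness, the annotation update in the failing step was an in-place edit equivalent to the following sketch (names and values mirror the test ingress in "Additional context"):

```shell
# Update both annotations on the test ingress in place.
kubectl -n default annotate ingress test-nginx-pomerium-ingress --overwrite \
  'ingress.pomerium.io/name=Ingress Test Page Updated 4' \
  'ingress.pomerium.io/policy=[{"allow":{"and":[{"domain":{"is":"my-domain.com"}}]}}]'
```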
What's your environment like?
```yaml
image: pomerium/ingress-controller:main
args:
  - all-in-one
  - '--pomerium-config=global'
  - '--update-status-from-service=$(POMERIUM_NAMESPACE)/pomerium-proxy'
  - '--metrics-bind-address=$(POD_IP):9090'
  - '--health-probe-bind-address=$(POD_IP):28080'
```
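The `$(POMERIUM_NAMESPACE)` and `$(POD_IP)` references in the args are expanded from environment variables; a minimal sketch of how that part of the container spec is typically wired via the downward API (the exact env block in my deployment may differ):

```yaml
# Hypothetical excerpt of the container spec: how the variables referenced
# by $(...) in args are commonly injected via the Kubernetes downward API.
env:
  - name: POMERIUM_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```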
What's your config.yaml?
```yaml
authenticate:
  url: https://auth.my-domain.com
certificates:
  - pomerium/authenticate-tls
identityProvider:
  provider: google
  secret: pomerium/idp-google
passIdentityHeaders: true
secrets: pomerium/bootstrap
storage:
  postgres:
    secret: pomerium/postgres-secret
```
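Since the controller runs with `--pomerium-config=global`, this spec is applied through the global Pomerium settings object; a sketch of how I understand it is wrapped, assuming the `ingress.pomerium.io/v1` CRD shipped with the controller (double-check the apiVersion against your installed CRDs):

```yaml
# Hypothetical wrapper: the settings above as the cluster-scoped Pomerium
# resource named "global" that --pomerium-config=global points at.
apiVersion: ingress.pomerium.io/v1
kind: Pomerium
metadata:
  name: global
spec:
  authenticate:
    url: https://auth.my-domain.com
  certificates:
    - pomerium/authenticate-tls
  identityProvider:
    provider: google
    secret: pomerium/idp-google
  passIdentityHeaders: true
  secrets: pomerium/bootstrap
  storage:
    postgres:
      secret: pomerium/postgres-secret
```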
What did you see in the logs?
{"level":"info","ts":"2025-11-10T11:44:59Z","msg":"new pomerium config applied","controller":"pomerium-ingress","controllerGroup":"networking.k8s.io","controllerKind":"Ingress","Ingress":{"name":"test-nginx-pomerium-ingress","namespace":"default"},"namespace":"default","name":"test-nginx-pomerium-ingress","reconcileID":"1fc471ab-a527-4653-8e9e-f028ca39b6e7"}
Additional context
```yaml
# pomerium-proxy.yaml
apiVersion: v1
kind: Service
metadata:
  name: pomerium-proxy
  namespace: pomerium
  uid: 91682422-abe1-46d8-87f9-13d23c61f4bd
  resourceVersion: '36380673'
  creationTimestamp: '2025-11-10T08:02:17Z'
  labels:
    app.kubernetes.io/component: proxy
    app.kubernetes.io/name: pomerium
  annotations:
    external-dns.alpha.kubernetes.io/hostname: auth.my-domain.com
    service.beta.kubernetes.io/aws-load-balancer-enable-tcp-udp-listener: 'true'
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
    - service.k8s.aws/resources
  selfLink: /api/v1/namespaces/pomerium/services/pomerium-proxy
status:
  loadBalancer:
    ingress:
      - hostname: k8s-pomerium-pomerium-*masked*.elb.us-east-1.amazonaws.com
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: https
      nodePort: 30232
    - name: quic
      protocol: UDP
      port: 443
      targetPort: quic
      nodePort: 32534
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
      nodePort: 31022
  selector:
    app.kubernetes.io/component: proxy
    app.kubernetes.io/name: pomerium
  clusterIP: *masked*
  clusterIPs:
    - *masked*
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
```
```yaml
# test-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx-pomerium-ingress
  namespace: default
  uid: cb4628d1-f0a7-4059-a199-6b506cbe0a6d
  resourceVersion: '36433919'
  generation: 2
  creationTimestamp: '2025-11-10T11:11:43Z'
  labels:
    k8slens-edit-resource-version: v1
  annotations:
    ingress.pomerium.io/name: Ingress Test Page Updated 4
    ingress.pomerium.io/policy: '[{"allow":{"and":[{"domain":{"is":"my-domain.com"}}]}}]'
    kubernetes.io/tls-acme: 'true'
  selfLink: /apis/networking.k8s.io/v1/namespaces/default/ingresses/test-nginx-pomerium-ingress
status:
  loadBalancer:
    ingress:
      - hostname: k8s-pomerium-pomerium-*masked*.elb.us-east-1.amazonaws.com
spec:
  ingressClassName: pomerium
  tls:
    - hosts:
        - pomerium-ingress-test.my-domain.com
      secretName: pomerium-ingress-test-tls
  rules:
    - host: pomerium-ingress-test.my-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-nginx-service
                port:
                  number: 80
```
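One more way I verify whether the updated policy is enforced, without relying on the Routes page; a sketch, assuming an unauthenticated request to the protected host should be redirected to the authenticate service when the route is live:

```shell
# Hypothetical spot check: an unauthenticated request to the protected host
# should answer with a redirect towards auth.my-domain.com if the route is active.
curl -sI https://pomerium-ingress-test.my-domain.com/ | head -n 5
```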