Description
What happened:
Making an HTTP request to the HTTPS port of the controller generates a log line containing the SNAT IP of the node serving the request on its NodePort, instead of the real client IP carried in the PROXY protocol header.
What you expected to happen:
The log line generated by an HTTP request directed to the HTTPS port of the controller should contain the real client IP taken from the PROXY protocol header.
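To double-check how the controller rendered the PROXY protocol settings, the generated nginx.conf can be inspected inside a controller pod; a minimal sketch, assuming the `ingress` namespace and the chart labels shown further below:

```bash
# Pick one controller pod (namespace and labels taken from the values/state below)
POD=$(kubectl -n ingress get pods \
  -l app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-testing \
  -o jsonpath='{.items[0].metadata.name}')

# With use-proxy-protocol: "true", the rendered config is expected to contain
# "listen ... proxy_protocol" on the HTTP/HTTPS listeners and
# "real_ip_header proxy_protocol", so $remote_addr comes from the PROXY header.
kubectl -n ingress exec "$POD" -- grep -n 'proxy_protocol' /etc/nginx/nginx.conf
```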
NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`):

```
NGINX Ingress controller
  Release:       v1.10.1
  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.25.3
```
Kubernetes version (use `kubectl version`): v1.29.8-eks-a737599
Environment:
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): cpe:2.3:o:amazon:amazon_linux:2
- Kernel (e.g. `uname -a`): 5.10.215-203.850.amzn2.x86_64
- Install tools: AWS EKS
- How was the ingress-nginx-controller installed:
```yaml
controller:
  addHeaders:
    X-Request-Id: $req_id
  admissionWebhooks:
    enabled: false
    timeoutSeconds: 30
  allowSnippetAnnotations: true
  autoscaling:
    enabled: false
    maxReplicas: 8
    minReplicas: 2
  config:
    block-cidrs: <REDACTED>
    global-rate-limit-memcached-host: memcached.ingress.svc.cluster.local
    global-rate-limit-memcached-port: 11211
    hide-headers: x-powered-by,server,via
    hsts-include-subdomains: "false"
    http-redirect-code: "301"
    http-snippet: |-
      map $http_host $xappid {
        hostnames;
        default unknown_locale;
        <REDACTED>
      }
      map $http_user_agent $xuadevice {
        default desktop;
        <REDACTED>
      }
    log-format-upstream: $remote_addr "$http_host" $upstream_http_x_user_id $upstream_http_x_impersonator_user_id
      [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"
      $request_length $request_time [$proxy_upstream_name]
    proxy-buffer-size: 8k
    server-tokens: "false"
    use-gzip: "true"
    use-proxy-protocol: "true"
  enableAnnotationValidations: true
  extraArgs:
    update-status: "true"
  extraEnvs:
    - name: DD_ENV
      valueFrom:
        fieldRef:
          fieldPath: metadata.labels['tags.datadoghq.com/env']
    - name: DD_SERVICE
      valueFrom:
        fieldRef:
          fieldPath: metadata.labels['tags.datadoghq.com/service']
    - name: DD_AGENT_HOST
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
  ingressClass: ingress-testing
  ingressClassResource:
    default: true
    enabled: true
    name: ingress-testing
  labels:
    tags.datadoghq.com/env: t0-testing
    tags.datadoghq.com/service: ingress-testing
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
  podAnnotations:
    ad.datadoghq.com/controller.checks: |
      {
        "nginx_ingress_controller": {
          "init_config": {},
          "instances": [{"prometheus_url": "http://%%host%%:10254/metrics"}]
        }
      }
  podLabels:
    tags.datadoghq.com/env: t0-testing
    tags.datadoghq.com/service: ingress-testing
  proxySetHeaders:
    X-Request-Id: $req_id
    X-UA-Device: $xuadevice
  publishService:
    enabled: true
  replicaCount: 8
  resources:
    limits:
      cpu: 3
      memory: 4Gi
    requests:
      cpu: 2
      memory: 4Gi
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 60
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
defaultBackend:
  autoscaling:
    enabled: true
  enabled: false
  resources:
    limits:
      cpu: 100m
      memory: 200Mi
    requests:
      cpu: 25m
      memory: 150Mi
```
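The release itself is managed by Argo CD (see the tracking-id annotations below); a hypothetical, equivalent plain-Helm invocation, assuming the values above are saved as values.yaml:

```bash
helm upgrade --install ingress-testing ingress-nginx/ingress-nginx \
  --namespace ingress \
  --version 4.10.1 \
  -f values.yaml
```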
- Current State of the controller:
```
Name:         ingress-testing
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-testing
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.10.1
              helm.sh/chart=ingress-nginx-4.10.1
              tags.datadoghq.com/env=t0-testing
              tags.datadoghq.com/service=ingress-testing
Annotations:  argocd.argoproj.io/tracking-id: ingress-testing-t0-testing:networking.k8s.io/IngressClass:ingress/ingress-testing
              ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: ingress-testing
              meta.helm.sh/release-namespace: ingress
Controller:   k8s.io/ingress-nginx
Events:       <none>
-------------------------------------------------------
Name:                     ingress-testing-ingress-nginx-controller
Namespace:                ingress
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-testing
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.10.1
                          helm.sh/chart=ingress-nginx-4.10.1
Annotations:              argocd.argoproj.io/tracking-id: ingress-testing-t0-testing:/Service:ingress/ingress-testing-ingress-nginx-controller
                          meta.helm.sh/release-name: ingress-testing
                          meta.helm.sh/release-namespace: ingress
                          service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 60
                          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-testing,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       <REDACTED>
IPs:                      <REDACTED>
LoadBalancer Ingress:     <REDACTED>.eu-west-3.elb.amazonaws.com
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31534/TCP
Endpoints:                <REDACTED>
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32213/TCP
Endpoints:                <REDACTED>
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:
  Type    Reason               Age                  From                Message
  ----    ------               ----                 ----                -------
  Normal  UpdatedLoadBalancer  26s (x100 over 18h)  service-controller  Updated load balancer with new hosts
```
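To rule out a load balancer misconfiguration, it may also be worth confirming that the proxy-protocol annotation really attached a ProxyProtocolPolicyType policy to the backend ports of the Classic Load Balancer; a sketch with the AWS CLI, where the load balancer name is a placeholder for the redacted value above:

```bash
# <clb-name> is a placeholder; the real name is redacted above.
LB_NAME=<clb-name>

# The '*' proxy-protocol annotation should attach a ProxyProtocolPolicyType
# policy to every backend instance port (31534 and 32213 here).
aws elb describe-load-balancer-policies --load-balancer-name "$LB_NAME"
aws elb describe-load-balancers --load-balancer-names "$LB_NAME" \
  --query 'LoadBalancerDescriptions[0].BackendServerDescriptions'
```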
How to reproduce this issue:
- In an AWS EKS cluster, deploy the ingress controller with the values provided above.
- Send a plain HTTP request to the HTTPS port of the CSP-provided Classic Load Balancer (a sketch follows this list).
- The logs then show as `clientip` the SNAT IP of the node serving the request on its NodePort, instead of the real client IP.
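A minimal sketch of the failing request, assuming the redacted CLB hostname from the service above: plain HTTP is sent to the HTTPS listener (NGINX answers this with its 400 "plain HTTP request was sent to HTTPS port" response), then the access log is checked for its first field ($remote_addr):

```bash
# Plain HTTP request to the HTTPS listener of the CLB (hostname redacted above)
curl -v http://<REDACTED>.eu-west-3.elb.amazonaws.com:443/

# The first field of the matching access-log line ($remote_addr) shows the
# node's SNAT IP instead of the address carried in the PROXY protocol header.
kubectl -n ingress logs -l app.kubernetes.io/component=controller --tail=20
```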