Description
What happened:
We created an ingress and a canary ingress pointing to different upstream services. Scaling the deployment behind the main ingress to 0 causes 503 (Service Unavailable) responses for canary requests.
What you expected to happen:
Requests that match the canary header are routed to the canary backend without error.
NGINX Ingress controller version (exec into the pod and run /nginx-ingress-controller --version):
NGINX Ingress controller
Release: v1.11.2
Build: 46e76e5
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
Kubernetes version (use kubectl version):
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1
Environment:
- Cloud provider or hardware configuration: Yandex Cloud
- OS (e.g. from /etc/os-release): Node: "Ubuntu 20.04.6 LTS", Container: "Alpine Linux v3.20"
- Kernel (e.g. uname -a): 5.4.0-187-generic
- Install tools: managed Kubernetes, Yandex Cloud
- How was the ingress-nginx-controller installed: Helm, chart ingress-nginx-4.11.2 (controller 1.11.2); see the command sketch after this list
- Current State of the controller: working fine
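A minimal sketch of the assumed install command (the release name, namespace and any custom values are not part of this report):

# assumed Helm release name and namespace
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --version 4.11.2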
How to reproduce this issue:
We have two deployments and two services in the cluster: one is the main release, the other is the canary.
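For reference, the backing Deployment and Service look roughly like this (shown for the main release; the canary pair is identical with "canary" in place of "main"; the container name and image are placeholders, only the object names, the app selector and port 4772 match what is shown further below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-main
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-main
  template:
    metadata:
      labels:
        app: my-app-main
    spec:
      containers:
        - name: my-app                          # placeholder container name
          image: registry.example/my-app:latest # placeholder image
          ports:
            - containerPort: 4772               # gRPC port used by both ingresses
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-main
spec:
  selector:
    app: my-app-main
  ports:
    - port: 4772
      targetPort: 4772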
Create two ingresses. The main one:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-main
  labels:
    app: my-app-main
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-app.k8s.domain.ru
      secretName: my-secret-name
  rules:
    - host: my-app.k8s.domain.ru
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-main
                port:
                  number: 4772
and canary:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-canary
  labels:
    app: my-app-canary
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/canary: 'true'
    nginx.ingress.kubernetes.io/canary-by-header: x-request-canary
    nginx.ingress.kubernetes.io/canary-by-header-value: my-app-canary
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-app.k8s.domain.ru
      secretName: my-secret-name
  rules:
    - host: my-app.k8s.domain.ru
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-canary
                port:
                  number: 4772
Both services have endpoints:
kubectl describe svc my-app-main
Name: my-app-main
Selector: app=my-app-main
....
Endpoints: 10.112.128.17:4772
Events: <none>
kubectl describe svc my-app-canary
Name: my-app-canary
Selector: app=my-app-canary
....
Endpoints: 10.112.145.155:4772
Events: <none>
Testing with grpcurl returns the expected response:
grpcurl -H 'x-request-canary: my-app-canary' -d '{"ids": "123"}' -proto my.proto my-app.k8s.domain.ru:443 default.name.my.App/GetCountry
{
  "countries": [
    {
      "id": "123",
      "code": "SomeCode",
      "name": "SomeName"
    }
  ]
}
The ingress-controller logs show that the upstream pod address equals the my-app-canary endpoint, so canary routing works.
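The log check we used is roughly this (the controller namespace and deployment name depend on the Helm release; ingress-nginx / ingress-nginx-controller are assumed here):

# assumed namespace and deployment name; grep for the canary pod IP in the access log
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=200 | grep '10.112.145.155:4772'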
Then scale the my-app-main deployment to 0:
kubectl scale --replicas=0 deployment/my-app-main
The canary deployment remains untouched.
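A quick check that only the main service lost its endpoints (the canary pod IP should still be listed):

kubectl get endpoints my-app-main my-app-canary
# expected after scaling: my-app-main shows <none>, my-app-canary still shows 10.112.145.155:4772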
Repeating the grpcurl request now returns:
grpcurl -H 'x-request-canary: my-app-canary' -d '{"ids": "123"}' -proto my.proto my-app.k8s.domain.ru:443 default.name.my.App/GetCountry
ERROR:
  Code: Unavailable
  Message: unexpected HTTP status code received from server: 503 (Service Unavailable); transport: received unexpected content-type "text/html"
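A diagnostic sketch that may help triage: the controller image ships a dbg binary that dumps the dynamically configured backends, so one can check whether the canary backend still has its endpoint after the main deployment is scaled down. The pod name and namespace below are placeholders, and the backend name follows <namespace>-<service>-<port> with the default namespace assumed:

# placeholder controller pod name and namespace
kubectl exec -n ingress-nginx ingress-nginx-controller-xxxxx -- /dbg backends list
kubectl exec -n ingress-nginx ingress-nginx-controller-xxxxx -- /dbg backends get default-my-app-canary-4772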
Anything else we need to know: