What happened:
According to the AWS documentation:
"By default, when a pod communicates to any IPv4 address that isn't within a CIDR block that's associated to your VPC, the VPC CNI translates the pod's IPv4 address to the primary private IPv4 address of the primary ENI of the node that the pod is running on."
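One way to verify what the CNI is actually doing is to dump the SNAT rules it programs on a worker node. A minimal sketch, assuming root access on the node and the CNI's usual AWS-SNAT-CHAIN-* chains in the nat table:

```python
import subprocess

# Dump the SNAT chain the VPC CNI installs (run as root on the node).
# Destinations inside VPC-associated CIDRs are returned before the final
# SNAT rule, so only traffic leaving the VPC should get translated.
out = subprocess.run(
    ["iptables", "-t", "nat", "-S", "AWS-SNAT-CHAIN-0"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```

If the chain ends without an SNAT target, pod traffic leaves the node untranslated regardless of destination.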
Setup:
• EKS Cluster: Running in VPC A with custom networking enabled (the CNI settings in effect can be inspected as in the sketch after this list).
• Pod IP Allocation: Pods receive IPs from a secondary CIDR block associated with VPC A.
• RDS Instance: Deployed in VPC B.
• Connectivity Between VPCs: Established via AWS Transit Gateway (TGW).
• Security Groups: The RDS security group initially allows ingress only from the node IP range (the primary CIDR used by the nodes' primary ENIs).
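The custom-networking and SNAT settings in effect can be read off the aws-node DaemonSet; below is a minimal sketch using the official kubernetes Python client:

```python
from kubernetes import client, config

# Print the aws-node environment to confirm the custom-networking and SNAT
# settings. AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG enables custom networking;
# AWS_VPC_K8S_CNI_EXTERNALSNAT=true disables the node-IP SNAT quoted above.
config.load_kube_config()
ds = client.AppsV1Api().read_namespaced_daemon_set("aws-node", "kube-system")
for container in ds.spec.template.spec.containers:
    for env in container.env or []:
        if env.name.startswith("AWS_VPC_K8S_CNI"):
            print(env.name, "=", env.value)
```

In particular, if AWS_VPC_K8S_CNI_EXTERNALSNAT is set to true, the node-IP SNAT described in the documentation quote is disabled, which would explain the RDS instance seeing pod IPs.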
Observation:
• Connections from pods to the database fail unless the secondary CIDR block (the pod IP range) is explicitly allowed as an ingress rule in the RDS security group; see the reproduction sketch below.
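A minimal reproduction sketch, run from inside a pod (the endpoint and port are placeholders for our RDS instance):

```python
import socket

# Placeholders; substitute the actual RDS endpoint and port.
RDS_HOST = "mydb.example.us-east-1.rds.amazonaws.com"
RDS_PORT = 5432

# With only the node CIDR allowed in the RDS security group this times out;
# it succeeds once the secondary (pod) CIDR is added as an ingress rule,
# suggesting the connection arrives with the pod IP rather than the node IP.
with socket.create_connection((RDS_HOST, RDS_PORT), timeout=5) as conn:
    print("connected, local address:", conn.getsockname())
```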
Questions:
• Is this the expected behavior for Amazon VPC CNI custom networking?
• How can we ensure pod IPs are SNAT'd to the node's primary IP for egress traffic to other VPCs?
Environment: