Problem
When a NATS server is deployed behind an L4 reverse proxy and clients connect via WebSocket using mTLS, the proxy forwards the original client IP using the PROXY protocol. However, NATS only supports PROXY protocol parsing on plain TCP listeners, not on WebSocket listeners. As a result, the real client IP is lost, and all proxied WebSocket connections appear to originate from the proxy itself (e.g., 127.0.0.1). This makes it impossible to use client IP information in authentication callouts to distinguish between internal services and external devices.
Concretely:
- All external connections appear to originate from the proxy address, so the auth callout service cannot distinguish factory devices from internal services.
- Authentication callout decisions become incorrect: an external device connecting via the proxy receives the same `client_info.host` as a co-located internal service, potentially receiving elevated permissions.
- Audit logs are misleading: every external connection is logged with the proxy's address, making it impossible to trace activity back to a specific device.
Proposed change
Extend existing PROXY protocol handling to WebSocket connections so that the NATS server can parse and use PROXY protocol headers for clients connected via WebSocket, analogous to current TCP behavior.
Currently, TCP connections support PROXY protocol v1/v2 via `readProxyProtoHeader()` in `client_proxyproto.go`, which is invoked during client creation in `createClient()`.
The proposed change:
- Detect and parse PROXY protocol v1/v2 on the raw TCP connection before the HTTP/WebSocket upgrade handshake
- Populate `client.host` and `client.port` from the PROXY header, consistent with TCP client behavior
- Ensure the forwarded address is available in authentication callout requests (`client_info`)
Scenario
In an industrial IoT / smart factory deployment, a central control server sits on the factory edge and orchestrates the smart factory. This server runs multiple services that must share a single external-facing port (443/TLS). A reverse proxy routes incoming connections to the correct backend:
- A NATS leaf server (WebSocket listener) that connects factory devices to a central NATS cluster in the cloud
- Additional HTTP services (REST APIs, dashboards, etc.)
Because all services share port 443, the reverse proxy is the single ingress point. For NATS WebSocket traffic, the proxy operates as an L4 TCP passthrough; it does not terminate TLS. Instead, the raw TCP stream is forwarded to the NATS server, which performs TLS/mTLS termination itself. This is required because the NATS server needs to verify client certificates directly for mTLS authentication. The proxy prepends a PROXY protocol header to forward the original client IP before passing the stream through.
For other HTTP services (REST APIs, dashboards), the proxy terminates TLS normally at L7.
Factory devices connect to the NATS leaf server over WebSocket with mTLS. The NATS server uses authentication callouts to authorize connections; the callout service distinguishes between internal connections and external connections based on the client's source IP. Internal services receive broader permissions, while external factory devices are restricted to specific subjects only.
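For illustration, the internal/external decision in such a callout service might reduce to a CIDR check on the host reported in `client_info` (the ranges below are examples only, and `isInternal` is a hypothetical name). Without PROXY protocol support on the WebSocket listener, every proxied device reports the proxy's address and takes the internal branch:

```go
package main

import (
	"fmt"
	"net/netip"
)

// internalNets lists example CIDRs treated as internal; anything outside
// them is a factory device and gets restricted permissions.
var internalNets = []netip.Prefix{
	netip.MustParsePrefix("10.0.0.0/8"),
	netip.MustParsePrefix("127.0.0.0/8"),
}

// isInternal reports whether the client_info host falls in an internal range.
func isInternal(host string) bool {
	addr, err := netip.ParseAddr(host)
	if err != nil {
		return false
	}
	for _, p := range internalNets {
		if p.Contains(addr) {
			return true
		}
	}
	return false
}

func main() {
	// Today, a proxied WebSocket client reports the proxy's address:
	fmt.Println(isInternal("127.0.0.1")) // true — misclassified as internal
	// With the PROXY header honored, it would report its real address:
	fmt.Println(isInternal("192.0.2.10")) // false — correctly external
}
```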
Contribution
I'm willing to implement this change following the existing TCP implementation pattern: reading and parsing the PROXY protocol header on the raw `net.Conn` before passing it to the WebSocket HTTP upgrade handler in `wsUpgrade()`.
Related issues I could also look into: SRV-309