Nginx WebSocket Proxy Configuration
Nginx WebSocket proxy configuration requires specific directives that differ from standard HTTP proxying. While Nginx excels as a reverse proxy for HTTP traffic, WebSocket connections need special handling for the protocol upgrade mechanism, persistent connections, and long-lived bidirectional communication channels. Without proper configuration, WebSocket handshakes will fail with 400 or 502 errors, leaving clients unable to establish real-time connections to your backend services.
Why WebSocket Proxying Requires Special Configuration
Standard HTTP reverse proxy configurations handle short-lived request-response cycles. Nginx receives a request, forwards it to a backend server, receives a response, and closes the connection. WebSockets operate fundamentally differently. They begin with an HTTP upgrade request, transition to a persistent bidirectional connection, and maintain that connection for extended periods—sometimes hours or days.
The WebSocket handshake uses specific HTTP headers that must be properly forwarded by the proxy. The client sends Upgrade: websocket and Connection: Upgrade headers, and the server must respond with matching headers to complete the protocol switch. If Nginx strips or modifies these headers, the handshake fails. Additionally, WebSocket connections remain open indefinitely, requiring timeout configurations that differ from typical HTTP proxy settings.
Production deployments often place Nginx in front of WebSocket servers for several reasons: SSL/TLS termination, load balancing across multiple backend instances, centralized access control, and unified logging. Proper configuration ensures these benefits without breaking the WebSocket protocol.
Basic Nginx WebSocket Proxy Configuration
The minimal Nginx WebSocket configuration requires four key directives: proxy_pass to forward traffic, proxy_http_version 1.1, and two header directives to handle the protocol upgrade. Here's a working example:
server {
    listen 80;
    server_name ws.example.com;

    location /ws {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
This configuration listens on port 80 and forwards any requests to /ws to a WebSocket server running on localhost port 3000. The proxy_http_version 1.1 directive is critical because the WebSocket upgrade handshake requires HTTP/1.1; Nginx defaults to HTTP/1.0 for proxied connections.
For applications where all traffic should be treated as potential WebSocket connections, you can configure the root location:
server {
    listen 80;
    server_name ws.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The additional headers preserve client information that your backend application might need. X-Real-IP contains the original client IP address, X-Forwarded-For includes the full proxy chain, and X-Forwarded-Proto indicates whether the client used HTTP or HTTPS.
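After editing the configuration, validate and reload Nginx, then confirm the handshake from the command line. The commands below are a quick sketch assuming the ws.example.com host and localhost:3000 backend from the examples above; curl sends the static example key from RFC 6455, which is enough to verify that the proxy answers with 101 Switching Protocols.

nginx -t
systemctl reload nginx

# Expect "HTTP/1.1 101 Switching Protocols" in the response headers
curl -i -N --max-time 5 http://ws.example.com/ \
  -H "Upgrade: websocket" \
  -H "Connection: Upgrade" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  -H "Sec-WebSocket-Version: 13"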
Understanding Upgrade and Connection Headers
The Upgrade and Connection headers form the core of the WebSocket handshake mechanism. When a client initiates a WebSocket connection, it sends an HTTP request with Upgrade: websocket and Connection: Upgrade headers. The server must echo these headers back with a 101 Switching Protocols status code.
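On the wire, a successful handshake looks like the following exchange (abbreviated; the key and accept values shown are the example pair from RFC 6455 and are generated per connection in practice):

GET /ws HTTP/1.1
Host: ws.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

If Nginx forwards the request without the Upgrade and Connection headers, the backend treats it as a plain HTTP request and the client never receives the 101 response.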
Nginx uses variables to dynamically set these headers based on the incoming request:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
The $http_upgrade variable captures the value of the client’s Upgrade header. By using this variable rather than a hardcoded value, the configuration works for both WebSocket requests and regular HTTP requests to the same location.
For more sophisticated setups handling mixed traffic, you can use a map directive, placed in the http context outside any server block, to set the Connection header conditionally:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
This map directive sets $connection_upgrade to “upgrade” when the client sends an Upgrade header, and “close” for regular HTTP requests. This allows the same Nginx location to handle both WebSocket and HTTP traffic correctly, with proper connection management for each protocol type.
Timeout Configuration for WebSocket Connections
Default Nginx timeout values are designed for HTTP request-response cycles and will prematurely terminate WebSocket connections. The most critical timeout is proxy_read_timeout, which defaults to 60 seconds. This timeout applies when Nginx is waiting to read data from the proxied server—exactly what happens during idle WebSocket connections.
Configure timeouts explicitly for WebSocket locations:
location /ws {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_connect_timeout 7d;
    proxy_send_timeout 7d;
    proxy_read_timeout 7d;
}
Setting the read and send timeouts to 7 days effectively makes them unlimited for most use cases. Note that proxy_connect_timeout cannot usually exceed 75 seconds regardless of the configured value, so very large numbers only matter for the other two directives. The three timeouts serve different purposes:

proxy_connect_timeout: maximum time for establishing a connection to the backend server
proxy_send_timeout: maximum time between successive write operations to the backend
proxy_read_timeout: maximum time between successive read operations from the backend
For WebSocket connections, proxy_read_timeout is most critical because idle connections—where neither client nor server sends data—are common. If your application uses heartbeat or ping/pong frames at regular intervals, you can set timeouts slightly longer than your heartbeat interval rather than using extremely long values.
Some deployments use more conservative timeout values with application-level keepalives:
location /ws {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
    proxy_buffering off;
}
The proxy_buffering off directive disables Nginx's response buffering, so regular HTTP responses served through this location are passed to the client as they arrive rather than being accumulated first. After a successful upgrade, Nginx relays WebSocket frames in both directions without response buffering regardless of this setting, so the directive mainly matters for locations that serve a mix of HTTP and WebSocket traffic.
TLS Termination for Secure WebSocket Connections
Production WebSocket deployments should use WSS (WebSocket Secure), the encrypted version of the protocol. The Nginx WSS configuration typically handles TLS termination at the Nginx layer, communicating with backend WebSocket servers over unencrypted connections within a trusted network.
Here's a complete Nginx WSS configuration:
server {
    listen 443 ssl http2;
    server_name ws.example.com;

    ssl_certificate /etc/nginx/ssl/ws.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/ws.example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location /ws {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}

server {
    listen 80;
    server_name ws.example.com;
    return 301 https://$server_name$request_uri;
}
This configuration listens on port 443 with SSL enabled, using modern TLS protocols and strong cipher suites. Enabling HTTP/2 on the listener does not interfere with WebSockets: because Nginx does not advertise support for WebSockets over HTTP/2 (RFC 8441), browsers negotiate wss:// connections over HTTP/1.1 on a separate connection. The second server block redirects all HTTP traffic to HTTPS, ensuring clients always use encrypted connections.
The X-Forwarded-Proto header is set to https explicitly, informing the backend application that the original client connection was encrypted. This is important for applications that need to know the original protocol for security decisions or URL generation.
For Let’s Encrypt certificates with automatic renewal, the configuration needs an additional location for the ACME challenge:
server {
    listen 80;
    server_name ws.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}
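With the ACME location in place, a webroot-based issuance along these lines should work; the exact certbot invocation varies by distribution, and the paths assume certbot's default layout:

certbot certonly --webroot -w /var/www/certbot -d ws.example.com
systemctl reload nginx

Point ssl_certificate at the resulting fullchain.pem and ssl_certificate_key at privkey.pem, typically written under /etc/letsencrypt/live/ws.example.com/.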
Load Balancing WebSocket Connections
WebSocket scalability often requires distributing connections across multiple backend servers. Nginx supports several load balancing methods, but WebSocket applications require special consideration for sticky sessions when using stateful backends.
Basic load balancing uses an upstream block:
upstream websocket_backend {
    server backend1.example.com:3000;
    server backend2.example.com:3000;
    server backend3.example.com:3000;
}

server {
    listen 80;
    server_name ws.example.com;

    location /ws {
        proxy_pass http://websocket_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
By default, Nginx uses round-robin load balancing. For stateful WebSocket applications where the backend server maintains connection-specific state, you need sticky sessions. Nginx Plus provides a commercial sticky directive, but the open-source version can achieve IP-based stickiness using the ip_hash method:
upstream websocket_backend {
    ip_hash;
    server backend1.example.com:3000;
    server backend2.example.com:3000;
    server backend3.example.com:3000;
}
The ip_hash directive ensures all connections from the same client IP address go to the same backend server. This works well for most deployments but can cause uneven distribution if many clients share the same IP address (corporate NAT, mobile carriers).
For better distribution with stickiness, use the hash directive with a custom key:
upstream websocket_backend {
    hash $remote_addr$http_x_forwarded_for consistent;
    server backend1.example.com:3000;
    server backend2.example.com:3000;
    server backend3.example.com:3000;
}
You can also configure backend server weights, maximum connections, and health checks:
upstream websocket_backend {
    least_conn;
    server backend1.example.com:3000 max_conns=1000;
    server backend2.example.com:3000 max_conns=1000;
    server backend3.example.com:3000 max_conns=500 weight=1;
}
The least_conn method routes new connections to the server with the fewest active connections, which works well for WebSocket applications with varying connection lifespans. The max_conns parameter prevents any single backend from being overloaded.
Health Checks and Failover
Passive health checks monitor backend availability based on actual client request failures. When Nginx fails to connect to a backend or receives errors, it marks the server as unavailable and stops sending traffic:
upstream websocket_backend {
    server backend1.example.com:3000 max_fails=3 fail_timeout=30s;
    server backend2.example.com:3000 max_fails=3 fail_timeout=30s;
    server backend3.example.com:3000 backup;
}
The max_fails parameter defines how many failed connection attempts trigger the unavailable state. The fail_timeout parameter sets both how long the server remains marked unavailable and the time window for counting failures. The backup parameter marks a server that only receives traffic when all primary servers are unavailable.
Active health checks require Nginx Plus, but you can implement a workaround using a separate health check endpoint and external monitoring that removes failed backends from the configuration.
Common WebSocket Proxy Issues
Connection drops with 502 Bad Gateway errors typically indicate timeout problems or backend server unavailability. Enable detailed error logging to diagnose (the debug level only works if Nginx was built with --with-debug; otherwise use info):
error_log /var/log/nginx/ws_error.log debug;

server {
    location /ws {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        access_log /var/log/nginx/ws_access.log;
    }
}
Check the error log for messages like “upstream timed out” (increase proxy_read_timeout), “connection refused” (backend server not running), or “no live upstreams” (all backend servers marked down).
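A quick way to scan for these conditions in the log configured above:

grep -iE "upstream timed out|connection refused|no live upstreams" /var/log/nginx/ws_error.log | tail -n 20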
400 Bad Request errors during handshake usually indicate missing or incorrect header configuration. Verify that proxy_http_version 1.1 is set and that Upgrade and Connection headers are properly configured. Use browser developer tools to inspect the actual headers being sent and received.
Unexpected connection closures after successful handshake often result from intermediate proxies or firewalls between client and Nginx. Some corporate proxies don’t support WebSocket protocol, and some firewalls close long-lived connections. Implementing application-level ping/pong frames can detect dead connections and maintain connection liveness through intermediate network devices.
Memory consumption grows when many WebSocket connections accumulate. Each connection consumes memory for buffers and connection state. Monitor Nginx worker processes and adjust worker_connections and worker_processes based on your expected connection count:
worker_processes auto;

events {
    worker_connections 10000;
}
For very large deployments handling hundreds of thousands of concurrent connections, system-level tuning becomes necessary: increase file descriptor limits, adjust TCP buffer sizes, and enable kernel optimizations for high connection counts.
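The exact numbers depend on the workload and hardware, but the knobs usually involved look like this (illustrative values, not a tuned profile):

# /etc/nginx/nginx.conf: raise the per-worker file descriptor limit alongside worker_connections
worker_rlimit_nofile 200000;

# /etc/sysctl.d/99-websocket.conf: kernel limits for large numbers of concurrent sockets
fs.file-max = 2000000
net.core.somaxconn = 65535
net.ipv4.ip_local_port_range = 1024 65000

Apply the sysctl settings with sysctl --system, and keep in mind that each proxied WebSocket consumes two file descriptors on the proxy: one for the client-facing connection and one for the backend connection.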
Apache WebSocket Proxy Configuration
The Apache WebSocket proxy uses mod_proxy_wstunnel for WebSocket support. This module must be explicitly enabled and works alongside mod_proxy and mod_proxy_http.
Enable required modules:
a2enmod proxy
a2enmod proxy_http
a2enmod proxy_wstunnel
systemctl restart apache2
A basic Apache ProxyPass WebSocket configuration:
<VirtualHost *:80>
    ServerName ws.example.com

    ProxyPreserveHost On
    ProxyRequests Off

    ProxyPass /ws ws://localhost:3000/ws
    ProxyPassReverse /ws ws://localhost:3000/ws
</VirtualHost>
The ws:// scheme in ProxyPass tells Apache to use mod_proxy_wstunnel. For encrypted connections, use wss:// as the scheme.
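When WebSocket and regular HTTP traffic share the same path, a common approach is to route on the Upgrade header with mod_rewrite (enable it with a2enmod rewrite). The sketch below assumes the same localhost:3000 backend as the examples above:

<VirtualHost *:80>
    ServerName ws.example.com
    ProxyPreserveHost On
    RewriteEngine On

    # Requests that ask for a WebSocket upgrade go through the wstunnel proxy
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule ^/(.*)$ ws://localhost:3000/$1 [P,L]

    # Everything else is proxied as plain HTTP
    RewriteCond %{HTTP:Upgrade} !=websocket [NC]
    RewriteRule ^/(.*)$ http://localhost:3000/$1 [P,L]

    ProxyPassReverse / http://localhost:3000/
</VirtualHost>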
Complete Apache configuration with TLS:
<VirtualHost *:443>
    ServerName ws.example.com

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/ws.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/ws.example.com.key

    ProxyPreserveHost On
    ProxyRequests Off

    ProxyPass /ws ws://localhost:3000/ws
    ProxyPassReverse /ws ws://localhost:3000/ws
    ProxyTimeout 3600
</VirtualHost>
The ProxyTimeout directive sets the maximum time for backend communication, similar to Nginx’s proxy_read_timeout. Unlike Nginx, Apache doesn’t require separate header directives for the Upgrade mechanism—mod_proxy_wstunnel handles this automatically when using the ws:// or wss:// scheme.
HAProxy WebSocket Configuration
HAProxy WebSocket configuration uses the same basic syntax as HTTP proxying but requires specific timeout settings. In HTTP mode (typically set with mode http in the defaults section), HAProxy handles the WebSocket upgrade mechanism automatically, without special directives:
frontend websocket_front
    bind *:80
    default_backend websocket_back

backend websocket_back
    balance roundrobin
    timeout connect 5s
    timeout server 86400s
    timeout tunnel 86400s
    server ws1 localhost:3000 check
    server ws2 localhost:3001 check
The timeout tunnel directive is critical for WebSocket connections—it specifies the maximum inactivity time for bidirectional connections after the protocol upgrade. The timeout server directive applies before the upgrade completes.
For HAProxy WebSocket load balancing with sticky sessions:
backend websocket_back
    balance roundrobin
    cookie SERVERID insert indirect nocache
    timeout tunnel 86400s
    server ws1 localhost:3000 check cookie ws1
    server ws2 localhost:3001 check cookie ws2
HAProxy’s cookie-based persistence works well for mixed HTTP and WebSocket traffic from the same clients.
TLS termination in HAProxy:
frontend websocket_front
    bind *:443 ssl crt /etc/haproxy/certs/ws.example.com.pem
    default_backend websocket_back

backend websocket_back
    balance leastconn
    timeout tunnel 86400s
    server ws1 localhost:3000 check
The certificate file must contain both the certificate and private key in PEM format, concatenated into a single file.
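For example, with a certificate issued by Let's Encrypt (paths assume certbot's default layout; run as root):

cat /etc/letsencrypt/live/ws.example.com/fullchain.pem \
    /etc/letsencrypt/live/ws.example.com/privkey.pem \
    > /etc/haproxy/certs/ws.example.com.pem
systemctl reload haproxy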
Kubernetes Ingress Nginx WebSocket Configuration
The Ingress NGINX WebSocket setup requires specific annotations: the controller handles the protocol upgrade itself, but its default timeouts will drop long-lived connections. The NGINX Ingress Controller for Kubernetes therefore needs additional configuration beyond a standard Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websocket-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
spec:
  rules:
  - host: ws.example.com
    http:
      paths:
      - path: /ws
        pathType: Prefix
        backend:
          service:
            name: websocket-service
            port:
              number: 3000
The timeout annotations prevent connection drops. By default, the Nginx Ingress Controller handles WebSocket upgrade headers correctly, but timeout values default to 60 seconds and must be increased.
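To confirm the annotations took effect, you can inspect the configuration the controller generates. The namespace and deployment name below are the defaults for a community ingress-nginx installation and may differ in your cluster:

kubectl describe ingress websocket-ingress
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- \
  cat /etc/nginx/nginx.conf | grep proxy_read_timeout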
For applications requiring sticky sessions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websocket-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "86400"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "86400"
spec:
  rules:
  - host: ws.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: websocket-service
            port:
              number: 3000
The affinity annotations configure cookie-based session persistence, ensuring WebSocket connections from the same client consistently route to the same backend pod.
Envoy WebSocket Proxy
Envoy proxies WebSocket connections over HTTP/1.1 once the upgrade type is allowed in the HTTP connection manager: adding a websocket entry to upgrade_configs tells Envoy to pass the Upgrade handshake and the subsequent bidirectional data through to the cluster. A minimal Envoy WebSocket configuration looks like this:
static_resources:
  listeners:
  - name: websocket_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          upgrade_configs:
          - upgrade_type: websocket
          route_config:
            name: local_route
            virtual_hosts:
            - name: websocket_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: websocket_cluster
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: websocket_cluster
    connect_timeout: 5s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: websocket_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: localhost
                port_value: 3000
Once the websocket upgrade type is enabled, Envoy handles the handshake headers and tunneling automatically, which keeps the configuration compact, though Nginx remains more widely deployed and documented for WebSocket proxying scenarios.
Frequently Asked Questions
Why does my WebSocket connection return 400 Bad Request?
A 400 error during WebSocket handshake indicates the server rejected the upgrade request due to missing or invalid headers. Verify your Nginx configuration includes proxy_http_version 1.1 and correctly sets the Upgrade and Connection headers using proxy_set_header Upgrade $http_upgrade and proxy_set_header Connection "upgrade". Check that your backend WebSocket server is running and listening on the correct port. Use browser developer tools to inspect the actual request headers and ensure the client is sending valid WebSocket handshake headers.
How do I prevent WebSocket connections from timing out?
WebSocket timeout issues stem from proxy_read_timeout being too short for idle connections. Set proxy_read_timeout and proxy_send_timeout to values longer than your maximum expected idle period, typically 3600 seconds (1 hour) to 86400 seconds (24 hours); proxy_connect_timeout only covers establishing the backend connection and cannot usually exceed 75 seconds. Alternatively, implement application-level heartbeat messages at regular intervals and set the timeouts slightly longer than your heartbeat interval. Disable response buffering with proxy_buffering off to ensure immediate message delivery.
Can Nginx load balance WebSocket connections across multiple servers?
Yes, Nginx supports WebSocket load balancing using upstream blocks. For stateless applications, use round-robin or least_conn load balancing methods. For stateful applications where backend servers maintain connection-specific data, implement sticky sessions using ip_hash or the hash directive with a consistent hashing key. Configure appropriate health checks with max_fails and fail_timeout parameters to handle backend failures gracefully. Remember that WebSocket connections are long-lived, so load distribution happens when connections are established rather than on a per-request basis.
What is the difference between ws:// and wss:// in proxy configurations?
The ws:// scheme represents unencrypted WebSocket connections, while wss:// represents WebSocket Secure connections encrypted with TLS. In Nginx configurations, you typically use proxy_pass http:// regardless of whether the client connects via ws:// or wss://, because Nginx handles TLS termination separately from proxying. The backend server receives unencrypted traffic over HTTP while Nginx manages the TLS encryption for client connections. In Apache configurations using mod_proxy_wstunnel, you specify ws:// or wss:// in the ProxyPass directive to indicate the protocol for the backend connection.