Error Medic

Troubleshooting HAProxy 'Connection Refused' Errors: A Complete Guide

Resolve HAProxy connection refused errors quickly. Discover the root causes—from backend down states to firewall issues—and learn step-by-step troubleshooting.

Key Takeaways
  • Connection refused usually indicates a Layer 4 TCP failure between the client and HAProxy, or HAProxy and the backend servers.
  • Backend servers may be down, crashed, or not listening on the expected IP address and port (e.g., listening on 127.0.0.1 instead of 0.0.0.0).
  • Firewall rules (iptables, UFW, firewalld) or security modules (SELinux, AppArmor) are frequently responsible for dropping or rejecting packets between the load balancer and backends.
  • HAProxy configuration errors, such as incorrect backend IP definitions or overly aggressive health checks, can force backends into a DOWN state.
  • Quick Fix: First run `nc -vz <backend_ip> <port>` from the HAProxy node to test raw connectivity, then verify the backend service status and local firewall rules.
Diagnostic Approaches Compared
| Method | When to Use | Time | Risk |
| --- | --- | --- | --- |
| Netcat (nc) / Telnet | Initial validation of Layer 4 TCP connectivity from HAProxy to backend. | 1 min | Low |
| HAProxy Stats Page | Checking real-time health check statuses, connection limits, and server states. | 2 mins | Low |
| TCPDump / Wireshark | Deep packet inspection when connections are inexplicably dropped or reset (RST). | 15 mins | Medium (can impact I/O under heavy load) |
| SELinux Audit Logs | When running on RHEL/CentOS and HAProxy cannot connect to non-standard ports. | 5 mins | Low |

Understanding 'Connection Refused' in HAProxy

When you encounter a 'Connection Refused' error—often materializing as an HTTP 502 Bad Gateway, an HTTP 503 Service Unavailable, or a raw TCP RST (Reset) packet—it signifies an immediate failure to establish a TCP connection. In an architecture utilizing HAProxy, this error can occur at two distinct boundaries: the 'Frontend' (between the client and HAProxy) and the 'Backend' (between HAProxy and your upstream servers). Diagnosing this requires isolating which side of the proxy is failing.

Often, this issue is accompanied by user reports that 'HAProxy is not working' or 'HAProxy is slow'. While slowness is usually related to latency, timeouts, or resource exhaustion (like maxconn limits), a hard 'connection refused' is almost always a binary state: the target port is closed, the process is dead, or a firewall is actively rejecting the traffic.

Diagnosing the Scope of the Problem

Before executing commands that alter system state, we must determine the exact location of the failure.

1. The Frontend Failure (Client to HAProxy)

If a client attempts to connect to your service (e.g., via curl https://api.example.com) and receives an immediate curl: (7) Failed to connect to api.example.com port 443: Connection refused, the issue lies at the HAProxy entry point. This means HAProxy itself is either not running, has crashed, or is not binding to the expected port.

To verify this on the HAProxy server, check the listening sockets:

```bash
sudo ss -tulpn | grep haproxy
```

If HAProxy is not listed, check its service status and system logs to determine why it failed to start:

```bash
sudo systemctl status haproxy
sudo journalctl -xeu haproxy.service
```

Common frontend binding issues include another service (like Nginx or Apache) already occupying port 80 or 443, or insufficient privileges (HAProxy must start as root to bind to ports below 1024).
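When another daemon has grabbed the port, `ss` can tell you which one. As a small parsing sketch (the helper name is ours, not a standard tool), this extracts the owning process name from `ss -tulpn` output:

```shell
# find_port_owner: read `ss -tulpn` output on stdin and print the name of the
# process bound to the given port. The function name is illustrative only.
find_port_owner() {
  local port=$1
  grep ":${port} " | sed -n 's/.*users:(("\([^"]*\)".*/\1/p'
}

# Typical use on the HAProxy host (root is needed to see process names):
#   sudo ss -tulpn | find_port_owner 443
```

If this prints `nginx` or `apache2` instead of `haproxy`, you have found your port conflict.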

2. The Backend Failure (HAProxy to Upstream)

If the client receives an HTTP 503 error containing a message like No server is available to handle this request, the HAProxy frontend is successfully accepting connections, but it cannot forward them to the backend pool.

Looking into the HAProxy logs (typically located at /var/log/haproxy.log), you will likely find entries resembling:

```
haproxy[12345]: backend backend_api has no server available!
haproxy[12345]: Server backend_api/node1 is DOWN, reason: Layer4 connection problem, info: "Connection refused" at step 1 of tcp-check
```

This confirms HAProxy is attempting a TCP handshake (sending a SYN packet), but the upstream server is responding with a RST, ACK packet, actively refusing the connection.
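To see the full history of state transitions for a pool, grep the log for the DOWN/UP markers HAProxy emits. A minimal sketch (the function name is ours; the log path varies by distribution, with /var/log/haproxy.log being a common Debian/Ubuntu default):

```shell
# last_state_changes: print the most recent server state transitions from an
# HAProxy log file. Pass the log path as an argument, or rely on the default.
last_state_changes() {
  grep -E 'is (DOWN|UP)' "${1:-/var/log/haproxy.log}" | tail -n 20
}

# Example: last_state_changes /var/log/haproxy.log
```

Frequent flapping between DOWN and UP here usually points at a marginal health check rather than a hard outage.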

Step-by-Step Backend Troubleshooting

Once you have isolated the issue to the backend, follow these structured steps to identify the root cause.

Step 1: Verify the Backend Service is Running and Bound Correctly

Log into the backend server (node1 in our example). The most frequent cause of a refused connection is that the application service (Node.js, Python/Gunicorn, Java/Tomcat) has crashed or was never started.

Check the service status:

```bash
sudo systemctl status my-application
```

If the service is running, you must verify the IP address it is bound to. Developers often accidentally bind services to localhost (127.0.0.1). If a service listens only on localhost, it will refuse external connections from HAProxy.

Check the listening interfaces:

```bash
sudo ss -tulpn | grep <application_port>
```

Look for 0.0.0.0:<port> or the specific internal IP address (e.g., 10.0.0.5:<port>). If it shows 127.0.0.1:<port>, you must reconfigure your application to listen on all interfaces or the specific network interface HAProxy targets.
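The exact fix depends on your application server. As a purely illustrative example (Gunicorn flags shown; `myapp:app` is a hypothetical module, and your stack's bind option will differ):

```bash
# Loopback-only bind: HAProxy's connections will be refused
gunicorn --bind 127.0.0.1:8080 myapp:app

# Bind to all interfaces (or to the specific internal IP HAProxy targets)
gunicorn --bind 0.0.0.0:8080 myapp:app
```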

Step 2: Validate Network Connectivity from HAProxy

Return to the HAProxy server. We must bypass HAProxy entirely and attempt a raw TCP connection to the backend to eliminate HAProxy configuration as the variable.

Use netcat (nc) or telnet to test the connection:

```bash
nc -vz 10.0.0.5 8080
```

If the output is nc: connect to 10.0.0.5 port 8080 (tcp) failed: Connection refused, you have confirmed the issue is at the network layer or the operating system layer of the backend server.
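If `nc` is not installed on a minimal HAProxy host, bash's built-in `/dev/tcp` pseudo-device gives the same Layer 4 answer. A small sketch (the function name is ours):

```shell
# probe_tcp: attempt a raw TCP connect via bash's /dev/tcp and report whether
# the port accepted the handshake. Requires bash and coreutils' timeout.
probe_tcp() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed-or-filtered"
  fi
}

# Example: probe_tcp 10.0.0.5 8080
```

Note that this cannot distinguish an active refusal (RST) from a silent drop; tcpdump, covered later, makes that distinction.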

Step 3: Investigate Firewalls and Network Security Groups

If the service is listening on 0.0.0.0 but nc fails from HAProxy, a firewall is likely blocking or rejecting the traffic. Unlike 'dropped' packets (which result in timeouts), 'connection refused' means the firewall rule is explicitly set to REJECT the packet, which sends a TCP RST back to the sender.

On the backend server, check local firewall rules:

  • For Ubuntu/Debian using UFW: sudo ufw status
  • For RHEL/CentOS using Firewalld: sudo firewall-cmd --list-all
  • For raw iptables: sudo iptables -L -n -v | grep 8080

Ensure there is an explicit ALLOW rule for the HAProxy IP address to access the backend port.
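If no such rule exists, add one scoped to the load balancer's address. These commands are illustrative (10.0.0.2 stands in for your HAProxy IP) and must run as root on the backend:

```bash
# UFW (Ubuntu/Debian): permit only the HAProxy node, not the world
sudo ufw allow from 10.0.0.2 to any port 8080 proto tcp

# firewalld (RHEL/CentOS): open the port permanently, then reload
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload

# Raw iptables: insert an ACCEPT rule ahead of any REJECT
sudo iptables -I INPUT -p tcp -s 10.0.0.2 --dport 8080 -j ACCEPT
```

Scoping the rule to the HAProxy source address keeps the backend port closed to everything else.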

Additionally, if your infrastructure is hosted on AWS, GCP, or Azure, verify the cloud provider's Network Security Groups (NSGs) or Security Group rules. Ensure the HAProxy instance's security group is allowed ingress to the backend instance's security group on the required port.

Step 4: SELinux and AppArmor Policies

On Red Hat Enterprise Linux (RHEL), CentOS, or Fedora, SELinux is enabled by default. SELinux strictly controls which ports applications can bind to and which external connections they can make.

If HAProxy is attempting to connect to a non-standard port (e.g., 8080, 8443) and returning 503s, SELinux on the HAProxy server itself might be blocking the outbound connection.

Check the SELinux audit logs on the HAProxy server:

```bash
sudo tail /var/log/audit/audit.log | grep haproxy | grep denied
```

If you see denials, you must instruct SELinux to allow HAProxy to make arbitrary outbound network connections. This is done by toggling an SELinux boolean:

```bash
sudo setsebool -P haproxy_connect_any 1
```

Step 5: HAProxy Health Check Configuration

Sometimes the backend service is perfectly healthy, but HAProxy misinterprets its status due to a misconfigured health check (option httpchk or check). If a health check fails, HAProxy marks the server as DOWN. Any subsequent client requests routed to that backend will immediately fail, often manifesting as a 503 Service Unavailable.

Review your haproxy.cfg backend definition:

```
backend app_cluster
    balance roundrobin
    option httpchk GET /health HTTP/1.1\r\nHost:\ api.example.com
    server node1 10.0.0.5:8080 check inter 2000 rise 2 fall 3
```

If the application's /health endpoint requires authentication, has moved, or is returning a 4xx/5xx HTTP status code, HAProxy will mark the server DOWN. Manually test the exact health check URL from the HAProxy server to confirm it returns an HTTP 200 OK:

```bash
curl -v http://10.0.0.5:8080/health -H "Host: api.example.com"
```
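You can also make the expected response explicit, so HAProxy only marks the server UP on a 200 rather than on any valid HTTP response. A minimal sketch of the same backend with HAProxy's `http-check expect` directive added:

```
backend app_cluster
    balance roundrobin
    option httpchk GET /health HTTP/1.1\r\nHost:\ api.example.com
    http-check expect status 200
    server node1 10.0.0.5:8080 check inter 2000 rise 2 fall 3
```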

Step 6: Resource Exhaustion (The 'HAProxy Slow' Phenomenon)

If 'connection refused' errors occur intermittently, especially during high traffic periods, you are likely experiencing resource exhaustion.

  1. Max Connections (maxconn): Both HAProxy and the backend servers have limits on concurrent connections. If HAProxy reaches its global maxconn limit, it will stop accepting new client connections. If a backend server reaches its maxconn parameter, HAProxy will queue requests. If the queue fills up, connections are dropped.

  2. Ephemeral Port Exhaustion: When HAProxy connects to a backend, it uses a local ephemeral port. If you have massive throughput without HTTP keep-alives, HAProxy will rapidly burn through the available local ports (usually 32,768 to 60,999). When no ports are left, connect() system calls fail.

To diagnose port exhaustion, count the sockets stuck in TIME_WAIT:

```bash
netstat -an | grep TIME_WAIT | wc -l
```

If this number is extremely high (e.g., > 25,000), you need to tune your sysctl parameters on the HAProxy server to expand the port range and recycle connections faster:

```bash
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
```
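Note that `sysctl -w` changes are lost on reboot. To persist them, place the settings in a file under /etc/sysctl.d/ (the filename below is just a convention) and reload with `sudo sysctl --system`:

```
# /etc/sysctl.d/99-haproxy-tuning.conf
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
```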

Advanced Debugging with TCPDump

When logs and standard commands fail to reveal the issue, packet capture is the ultimate source of truth. Use tcpdump on the HAProxy server to monitor traffic between the load balancer and the backend.

```bash
sudo tcpdump -i any -nn -S -vvv host 10.0.0.5 and port 8080
```

Analyze the TCP flags. A healthy handshake looks like:

  1. HAProxy -> Backend: Flags [S] (SYN)
  2. Backend -> HAProxy: Flags [S.] (SYN-ACK)
  3. HAProxy -> Backend: Flags [.] (ACK)

If you instead see:

  1. HAProxy -> Backend: Flags [S] (SYN)
  2. Backend -> HAProxy: Flags [R.] (RST-ACK)

this proves the backend OS is actively rejecting the connection (e.g., a closed port or an iptables REJECT rule).

If you see repeated Flags [S] with no response at all, the packets are being silently dropped by a firewall (iptables DROP rule or Cloud Security Group).
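On a busy link, you can narrow the capture to just the reset packets using a standard BPF flag filter:

```bash
# Capture only TCP segments with the RST flag set, to or from the backend
sudo tcpdump -i any -nn 'host 10.0.0.5 and tcp[tcpflags] & tcp-rst != 0'
```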

Conclusion

Resolving HAProxy connection refused errors demands a systematic, layer-by-layer approach. By isolating the failure to either the frontend or backend, utilizing raw socket tools like netcat, inspecting local firewalls and SELinux, and scrutinizing health check logic, you can quickly restore highly available traffic flow. Remember that in distributed systems, the load balancer is often the messenger; the root cause frequently lies in backend configuration or network topology.

Quick Reference: Diagnostic Commands

```bash
# 1. Check if HAProxy is listening on the expected port (Frontend check)
sudo ss -tulpn | grep haproxy

# 2. Test raw TCP connectivity from HAProxy to Backend (Backend check)
nc -vz 10.0.0.5 8080

# 3. Check backend local firewall rules (Ubuntu/Debian)
sudo ufw status

# 4. Check backend local firewall rules (CentOS/RHEL)
sudo firewall-cmd --list-all

# 5. Fix SELinux preventing HAProxy outbound connections (CentOS/RHEL)
sudo setsebool -P haproxy_connect_any 1

# 6. View HAProxy logs for specific backend failure reasons
sudo grep 'haproxy' /var/log/syslog | tail -n 50
# or on systemd systems:
sudo journalctl -u haproxy -n 50 --no-pager

# 7. Deep packet inspection for connection resets (RST)
sudo tcpdump -i any -nn -S -vvv host 10.0.0.5 and port 8080
```

DevOps SRE Team

The SRE Team specializes in building resilient, highly available Linux infrastructure, load balancing, and network troubleshooting at scale.
