Error Medic

Resolving Confluence Connection Refused & Timeout Errors: Complete Configuration Guide

Fix Confluence connection refused, timeout, and slow performance issues by tuning Tomcat server.xml, adjusting JVM heap memory, and validating database pools.

Key Takeaways
  • Exhausted Tomcat or database connection pools lead to Confluence timeouts and 'Connection Refused' or HTTP 502/504 errors.
  • Insufficient JVM memory (heap/Metaspace) causes slow performance, high CPU from constant garbage collection, and java.lang.OutOfMemoryError crashes.
  • Reverse proxy (NGINX/Apache) misconfiguration or incorrect connector settings in server.xml can block external traffic.
  • Quick fix: restart the Confluence service, increase maxThreads in server.xml, and raise the -Xms / -Xmx flags in setenv.sh.
Fix Approaches Compared
| Method | When to Use | Time | Risk |
| --- | --- | --- | --- |
| Restart Confluence service | Immediate mitigation for memory leaks, frozen UI, or locked threads. | 5 mins | Low (brief downtime required) |
| Tune JVM memory (setenv.sh) | Resolving overall slow performance, frequent garbage-collection pauses, and OutOfMemory crashes. | 15 mins | Medium (requires monitoring) |
| Adjust Tomcat threads (server.xml) | Fixing 'connection refused' or dropped requests during peak user load. | 20 mins | Medium |
| Database connection pool tuning | Fixing timeouts during data migration or heavy read/write operations. | 30 mins | Medium |

Understanding the Error

When managing Atlassian Confluence in an enterprise environment, administrators frequently encounter a cluster of related issues: Confluence connection refused, Confluence timeout, and overall Confluence slow performance. These symptoms usually point to bottlenecks in the underlying Tomcat application server, the JVM runtime environment, or the database connection layer.

You might see errors in your atlassian-confluence.log or catalina.out such as:

  • java.net.ConnectException: Connection refused
  • java.lang.OutOfMemoryError: Java heap space
  • org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run SEVERE: Thread pool is completely exhausted
  • HTTP Status 504 - Gateway Timeout (from your reverse proxy)

These errors occur when the application cannot process incoming HTTP requests fast enough, either because it has run out of worker threads, exhausted its available memory, or is waiting indefinitely for a database connection.

Step 1: Diagnose the Bottleneck

Before making configuration changes, you must identify which layer is failing.

1. Check Application Logs Navigate to your Confluence installation directory and inspect the primary logs:

tail -n 500 /var/atlassian/application-data/confluence/logs/atlassian-confluence.log | grep -i -E "error|exception|timeout|refused"

If you see Timeout waiting for idle object or similar database pool errors, your database connection pool is exhausted.
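If pool exhaustion is suspected, it helps to see when the timeouts cluster (e.g., peak hours vs. a nightly job). A minimal sketch; the helper name and the assumption that log lines begin with a `YYYY-MM-DD HH:MM:SS` timestamp are ours, so adjust to your log layout:

```shell
# Bucket database-pool timeout errors by hour to see when exhaustion peaks.
# Assumes each log line begins with a "YYYY-MM-DD HH:MM:SS" timestamp.
pool_timeouts_per_hour() {
  grep "Timeout waiting for idle object" "$1" | cut -c1-13 | sort | uniq -c
}

# Example (default Linux log location; adjust for your install):
# pool_timeouts_per_hour /var/atlassian/application-data/confluence/logs/atlassian-confluence.log
```

A sharp spike in one hour points at a scheduled job or bulk operation rather than a pool that is simply too small overall.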

2. Monitor System Resources Use top or htop to check whether the Java process running Confluence is maxing out CPU or memory. High CPU usage coupled with slow performance often indicates the JVM is spending all its time performing Garbage Collection (GC) because the heap is too small.
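To quantify whether GC is the culprit, compare cumulative GC time against JVM uptime. A rough sketch; the sample numbers below are assumptions, not live data (in practice you would read the GCT column from `jstat -gcutil <pid>`):

```shell
# Rough GC-overhead check (sample values are assumed, not measured).
gct_seconds=412        # cumulative GC time, e.g. the GCT column of jstat -gcutil
uptime_seconds=3600    # how long the JVM has been running
overhead=$((gct_seconds * 100 / uptime_seconds))
echo "GC overhead: ${overhead}%"
if [ "$overhead" -gt 10 ]; then
  echo "heap is likely undersized"
fi
```

Sustained GC overhead above roughly 10% is a strong signal that the heap sizing in Step 3 needs attention.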

3. Validate Port Binding If you receive a direct "Connection Refused" when accessing Confluence locally (e.g., curl -v http://localhost:8090), verify Tomcat is actually listening on the expected port:

netstat -tulpn | grep 8090

Step 2: Fix Tomcat Thread Exhaustion (server.xml)

If Confluence is returning connection refused during peak hours, Tomcat might be dropping requests because maxThreads is too low.

  1. Locate your server.xml file, typically found in <CONFLUENCE_INSTALL_DIR>/conf/server.xml.
  2. Find the <Connector> block handling HTTP traffic (usually port 8090).
  3. Increase the maxThreads and acceptCount values.
<Connector port="8090" connectionTimeout="20000" redirectPort="8443"
           maxThreads="200" minSpareThreads="10"
           enableLookups="false" acceptCount="100" URIEncoding="UTF-8"
           protocol="org.apache.coyote.http11.Http11NioProtocol" />
  • maxThreads: The maximum number of simultaneous requests Tomcat will process. Increase this to 200 or 300 for enterprise environments.
  • acceptCount: The queue length for incoming requests when all threads are busy.
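As a back-of-envelope check, Tomcat can hold roughly maxThreads + acceptCount requests before the OS starts refusing new connections. A sketch with an assumed peak load (the peak figure is illustrative, not measured):

```shell
# Requests beyond maxThreads + acceptCount get refused at the TCP level.
max_threads=200
accept_count=100
peak_concurrent=350    # assumed peak simultaneous requests (not measured)

capacity=$((max_threads + accept_count))
if [ "$peak_concurrent" -gt "$capacity" ]; then
  echo "would refuse $((peak_concurrent - capacity)) connections at peak"
else
  echo "capacity is sufficient"
fi
```

If your measured peak exceeds the combined capacity, raise maxThreads first; a very large acceptCount only makes users queue longer before timing out.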

Step 3: Resolve Slow Performance & OOM Crashes (setenv.sh)

Confluence requires substantial memory. The default JVM settings are rarely sufficient for an enterprise deployment, leading to "Confluence not working" complaints and agonizingly slow page loads.

  1. Open your JVM configuration file: <CONFLUENCE_INSTALL_DIR>/bin/setenv.sh (or setenv.bat on Windows).
  2. Locate the CATALINA_OPTS or JAVA_OPTS line defining memory flags.
  3. Adjust the Minimum (-Xms) and Maximum (-Xmx) Heap space.
# Example for a server with 16GB RAM dedicated to Confluence
CATALINA_OPTS="-Xms8192m -Xmx8192m -XX:+UseG1GC ${CATALINA_OPTS}"

Best Practice: Set -Xms and -Xmx to the exact same value to prevent the JVM from constantly resizing the heap, which causes micro-pauses and degrades performance.
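The sizing in the example above follows a common rule of thumb: give the heap about half of the RAM dedicated to Confluence, leaving the rest for Metaspace, thread stacks, and the OS page cache. A sketch of that calculation (the 50% split is our assumption, not an Atlassian requirement):

```shell
# Derive matching -Xms/-Xmx values from dedicated RAM (50% split assumed).
total_ram_mb=16384
heap_mb=$((total_ram_mb / 2))
echo "CATALINA_OPTS=\"-Xms${heap_mb}m -Xmx${heap_mb}m -XX:+UseG1GC \${CATALINA_OPTS}\""
```

Going much beyond this split starves the OS page cache and can make overall performance worse, not better.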

Step 4: Fix Timeouts During Data Migration

When performing a Confluence data migration (e.g., Server to Data Center, or importing a massive XML backup), the process may timeout. This is often due to the reverse proxy (NGINX/Apache) dropping the connection before the backend Tomcat server finishes the heavy lifting.

If you are using NGINX, adjust the timeout directives in your nginx.conf or site block:

location / {
    proxy_pass http://localhost:8090;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    
    # Increase timeouts for heavy operations like migrations
    proxy_read_timeout 600s;
    proxy_connect_timeout 600s;
    proxy_send_timeout 600s;
}

After updating, validate and reload the proxy: sudo nginx -t && sudo systemctl reload nginx.

Step 5: Database Connection Pool Tuning

If Confluence hangs and logs show database timeout errors, you must increase the maximum pool size. This is configured in <CONFLUENCE_HOME>/confluence.cfg.xml.

Look for the hibernate.hikari.maximumPoolSize property:

<property name="hibernate.hikari.maximumPoolSize">100</property>

Increase this value (e.g., from 60 to 100). Ensure your actual database server (PostgreSQL, MySQL) is also configured to accept this many connections (e.g., max_connections in postgresql.conf).
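Before raising the pool size, sanity-check it against the database limit: every node's pool, plus some administrative headroom, must fit under max_connections. A sketch with assumed example numbers:

```shell
# Pool-sizing sanity check (all values are assumed examples).
pool_size=100          # hibernate.hikari.maximumPoolSize per node
nodes=2                # Data Center nodes sharing one database
admin_reserve=20       # headroom for superuser/monitoring sessions

required=$((pool_size * nodes + admin_reserve))
db_max_connections=200 # max_connections in postgresql.conf

if [ "$required" -le "$db_max_connections" ]; then
  echo "ok: need $required of $db_max_connections"
else
  echo "raise max_connections: need $required, have $db_max_connections"
fi
```

If the required total exceeds the database limit, raise max_connections on the database side before enlarging the Confluence pool, or connections will still be refused.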

Conclusion

Effective Confluence troubleshooting requires a holistic view of the stack. By sequentially ruling out JVM memory exhaustion, Tomcat thread limits, reverse proxy timeouts, and database pool restrictions, you can stabilize the application and prevent recurring connection refused and timeout errors.

Quick Diagnostic Script

# Quick diagnostic script for Confluence health checks

# 1. Check if the Java process for Confluence is running
echo "--- Checking Confluence Process ---"
ps aux | grep java | grep confluence

# 2. Check if Tomcat is actively listening on the default HTTP port (8090)
echo "--- Checking Port 8090 ---"
netstat -tulpn | grep 8090

# 3. Tail the last 100 lines of the Confluence log, highlighting critical errors
echo "--- Scanning Logs for Errors ---"
tail -n 100 /var/atlassian/application-data/confluence/logs/atlassian-confluence.log | grep -iE --color "error|exception|refused|timeout|OutOfMemory"

# 4. Check system memory usage to look for overall OS constraints
echo "--- System Memory Status ---"
free -m

Error Medic Editorial

Written by our team of Senior Site Reliability Engineers specializing in Atlassian infrastructure, application performance tuning, and enterprise cloud architecture.
