Error Medic

Jira Connection Refused and Slow Performance: A Complete Configuration Troubleshooting Guide

Fix Jira 'connection refused', timeouts, and crashes by tuning Tomcat server.xml, adjusting JVM memory allocation (Xms/Xmx), and optimizing database connections

Key Takeaways
  • OOM (Out of Memory) errors and JVM garbage collection pauses are the most common causes of Jira crashes and slow performance.
  • Connection pool exhaustion in the database configuration (dbconfig.xml) leads to 'Jira timeout' and 'connection refused' errors.
  • Incorrect Tomcat connector settings in server.xml can cause thread starvation under heavy concurrent load.
  • Quick Fix: Increase JVM memory (Xmx) in setenv.sh and bump max pool size in dbconfig.xml if actively crashing.
Fix Approaches Compared
  • Increase JVM Heap (Xmx): frequent OOM errors, "GC overhead limit exceeded", or constant swapping. Time: 5 mins. Risk: Low.
  • Tune DB Connection Pool: "Timeout waiting for connection from pool" errors in atlassian-jira.log. Time: 10 mins. Risk: Medium.
  • Adjust Tomcat Threads: slow response times, "connection refused" during peak hours. Time: 10 mins. Risk: Medium.
  • Rebuild Indexes: search is broken, slow performance on issue dashboards. Time: 30-120 mins. Risk: High (requires downtime).

Understanding Jira Configuration Errors

When administrators report that Jira is "not working", "crashing", or suffering from "slow performance", the root cause almost always traces back to resource exhaustion stemming from improper configuration. Jira is a complex, monolithic Java application that relies heavily on its database backend, disk I/O for search indexing, and CPU for rendering complex agile boards and dashboards. As your user base, total issue count, and custom field complexity grow, the default configurations provided out-of-the-box quickly become insufficient, leading to instability.

The most common symptoms administrators encounter include:

  • HTTP Status 500 - Internal Server Error or entirely blank screens when attempting to load issues.
  • java.lang.OutOfMemoryError: Java heap space or GC overhead limit exceeded found in the atlassian-jira.log.
  • java.net.ConnectException: Connection refused when trying to access the web interface via a reverse proxy or directly.
  • Timeout waiting for connection from pool database errors indicating connection exhaustion.
  • Unresponsive UI components, particularly when loading large Kanban boards or generating complex reports.

Understanding how Jira's architecture interlocks—Tomcat handling HTTP requests, the JVM managing memory and garbage collection, and the connection pool managing database throughput—is critical for effective troubleshooting. A bottleneck in any one of these layers can cascade, presenting as a complete application failure to the end-user.

Step 1: Diagnose the Root Cause Thoroughly

Before making any configuration changes, you must empirically identify the bottleneck. Modifying configurations blindly, often referred to as "guess and check" administration, can introduce new instability. The primary source of truth for a Jira Server or Data Center instance is the atlassian-jira.log file, typically located in <JIRA_HOME>/log/. Secondary sources include the Tomcat catalina.out log and the application access logs.

First, check for memory-related crashes. When the JVM runs out of heap space, it spends all its CPU cycles attempting garbage collection, which freezes the application.

# Search for OutOfMemory errors in the primary log
grep -i 'OutOfMemoryError' /var/atlassian/application-data/jira/log/atlassian-jira.log

# Check for Garbage Collection overhead limits
grep -i 'GC overhead limit exceeded' /var/atlassian/application-data/jira/log/atlassian-jira.log

Next, look for database connection pool exhaustion. If threads are waiting for a database connection and the pool is full, HTTP requests will queue up and eventually time out, leading to 502 Bad Gateway or 504 Gateway Timeout errors if you are behind a reverse proxy like Nginx or Apache.

grep -i 'Timeout waiting for connection' /var/atlassian/application-data/jira/log/atlassian-jira.log

If the application is completely unreachable (e.g., connection refused), verify whether the Tomcat process is actually listening on the expected port, and check the catalina.out file for startup failures, thread starvation, or fatal plugin initialization errors.

# Verify the process is listening
netstat -tulpn | grep 8080

# Check Tomcat startup logs
tail -n 200 /opt/atlassian/jira/logs/catalina.out

Step 2: Fix JVM Memory Allocation (The setenv.sh Configuration)

If your diagnostics revealed OutOfMemoryError exceptions, or if you notice the Java process constantly utilizing near 100% CPU (often a sign of a "GC spiral of death"), you need to adjust the Java Virtual Machine (JVM) heap size. This is configured in the setenv.sh (Linux) or setenv.bat (Windows) file located in the <JIRA_INSTALL>/bin/ directory.

The default heap size allocated to Jira is typically quite small (often 1GB or 2GB), which is insufficient for production environments.

  1. Navigate to your Jira installation directory and open <JIRA_INSTALL>/bin/setenv.sh with a text editor like vim or nano.
  2. Locate the JVM_MINIMUM_MEMORY and JVM_MAXIMUM_MEMORY parameters.
  3. Increase these values based on your server's available RAM, your concurrent user count, and your total issue volume.

A common best practice for performance stability is to set both the minimum (Xms) and maximum (Xmx) heap sizes to the exact same value. This prevents the JVM from spending CPU cycles dynamically resizing the heap during peak loads.

# Example setenv.sh adjustment for a mid-sized enterprise instance
JVM_MINIMUM_MEMORY="8192m"
JVM_MAXIMUM_MEMORY="8192m"

Critical Warning: Never allocate more than 50% to 60% of your total system RAM to the JVM heap. The operating system needs RAM for its own processes, and crucially, Jira relies heavily on the OS page cache for high-speed file I/O, especially for reading the Lucene search indexes located in <JIRA_HOME>/caches. If you starve the OS of RAM, disk swapping will occur, drastically reducing performance.
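As a quick sanity check, you can derive a conservative heap ceiling from the host's total RAM. The helper below is a hypothetical sketch that simply applies the 50% cap described above; treat its output as a starting point, not a prescription.

```shell
#!/bin/sh
# Hypothetical helper: cap the JVM heap at 50% of total RAM, per the
# guidance above. Takes total RAM in MB and prints a suggested Xmx in MB.
suggest_heap_mb() {
    total_mb=$1
    echo $(( total_mb / 2 ))
}

# On a live Linux host you could feed it the real figure:
#   suggest_heap_mb "$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))"
suggest_heap_mb 16384   # 16 GB host -> prints 8192
```

For the 16 GB example, the suggested value would translate to `JVM_MAXIMUM_MEMORY="8192m"` in setenv.sh, leaving the other half of RAM for the OS page cache and Lucene I/O.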

Step 3: Tune Database Connection Pooling (dbconfig.xml)

If Jira is throwing timeout errors, or if slow performance correlates with high database load, your connection pool might be undersized. Every time a user loads an issue, performs a JQL search, or views a dashboard, Jira requests one or more connections from the pool.

This is configured in the <JIRA_HOME>/dbconfig.xml file. Look for the <pool-max-size> property. The default is often 20. In an enterprise environment, this will bottleneck quickly.

  1. Edit <JIRA_HOME>/dbconfig.xml.
  2. Increase <pool-max-size>. A typical adjustment for a busy instance is between 40 and 100, depending on hardware capacity.
  3. You may also want to adjust <pool-min-size> to maintain a healthy baseline of ready connections.
<!-- Example tuning in dbconfig.xml -->
<jdbc-datasource>
    <url>jdbc:postgresql://db.example.com:5432/jiradb</url>
    <driver-class>org.postgresql.Driver</driver-class>
    <username>jirauser</username>
    <password>secure_password</password>
    <pool-min-size>20</pool-min-size>
    <pool-max-size>100</pool-max-size>
    <pool-max-wait>30000</pool-max-wait>
    <pool-max-idle>20</pool-max-idle>
    <pool-remove-abandoned>true</pool-remove-abandoned>
    <pool-remove-abandoned-timeout>300</pool-remove-abandoned-timeout>
</jdbc-datasource>

Crucial Database Server Alignment: Increasing the pool size in Jira is only half the battle. You must ensure your database server (e.g., PostgreSQL, MySQL, Oracle, MS SQL) is configured to accept at least this many concurrent connections. For example, in PostgreSQL, you must increase max_connections in postgresql.conf to a value higher than Jira's pool-max-size plus any connections needed by Confluence, Bitbucket, or administrative tools sharing the database server.
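A back-of-the-envelope check for that alignment, assuming PostgreSQL: the pool figures and headroom default below are illustrative only, and you should verify the live setting with `SHOW max_connections;` in psql.

```shell
#!/bin/sh
# Illustrative sizing sketch: max_connections on the database server must
# exceed the sum of every connection pool drawing on it, plus headroom
# for superuser and monitoring sessions.
required_max_connections() {
    jira_pool=$1       # Jira's pool-max-size
    other_pools=$2     # Confluence, Bitbucket, admin tools, etc.
    headroom=${3:-10}  # assumed slack for superuser/monitoring sessions
    echo $(( jira_pool + other_pools + headroom ))
}

required_max_connections 100 40   # Jira pool of 100 plus 40 from other apps
```

If the printed value exceeds `max_connections` in postgresql.conf, raise the latter (and restart PostgreSQL) before enlarging Jira's pool.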

Step 4: Optimize Tomcat Connectors (server.xml)

If you are seeing "connection refused" errors or severe latency while the Java process is still running and memory looks healthy, Tomcat might have exhausted its pool of worker threads. This is managed in <JIRA_INSTALL>/conf/server.xml.

Every incoming HTTP request requires a Tomcat thread. If all threads are busy (e.g., waiting on a slow database query or a slow external API call from a plugin), new connections will be queued or refused.

  1. Open <JIRA_INSTALL>/conf/server.xml.
  2. Locate the <Connector> element handling HTTP requests (usually port 8080).
  3. Adjust the maxThreads and acceptCount attributes.
<Connector port="8080" relaxedPathChars="[]|" relaxedQueryChars="[]|{}^&#x5c;&#x60;&quot;&lt;&gt;"
           maxThreads="300" minSpareThreads="25" connectionTimeout="20000" enableLookups="false"
           acceptCount="200" protocol="HTTP/1.1"
           redirectPort="8443" />
  • maxThreads: The maximum number of concurrent request processing threads. Increase this cautiously (e.g., from the default 150 to 300). Be aware that each thread consumes native memory outside the JVM heap.
  • acceptCount: The maximum queue length for incoming connection requests when all possible request processing threads are in use. If the queue is full, the operating system will refuse the connection (resulting in Connection Refused).
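To compare the configured ceiling against live load, you can pull maxThreads straight out of server.xml. The parser below is a hypothetical grep-based sketch; it assumes the attribute appears literally on the Connector line, as in the example above.

```shell
#!/bin/sh
# Hypothetical helper: extract the first maxThreads value from a
# server.xml-style file. Compare the result against live concurrency,
# which on Linux you could approximate with something like:
#   ss -tn state established '( sport = :8080 )' | wc -l
get_max_threads() {
    grep -o 'maxThreads="[0-9]*"' "$1" | head -n 1 | grep -o '[0-9]*'
}

# Example: get_max_threads /opt/atlassian/jira/conf/server.xml
```

If the established-connection count routinely sits near the configured ceiling during peak hours, thread starvation is the likely cause of the refused connections.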

Step 5: Network Configuration and Reverse Proxy Troubleshooting

Often, what appears to be a "Jira connection refused" or "Jira timeout" error is not an issue with Jira itself, but rather a misconfiguration in the network layer or the reverse proxy (Nginx, Apache HTTP Server, HAProxy, or an Application Load Balancer) sitting in front of the application.

If Jira is configured to run behind a proxy, Tomcat must be aware of it so it can correctly generate absolute URLs and handle redirects. If this configuration is missing or incorrect, users might experience infinite redirect loops, mixed content warnings (HTTP over HTTPS), or login failures.

Check your server.xml <Connector> configuration for the required proxy attributes:

<Connector port="8080" relaxedPathChars="[]|" relaxedQueryChars="[]|{}^&#x5c;&#x60;&quot;&lt;&gt;"
           maxThreads="300" minSpareThreads="25" connectionTimeout="20000" enableLookups="false"
           acceptCount="200" protocol="HTTP/1.1"
           scheme="https" proxyName="jira.yourcompany.com" proxyPort="443"
           secure="true" />

Ensure that your reverse proxy is configured with appropriate timeout values. If a complex Jira report takes 90 seconds to generate, but your Nginx proxy_read_timeout is set to the default 60 seconds, Nginx will drop the connection and return a 504 Gateway Timeout to the user, even though Jira is still actively processing the request in the background.

Example Nginx adjustment:

location / {
    proxy_pass http://localhost:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    
    # Increase timeouts for slow Jira operations
    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
}

Step 6: Mitigating Rogue Third-Party Plugins and Apps

Jira's architecture allows third-party apps (plugins) to execute code directly within the same JVM space as the core application. A poorly written plugin, or a plugin incompatible with your current Jira version, can easily consume all available heap memory, exhaust the database connection pool, or create deadlocks that crash the entire system.

If you experience sudden crashes or severe slow performance immediately after an upgrade or installing a new app, the app is the primary suspect.

To troubleshoot app-related issues:

  1. Check the logs for stack traces originating from namespaces other than com.atlassian.* (e.g., com.vendorname.jira.plugin.*).
  2. Enable Jira's "Safe Mode." This temporarily disables all user-installed apps. If the performance issues disappear in Safe Mode, you have confirmed a plugin is the culprit.
  3. You can enable Safe Mode via the UI (Manage Apps -> Enter Safe Mode) or, if the UI is inaccessible, by modifying the database directly (though this is highly risky and should only be done with Atlassian Support guidance).
  4. Once in Safe Mode, re-enable apps one by one, monitoring performance and logs to identify the specific offender.
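Step 1 above can be partially automated. The sketch below is a rough heuristic, not an official tool: it tallies stack-trace frames that come from neither the JDK nor core Atlassian code, so vendor packages that appear frequently near a crash become plugin candidates.

```shell
#!/bin/sh
# Heuristic sketch: count stack-trace frames in a Jira log that belong to
# neither the JDK nor com.atlassian.* namespaces. Frequent offenders near
# the top of this list are third-party plugin candidates.
suspect_frames() {
    grep -E '^[[:space:]]*at ' "$1" \
        | grep -vE 'com\.atlassian\.|java\.|javax\.|jdk\.|sun\.' \
        | sort | uniq -c | sort -rn | head
}

# Example: suspect_frames /var/atlassian/application-data/jira/log/atlassian-jira.log
```

This only surfaces candidates; Safe Mode remains the definitive confirmation step.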

Step 7: Address Lucene Index Issues

Sometimes performance problems are entirely isolated to search functionality, JQL auto-complete, or dashboard gadgets rendering slowly. This often indicates corruption, fragmentation, or sub-optimal storage configuration for the Lucene indexes stored on the filesystem.

If atlassian-jira.log shows LockObtainFailedException, generic Lucene read errors, or warnings about excessive index search times:

  1. Navigate to Administration > System > Indexing.
  2. Perform a Background Index if the system is live but search is just slightly delayed or missing recent issues.
  3. Perform a Lock Jira and rebuild index if the index is completely corrupted or search is entirely broken. Note that this requires scheduled downtime as the instance will be inaccessible during the rebuild.

Infrastructure Note: Ensure the disk hosting <JIRA_HOME>/caches/indexesV1 is highly performant. Utilizing local NVMe or enterprise SSD arrays is strongly recommended over network-attached storage (NAS/NFS) for the Jira home directory, as the latency inherent in network storage drastically impacts Lucene's read/write performance.
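A crude way to spot high-latency storage, assuming a Linux host with GNU dd; a proper benchmark tool such as fio gives far more rigorous numbers, and the ~1 MB/s threshold below is a rough rule of thumb, not an Atlassian-published figure.

```shell
#!/bin/sh
# Crude synchronous-write probe (sketch): oflag=dsync forces each 4 KB
# block to disk, so the throughput dd reports reflects per-write latency.
# Sustained figures well under ~1 MB/s on the index volume suggest the
# storage is too slow for Lucene.
probe_write_latency() {
    target_dir=$1
    dd if=/dev/zero of="$target_dir/.latency_probe" bs=4k count=256 oflag=dsync 2>&1 | tail -n 1
    rm -f "$target_dir/.latency_probe"
}

# Point it at the index volume, e.g.:
#   probe_write_latency /var/atlassian/application-data/jira/caches/indexesV1
```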

Step 8: Restart, Monitor, and Verify

After making any configuration changes to setenv.sh, dbconfig.xml, or server.xml, you must perform a clean restart of the Jira service for the changes to take effect in the JVM runtime.

# Restart using systemd (recommended for modern Linux)
systemctl restart jira

# Or restart using the Atlassian-provided scripts
/opt/atlassian/jira/bin/stop-jira.sh
# Ensure the process has completely terminated before starting
sleep 10
/opt/atlassian/jira/bin/start-jira.sh

Monitor the catalina.out file continuously during startup. This is critical to ensure no syntax errors (like unclosed XML tags or malformed bash variables) were introduced during your manual edits. A syntax error in server.xml will prevent the Tomcat container from binding to the listening port, leaving the application completely down and presenting a "connection refused" state.

tail -f /opt/atlassian/jira/logs/catalina.out
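You can also catch XML syntax mistakes before the restart rather than after. A minimal well-formedness check, assuming either xmllint (from libxml2) or python3 is installed on the host:

```shell
#!/bin/sh
# Sketch: verify an XML file is well-formed before restarting Jira.
# Assumes either xmllint (libxml2) or python3 is available.
check_xml() {
    if command -v xmllint >/dev/null 2>&1; then
        xmllint --noout "$1"
    else
        python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1"
    fi
}

# check_xml /opt/atlassian/jira/conf/server.xml && echo "server.xml is well-formed"
```

This catches unclosed tags and mangled attributes; it does not validate that the attribute values themselves are sensible.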

Post-restart, continuously monitor the application's performance using Application Performance Monitoring (APM) tools (like AppDynamics, Dynatrace, New Relic, or Datadog) or via Atlassian's built-in Java Management Extensions (JMX) metrics. Observe the garbage collection frequency, heap utilization, and database connection pool saturation to empirically confirm your tuning has resolved the underlying bottlenecks and restored stability to the enterprise instance.
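If no APM agent is available, the standard JVM remote-JMX system properties can expose those metrics to a generic collector. The fragment below is a hypothetical setenv.sh addition; the port is arbitrary, and the disabled authentication/SSL settings are for illustration only and must be locked down (or bound to localhost) in production.

```shell
# Hypothetical setenv.sh fragment: expose JMX so an external collector can
# scrape heap, GC, and thread metrics. Port 3333 and the disabled
# auth/SSL settings are illustrative -- harden these before production use.
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote"
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote.port=3333"
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote.authenticate=false"
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote.ssl=false"
export CATALINA_OPTS
```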

Quick Reference: Essential Commands

# 1. Check for OutOfMemory errors
grep -i 'OutOfMemoryError' /var/atlassian/application-data/jira/log/atlassian-jira.log

# 2. Check for Database connection pool exhaustion
grep -i 'Timeout waiting for connection' /var/atlassian/application-data/jira/log/atlassian-jira.log

# 3. View real-time application logs to monitor a crash
tail -f /var/atlassian/application-data/jira/log/atlassian-jira.log | grep -i 'error\|warn\|fatal'

# 4. Check system memory usage to see if JVM is swapping
free -h

# 5. Backup configuration files before editing
cp /opt/atlassian/jira/bin/setenv.sh /opt/atlassian/jira/bin/setenv.sh.bak
cp /var/atlassian/application-data/jira/dbconfig.xml /var/atlassian/application-data/jira/dbconfig.xml.bak
cp /opt/atlassian/jira/conf/server.xml /opt/atlassian/jira/conf/server.xml.bak

Error Medic Editorial

Our team of certified Atlassian administrators and Site Reliability Engineers specializes in resolving complex enterprise scaling challenges, ensuring high availability for critical development tools.
