Error Medic

Fixing MySQL 'Too many connections' (Error 1040) and Related Crashes

Resolve the MySQL 'Too many connections' error, fix connection refused issues, and diagnose OOM crashes with our comprehensive database troubleshooting guide.

Key Takeaways
  • The 'Too many connections' error (1040) occurs when concurrent client connections exceed the max_connections system variable.
  • Connection leaks in application code (failing to close DB connections) are the most common root cause.
  • A quick temporary fix is to dynamically increase max_connections, but long-term stability requires connection pooling or code fixes.
  • High connection counts often lead to secondary failures like MySQL high CPU, out of memory (OOM) crashes, and 502 Bad Gateway errors on the application side.
Connection Limit Fix Approaches Compared
Method | When to Use | Time to Implement | Risk Level
Increase max_connections dynamically | Emergency mitigation during traffic spikes | 1 minute | Medium (can cause OOM if memory is insufficient)
Implement connection pooling (e.g., ProxySQL, a PgBouncer equivalent for MySQL) | High-traffic applications with many short-lived connections | Hours/days | Low (improves overall stability)
Fix application connection leaks | Processlist shows many 'Sleep' connections | Days/weeks | Low (permanent architectural fix)
Decrease interactive_timeout / wait_timeout | Old, idle connections linger too long | 5 minutes | Low to Medium (might drop legitimate long-running scripts)

Understanding the Error

The MySQL Too many connections error (Error 1040) is a critical bottleneck that prevents new clients from interacting with your database. When this happens, application servers often return HTTP 500 or 502 Bad Gateway errors because they cannot establish a database session.

Behind the scenes, MySQL enforces a hard limit on concurrent connections dictated by the max_connections variable. When Threads_connected reaches max_connections, MySQL actively refuses new attempts. This isn't just a configuration quirk; it's a self-preservation mechanism. Each connection consumes RAM (thread stack, sort buffers, join buffers). If MySQL allowed unlimited connections, it would eventually exhaust system memory, leading to an out-of-memory (OOM) crash triggered by the Linux kernel's OOM killer, or to severe CPU load from context-switching overhead.
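For intuition, the worst-case memory ceiling can be sketched with shell arithmetic: assume every connection allocates its full set of per-thread buffers at once. The buffer values below are illustrative figures near common defaults, not your server's actual settings (read those with SHOW VARIABLES); real usage is usually far lower because buffers are allocated on demand.

```shell
# Back-of-envelope worst case: every connection allocates all of its
# per-thread buffers simultaneously (rarely true, but it is the ceiling).
# Values are illustrative, in KiB -- check yours with:
#   mysql -e "SHOW VARIABLES LIKE '%buffer_size'"
MAX_CONNECTIONS=500
THREAD_STACK=256        # thread_stack
SORT_BUFFER=256         # sort_buffer_size
JOIN_BUFFER=256         # join_buffer_size
READ_BUFFER=128         # read_buffer_size
READ_RND_BUFFER=256     # read_rnd_buffer_size

PER_CONN=$((THREAD_STACK + SORT_BUFFER + JOIN_BUFFER + READ_BUFFER + READ_RND_BUFFER))
TOTAL_MB=$((MAX_CONNECTIONS * PER_CONN / 1024))
echo "Worst case: ${PER_CONN} KiB per connection, ~${TOTAL_MB} MiB for ${MAX_CONNECTIONS} connections"
```

With these sample numbers, 500 connections could claim roughly half a gigabyte on top of the InnoDB buffer pool, which is why raising max_connections on a memory-constrained host can trade one failure mode for another.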

Symptoms and Related Failures

When connection limits are hit or resources are exhausted, you might encounter a cascade of related errors:

  • Connection refused: Often occurs right after a crash or when the service drops. If the daemon (mysqld) is dead, nothing can connect.
  • Service not starting: If an OOM event killed MySQL, or a full disk prevents writing the PID file or InnoDB redo logs, the service will fail to restart.
  • General slowness: Before hitting the hard limit, the server may thrash, causing queries to back up and execution times to spike.

Step 1: Diagnose the Current State

First, you need to gain access. If you cannot log in normally because all connection slots are consumed, use a dedicated admin interface if one is configured (admin_port in MySQL 8.0.14+, or extra_port in Percona Server and MariaDB), or log in with a privileged account. By default, MySQL reserves one extra connection slot for a user with the SUPER or CONNECTION_ADMIN privilege so you can always get in to troubleshoot.

Once logged in, check the current connections and limits:

SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';

Next, look at what the connections are actually doing:

SHOW FULL PROCESSLIST;

Analyze the output. Are the connections actively executing queries, or are they sitting in a Sleep state?

  • Many 'Sleep' state connections: This strongly points to a connection leak in your application (the app opens a connection, executes a query, but forgets to close the connection) or a missing connection pooler.
  • Many active, slow queries: This indicates a performance bottleneck (missing indexes, bad query design) causing connections to pile up because queries take too long to finish. This often correlates with high CPU usage.
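To make that call quickly, you can tally the process list by command state. The sketch below uses an illustrative here-doc standing in for a real snapshot (on a live server you would capture one with mysql -e "SHOW FULL PROCESSLIST" > /tmp/processlist.txt); the IDs, hosts, and database name are invented sample data.

```shell
# Illustrative sample data in the same column layout as SHOW FULL PROCESSLIST
# output: Id User Host db Command Time State Info
cat <<'EOF' > /tmp/processlist.txt
Id User Host db Command Time State Info
12 app 10.0.0.5:33456 shop Sleep 300 - NULL
13 app 10.0.0.5:33457 shop Query 45 executing SELECT 1
14 app 10.0.0.6:40122 shop Sleep 1200 - NULL
EOF

# Tally connections by Command (column 5): many 'Sleep' rows suggest a leak.
awk 'NR > 1 { count[$5]++ } END { for (c in count) print c, count[c] }' /tmp/processlist.txt | sort
```

A result dominated by Sleep rows points you toward the application; a result dominated by Query rows points you toward slow SQL.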

Step 2: Immediate Mitigation (The Quick Fix)

If your server has sufficient RAM, the fastest way to restore service is to dynamically increase the max_connections limit.

Note: Do not increase this blindly. Check your available memory first using free -m.

SET GLOBAL max_connections = 500;

To make this permanent across reboots, you must also add it to your my.cnf or mysqld.cnf file (usually located in /etc/mysql/ or /etc/my.cnf):

[mysqld]
max_connections = 500

Handling 'Sleep' Connections

If your process list is full of sleeping connections, you can forcibly close them. You can write a quick bash script or use SQL to generate kill commands, or adjust timeout variables:

-- Decrease the time a connection can sit idle before MySQL kills it
SET GLOBAL wait_timeout = 600; 
SET GLOBAL interactive_timeout = 600;

Note that SET GLOBAL timeout changes apply only to connections opened after the change; existing idle sessions keep their old timeout, so kill those manually if needed. Again, update my.cnf to persist these changes.
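One way to script the cleanup is to generate KILL statements from a connection snapshot. The sketch below works on an illustrative here-doc (id, command, idle seconds); on a live server you would produce the same three columns with mysql -Ne "SELECT id, command, time FROM information_schema.processlist". The 600-second threshold is an arbitrary example; always review the generated statements before piping them back into mysql.

```shell
# Illustrative snapshot: connection id, command, seconds in that state.
cat <<'EOF' > /tmp/conns.txt
12 Sleep 900
13 Query 45
14 Sleep 1200
EOF

# Emit a KILL statement for every connection sleeping longer than 600 s.
# Review the output, then feed it back in:  ... | mysql -u root -p
awk '$2 == "Sleep" && $3 > 600 { print "KILL " $1 ";" }' /tmp/conns.txt
```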

Step 3: Addressing Root Causes (OOM and Disk Issues)

Investigating MySQL Out of Memory (OOM) and Core Dumps

If the server actually crashed (mysql crash, mysql core dump), check the Linux system logs for OOM killer activity:

dmesg -T | grep -i 'killed process'
grep -i oom /var/log/syslog /var/log/messages

If MySQL was killed by OOM, increasing max_connections will only make the problem worse. Each connection uses base memory plus per-connection buffers (read_buffer_size, sort_buffer_size). You must either:

  1. Upgrade the server RAM.
  2. Reduce per-connection buffer sizes in my.cnf.
  3. Implement a proxy/pooler (like ProxySQL) to queue connections at the network layer rather than letting them hit the database engine.
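For option 2, a my.cnf adjustment along these lines trades some per-query throughput for connection headroom. The values below are an illustrative sketch, not recommendations; tune them against your workload and available RAM.

```ini
[mysqld]
# Illustrative values only -- benchmark before adopting.
max_connections       = 300
thread_stack          = 256K
sort_buffer_size      = 256K
join_buffer_size      = 256K
read_buffer_size      = 128K
read_rnd_buffer_size  = 256K
```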

Investigating 'MySQL Disk Full'

Sometimes "MySQL not working" or the service failing to start is a disk space issue disguised as a connection or crash problem. Check your disk space:

df -h

If the partition holding /var/lib/mysql is at 100%, MySQL cannot write to its binlogs, relay logs, or InnoDB tablespaces. It will freeze or crash.

  • Fix: Clear old binlogs safely from within MySQL, e.g. PURGE BINARY LOGS BEFORE '2023-10-01 00:00:00';. Do not rm the files from the filesystem; that leaves MySQL's binary log index out of sync with what is actually on disk.
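To keep binlogs from filling the disk again, set a retention window in my.cnf. In MySQL 8.0 the variable is binlog_expire_logs_seconds (it replaces the older expire_logs_days); the seven-day window below is an illustrative choice, not a recommendation.

```ini
[mysqld]
# Automatically purge binary logs older than 7 days (604800 seconds).
binlog_expire_logs_seconds = 604800
```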

Step 4: Long-Term Architectural Fixes

To permanently solve connection exhaustion, look to your application architecture:

  1. Connection Pooling: Instead of the application opening a new connection for every web request, use a connection pool (e.g., HikariCP for Java, SQLAlchemy pooling for Python, or a middleware proxy like ProxySQL). This maintains a small number of persistent connections to MySQL and multiplexes application requests through them.
  2. Code Audits: Ensure every code path that opens a database connection also closes it; use finally blocks, try-with-resources, or context managers rather than relying on garbage collection to release connections.
  3. Caching: If many connections are fetching the exact same data, put Redis or Memcached in front of your database to reduce read load.

Quick Reference Commands

# 1. Check if MySQL is running and responding
systemctl status mysql

# 2. Log in to MySQL (requires root/SUPER privileges if max_connections is hit)
mysql -u root -p

# 3. View current connection stats (run inside MySQL prompt)
# SHOW GLOBAL VARIABLES LIKE 'max_connections';
# SHOW GLOBAL STATUS LIKE 'Threads_connected';
# SHOW PROCESSLIST;

# 4. Dynamically increase limits (temporary fix, run inside MySQL)
# SET GLOBAL max_connections = 500;
# SET GLOBAL wait_timeout = 300;

# 5. Check system logs for Out Of Memory (OOM) kills if MySQL crashed
dmesg -T | grep -i 'killed process'

# 6. Check disk space (if MySQL failed to start/crashed)
df -h /var/lib/mysql

Error Medic Editorial

Error Medic Editorial comprises senior DevOps engineers, SREs, and database administrators dedicated to solving complex infrastructure bottlenecks and documenting open-source best practices.
