MySQL Too Many Connections: Fix ERROR 1040 (HY000) Fast
Fix MySQL ERROR 1040 Too Many Connections: increase max_connections, kill idle threads, tune wait_timeout, fix connection pools. Step-by-step commands inside.
- ERROR 1040 (HY000): Too many connections fires when active threads reach max_connections (default 151); root causes are misconfigured app connection pools, connection leaks from code that never releases handles, and wait_timeout set to the 8-hour default allowing ghost connections to accumulate
- MySQL crashes (OOM / core dump), disk-full states, and service-not-starting failures all present as connection refusals—diagnose with dmesg, df, and the error log before assuming a max_connections issue
- Immediate fix: run SET GLOBAL max_connections = 500 (no restart needed), kill sleeping threads older than 60 s, set wait_timeout = 180 in my.cnf, then fix the application connection pool size as the permanent solution
| Method | When to Use | Time | Risk |
|---|---|---|---|
| SET GLOBAL max_connections | Immediate stabilization while root cause is investigated | < 1 min | Low — live change, survives until next restart |
| Kill sleeping connections via KILL id | Idle connections from crashed app pods or bad pool config flooding the process list | < 2 min | Low — only terminates idle sessions |
| Tune wait_timeout / interactive_timeout | Default 8 h timeout keeping ghost connections alive long after the app process died | < 5 min (my.cnf + reload) | Low |
| Fix application connection pool (pool_size, release on return) | Root-cause fix when app pools are sized larger than max_connections or connections are never released | 30–60 min (code deploy) | Medium — requires app-side change and deployment |
| Deploy ProxySQL connection multiplexer | 50+ app instances each with their own pool making total connections unmanageable | 1–4 h (new infrastructure) | Medium — additional component to operate |
| Reduce innodb_buffer_pool_size + add swap to stop OOM kills | mysqld is being killed by OOM killer; crash logs show signal 9 or 'killed' | 30 min–1 day (may need VM resize) | High — tuning buffer pool affects query performance |
Understanding the MySQL "Too Many Connections" Error
MySQL enforces a hard ceiling on simultaneous client connections via the max_connections system variable (default: 151 in MySQL 8.0+, 100 in 5.7 and earlier). When every slot is occupied, any new connection attempt—from your application server, a DBA's terminal, or a monitoring agent—fails immediately with:
ERROR 1040 (HY000): Too many connections
This error is almost always a symptom, not the root cause. The actual culprits are misconfigured connection pools, application code that acquires connections without releasing them, or workload growth that outpaced your MySQL configuration years ago.
Related error messages and symptoms you'll see alongside or instead of 1040:
- MySQL 502 Bad Gateway: Nginx, HAProxy, or an AWS load balancer reports upstream unreachable—connection refusal has propagated to the HTTP layer.
- Can't connect to MySQL server on 'localhost' (111): The TCP socket is actively refusing connections. Could be max_connections, but also firewall rules, a crashed mysqld, or the wrong port/host.
- ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock': The Unix domain socket doesn't exist, which means mysqld is not running. Check for OOM kills or disk-full crashes.
- MySQL core dump / crash: Under extreme memory pressure from thousands of open connection threads, the Linux OOM killer terminates the mysqld process. You'll find a core file in the datadir and an Aborted entry in the error log.
- MySQL high CPU: Hundreds of threads context-switching burn CPU even when the queries themselves are idle sleeping threads.
- MySQL disk full: Binary logs, InnoDB redo logs, or temporary files exhaust storage. A full disk causes InnoDB to refuse new transactions, which queues up connections until they time out or error.
Step 1: Confirm mysqld Is Running and Check the Error Log
Before touching max_connections, verify the service is alive and understand what actually happened:
# Service status (use 'mysql' on Debian/Ubuntu, 'mysqld' on RHEL/CentOS/Rocky)
systemctl status mysqld
systemctl status mysql
# Check for OOM kill in the kernel ring buffer
dmesg | grep -iE 'oom|killed process|mysqld' | tail -20
# Disk full kills MySQL silently — check storage first
df -h /var/lib/mysql /tmp
# Find and review the MySQL error log
mysql -u root -p -e "SHOW VARIABLES LIKE 'log_error';"
tail -80 /var/log/mysql/error.log
Key phrases to scan for in the error log:
| Phrase | Meaning |
|---|---|
| Too many connections | Confirms 1040 is occurring |
| InnoDB: Database was not shut down normally | mysqld crashed; InnoDB recovery will run on next start |
| Out of memory or mmap(...) failed | OOM pressure; reduce the buffer pool or add RAM |
| Disk is full writing | Storage exhaustion; purge binary logs or expand the volume |
| signal 11 or Aborted | Crash due to a bug or hardware issue |
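If you are scanning logs by hand often, the table above can be turned into a small triage helper. A minimal sketch, assuming you pass it lines read from the error log (the phrase list and diagnosis strings below mirror the table; they are illustrative, not exhaustive):

```python
# Map known error-log phrases to the failure categories from the table above.
PHRASES = {
    "Too many connections": "connection limit hit (ERROR 1040)",
    "Database was not shut down normally": "crash; InnoDB recovery pending",
    "Out of memory": "OOM pressure",
    "Disk is full": "storage exhaustion",
    "signal 11": "crash due to bug or hardware",
}

def classify_log(lines):
    """Return (line, diagnosis) pairs for lines matching a known phrase."""
    hits = []
    for line in lines:
        for phrase, diagnosis in PHRASES.items():
            if phrase in line:
                hits.append((line, diagnosis))
    return hits
```

Feed it `tail -200` of the error log and you get an instant shortlist of what actually went wrong, rather than eyeballing hundreds of lines mid-incident.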
Step 2: Inspect Live Connection State
MySQL reserves one extra connection slot for users with the CONNECTION_ADMIN privilege (MySQL 8.0+) or SUPER (5.7). This means root can always connect even at max_connections:
-- How close are we to the limit?
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Max_used_connections'; -- peak since last restart
SHOW STATUS LIKE 'Connection_errors_max_connections'; -- refused count
-- Full process list with query preview
SELECT id, user, host, db, command, time, state,
LEFT(IFNULL(info, '[NULL]'), 100) AS query
FROM information_schema.processlist
ORDER BY time DESC
LIMIT 60;
-- Connection breakdown by command type
SELECT command, COUNT(*) AS cnt
FROM information_schema.processlist
GROUP BY command
ORDER BY cnt DESC;
-- Which application hosts are consuming the most connections?
SELECT SUBSTRING_INDEX(host, ':', 1) AS app_host,
user,
COUNT(*) AS connections
FROM information_schema.processlist
GROUP BY app_host, user
ORDER BY connections DESC
LIMIT 20;
If the majority of connections show Command: Sleep with high time values, your application is acquiring connections and never returning them to the pool—this is the single most common root cause of ERROR 1040.
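The host/user breakdown above can also be done application-side once you have the processlist rows in hand. A sketch, assuming rows fetched as `(host, user, command)` tuples from information_schema.processlist (the sample rows in the test are made up):

```python
from collections import Counter

def connection_sources(rows):
    """Aggregate processlist rows by (app_host, user).

    rows: iterable of (host, user, command) tuples as returned by
    a query against information_schema.processlist.
    """
    counts = Counter()
    for host, user, command in rows:
        app_host = host.split(":", 1)[0]  # strip the ephemeral client port
        counts[(app_host, user)] += 1
    return counts
```

Whichever app host tops this list is where to start looking for the leaking pool.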
Step 3: Kill Idle Sleeping Connections for Immediate Relief
When you're in an active outage and need connections freed right now:
-- Preview connections to be killed (sleeping > 60 seconds)
SELECT id, user, host, time
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 60
ORDER BY time DESC;
-- Generate and run kill statements
SELECT CONCAT('KILL ', id, ';')
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 60;
-- Copy and paste the output, or run the diagnostic script at the end of this guide
Note: KILL drops the connection immediately; InnoDB automatically rolls back any transaction that was still open on it.
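The generate-and-paste step above can be scripted. A minimal sketch that builds the KILL statements from processlist rows, assuming each row is a dict with `id`, `command`, and `time` keys (the sample rows are illustrative):

```python
def kill_statements(rows, min_idle=60):
    """Build KILL statements for connections sleeping longer than min_idle.

    rows: dicts with 'id', 'command', 'time' keys, one per processlist entry.
    Returns a list of SQL statements to execute; execute them yourself so
    you keep a human in the loop during an outage.
    """
    return [
        f"KILL {row['id']};"
        for row in rows
        if row["command"] == "Sleep" and row["time"] > min_idle
    ]
```

Keeping the threshold as a parameter lets you start conservatively (say, 300 s) and tighten it if the connection count is still climbing.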
Step 4: Raise max_connections for Temporary Stabilization
-- Live change — takes effect immediately, no restart required
-- Reverts to my.cnf value on next mysqld restart
SET GLOBAL max_connections = 500;
-- Confirm the change
SHOW VARIABLES LIKE 'max_connections';
Make it permanent. On Ubuntu/Debian, edit /etc/mysql/mysql.conf.d/mysqld.cnf:
[mysqld]
max_connections = 500
wait_timeout = 180
interactive_timeout = 180
connect_timeout = 10
On RHEL/CentOS/Rocky/AlmaLinux, edit /etc/my.cnf or /etc/my.cnf.d/mysql-server.cnf:
[mysqld]
max_connections = 500
wait_timeout = 180
my.cnf changes take effect only on a full restart; mysqld does not re-read its configuration on a systemctl reload:
systemctl restart mysql     # Debian/Ubuntu
systemctl restart mysqld    # RHEL/CentOS/Rocky
On MySQL 8.0+ you can skip the restart entirely: SET PERSIST max_connections = 500; applies immediately and survives restarts via the persisted-variables file.
Memory sizing before raising max_connections:
Each MySQL connection thread consumes approximately thread_stack (default 1 MiB) + sort/join buffers + connection overhead. Rough rule:
RAM reserved for connections ≈ max_connections × 2 MiB
500 connections = ~1 GiB. Check available RAM before increasing:
free -m
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
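The sizing rule above is simple enough to encode. A sketch, using the article's ~2 MiB-per-connection rule of thumb (a rough planning figure, not a measurement of your workload):

```python
def connection_ram_mib(max_connections, per_conn_mib=2):
    """Approximate RAM reserved for connection threads, in MiB.

    per_conn_mib is the article's rule of thumb (~thread_stack plus
    sort/join buffers and connection overhead); tune it if your
    per-session buffers are larger than defaults.
    """
    return max_connections * per_conn_mib
```

Compare the result against `free -m` output minus innodb_buffer_pool_size before committing to a higher limit.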
Step 5: Fix the Root Cause — Application Connection Pool Configuration
Raising max_connections without fixing the application is borrowing time. The correct permanent fix is proper connection pool configuration:
Node.js with mysql2:
const mysql = require('mysql2/promise');
const pool = mysql.createPool({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PASS,
database: process.env.DB_NAME,
connectionLimit: 20, // size pool to (max_connections / num_app_instances) - overhead
waitForConnections: true, // queue requests rather than error immediately
queueLimit: 100,
idleTimeout: 60000, // release idle connections after 60 s
enableKeepAlive: true,
keepAliveInitialDelay: 10000
});
// Always use pool.query() — it acquires and releases automatically
Python with SQLAlchemy 2.x:
from sqlalchemy import create_engine
engine = create_engine(
"mysql+pymysql://user:pass@host/db",
pool_size=20,
max_overflow=10, # allow burst to 30; set to 0 for strict cap
pool_timeout=30, # raise TimeoutError if no connection within 30 s
pool_recycle=3600, # recycle connections every hour to avoid stale handles
pool_pre_ping=True, # test connection health before use (catches 'gone away')
)
PHP (avoid PDO persistent connections with FPM):
// PDO::ATTR_PERSISTENT=true causes connections to accumulate across FPM workers
// Disable it, or use ProxySQL in front of MySQL instead
$pdo = new PDO($dsn, $user, $pass, [
PDO::ATTR_PERSISTENT => false,
PDO::ATTR_TIMEOUT => 5,
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
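The `connectionLimit: 20` comment in the Node.js example above encodes a general rule: divide the server's connection budget across app instances, minus headroom for admin and monitoring sessions. A sketch of that arithmetic (the default headroom of 10 is an assumption, not a MySQL setting):

```python
def pool_size_per_instance(max_connections, num_instances, headroom=10):
    """Largest safe per-instance pool size.

    headroom reserves slots for DBAs, monitoring agents, and cron jobs
    so the app fleet cannot consume every connection. Never returns
    less than 1.
    """
    return max(1, (max_connections - headroom) // num_instances)
```

For example, 24 app instances against max_connections = 500 supports a pool of 20 each; 50 instances against the default 150 supports only 2, which is exactly the scenario where ProxySQL (Step 7) becomes worth the effort.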
Step 6: Reduce wait_timeout to Reclaim Ghost Connections
MySQL's default wait_timeout is 28800 seconds — 8 hours. A sleeping connection from an application pod that crashed at 9 AM can still occupy a slot until 5 PM:
-- Check current timeouts
SHOW VARIABLES LIKE '%timeout%';
-- Set to 3 minutes for stateless web applications
SET GLOBAL wait_timeout = 180;
SET GLOBAL interactive_timeout = 180;
Persist in my.cnf. Be aware: connection pools with keepalive intervals longer than wait_timeout will see MySQL server has gone away (error 2006) — tune pool_recycle in your app to be shorter than wait_timeout.
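The constraint above (pool recycle must beat wait_timeout) is worth computing rather than guessing. A sketch; the 0.8 safety margin is an assumption, not a MySQL recommendation:

```python
def safe_pool_recycle(wait_timeout, margin=0.8):
    """Recommend a pool_recycle value comfortably below wait_timeout.

    If the pool recycles connections after the server has already
    reaped them, the app sees 'MySQL server has gone away' (2006).
    """
    return int(wait_timeout * margin)
```

With wait_timeout = 180 as configured above, this suggests pool_recycle = 144 for the SQLAlchemy engine in Step 5.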
Step 7: ProxySQL for High-Scale Environments
When you have 50+ application instances each with their own pool of 20 connections, the math is brutal: 50 × 20 = 1,000 connections to MySQL. ProxySQL acts as a connection multiplexer and query router, mapping thousands of upstream application connections to a small set of persistent MySQL connections:
# Architecture after ProxySQL:
# 50 app pods × 20 pool_size = 1,000 connections to ProxySQL
# ProxySQL maintains 50–100 persistent connections to MySQL
# MySQL max_connections = 150 is now sufficient
# Install on Ubuntu
wget https://github.com/sysown/proxysql/releases/download/v2.6.3/proxysql_2.6.3-ubuntu22_amd64.deb
dpkg -i proxysql_2.6.3-ubuntu22_amd64.deb
systemctl enable --now proxysql
# Configure via the admin interface (port 6032)
mysql -u admin -padmin -h 127.0.0.1 -P 6032
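The before/after arithmetic in the comments above generalizes to a simple ratio. A sketch (the backend connection count is whatever you configure in ProxySQL's connection pool; 100 below is illustrative):

```python
def multiplexing_ratio(instances, pool_size, backend_connections):
    """How many app-side connections each MySQL connection absorbs.

    frontend = total connections the app fleet opens to ProxySQL;
    backend_connections = persistent connections ProxySQL keeps to MySQL.
    """
    frontend = instances * pool_size
    return frontend // backend_connections
```

A ratio of 10:1 or better is the usual sign that multiplexing is paying for the operational cost of running ProxySQL.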
Step 8: Recovering from MySQL Crash, OOM Kill, or Disk Full
OOM kill recovery:
# Confirm mysqld was killed by the OOM killer
journalctl -u mysqld --since '1 hour ago' | grep -i 'killed\|oom'
dmesg | grep -iE 'oom_kill|mysqld' | tail -20
# Start MySQL — InnoDB will run crash recovery automatically
systemctl start mysqld
tail -f /var/log/mysql/error.log
# Prevent future OOM kills:
# 1. Reduce innodb_buffer_pool_size to 60-70% of available RAM
# 2. Add swap: fallocate -l 4G /swapfile && mkswap /swapfile && swapon /swapfile
# 3. Set vm.swappiness = 10 in /etc/sysctl.conf
Disk full recovery:
# Find what's consuming space
du -sh /var/lib/mysql/* | sort -rh | head -20
# Purge old binary logs (safe if not used for replication)
mysql -u root -p -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 3 DAY);"
# Show binary log retention setting
mysql -u root -p -e "SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';"
# Disable binary logging if replication is not in use (add to my.cnf)
# skip-log-bin
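Since binlog_expire_logs_seconds takes seconds rather than days, a quick conversion helper avoids off-by-a-zero mistakes when matching the 3-day purge above:

```python
def binlog_expire_seconds(days):
    """Convert a binlog retention window in days to the value expected
    by binlog_expire_logs_seconds (MySQL 8.0's replacement for the
    deprecated expire_logs_days)."""
    return days * 24 * 3600
```

So a 3-day window means setting binlog_expire_logs_seconds = 259200 in my.cnf.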
InnoDB crash recovery (if mysqld won't start after crash):
# Add to [mysqld] in my.cnf, start MySQL, export data, then remove
innodb_force_recovery = 1
# If 1 fails, increment to 2, 3, ... 6
# Level 6 = read-only, may lose data. Always try lowest level first.
Step 9: Monitoring — Catch Connection Pressure Before It Becomes an Outage
Alert before you hit max_connections:
# Quick one-liner for dashboards/cron
mysqladmin -u root -p extended-status 2>/dev/null | \
grep -E 'Threads_connected|Max_used_connections|Connection_errors_max'
# Prometheus alert rule (with mysqld_exporter)
# Alert when connections exceed 80% of max_connections
# mysql_global_status_threads_connected
# / mysql_global_variables_max_connections > 0.8
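The 80% rule in the Prometheus expression above is the same check you'd put in any cron-based monitor. A sketch of the threshold logic (the 0.8 threshold mirrors the alert rule; adjust to taste):

```python
def connection_pressure(threads_connected, max_connections, threshold=0.8):
    """Return (utilization, should_alert) for a connection-pressure check.

    Mirrors the Prometheus rule: alert once Threads_connected exceeds
    threshold * max_connections, i.e. before ERROR 1040 starts firing.
    """
    pct = threads_connected / max_connections
    return pct, pct > threshold
```

Alerting at 80% gives you time to run Steps 2-4 calmly instead of during an outage.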
One-Shot Diagnostic Script
#!/usr/bin/env bash
# mysql-connection-diag.sh — MySQL Too Many Connections diagnostic script
# Usage: MYSQL_USER=root MYSQL_PASS=secret bash mysql-connection-diag.sh
# Requires: PROCESS privilege on MySQL, root on the Linux host
MYSQL_USER="${MYSQL_USER:-root}"
MYSQL_PASS="${MYSQL_PASS:-}"
MYSQL_PASS_ARG="${MYSQL_PASS:+-p${MYSQL_PASS}}"
MYSQL="mysql -u${MYSQL_USER} ${MYSQL_PASS_ARG} -N -B"
echo '========================================'
echo ' MySQL Connection Diagnostic'
echo '========================================'
echo ''
echo '--- 1. Service Status ---'
systemctl is-active mysqld mysql 2>/dev/null | grep -m1 '^active' || echo 'No active mysqld/mysql unit found'
echo ''
echo '--- 2. Disk Space (MySQL data directory + tmp) ---'
df -h /var/lib/mysql /tmp 2>/dev/null
echo ''
echo '--- 3. OOM Kills in Kernel Log (last 15) ---'
dmesg 2>/dev/null | grep -iE 'oom|killed process|mysqld' | tail -15 || echo 'No OOM events found (or dmesg unavailable)'
echo ''
echo '--- 4. MySQL Error Log Tail (last 20 lines) ---'
ERRLOG=$(${MYSQL} -e "SHOW VARIABLES LIKE 'log_error'" 2>/dev/null | awk '{print $2}')
if [ -f "${ERRLOG}" ]; then
tail -20 "${ERRLOG}"
else
echo "Error log not found at: ${ERRLOG}"
fi
echo ''
echo '--- 5. Connection Statistics ---'
${MYSQL} <<'EOF'
SELECT
(SELECT VARIABLE_VALUE FROM performance_schema.global_variables
WHERE VARIABLE_NAME = 'max_connections') AS max_connections,
(SELECT VARIABLE_VALUE FROM performance_schema.global_status
WHERE VARIABLE_NAME = 'Threads_connected') AS current_connections,
(SELECT VARIABLE_VALUE FROM performance_schema.global_status
WHERE VARIABLE_NAME = 'Max_used_connections') AS peak_connections,
(SELECT VARIABLE_VALUE FROM performance_schema.global_status
WHERE VARIABLE_NAME = 'Connection_errors_max_connections') AS refused_count\G
EOF
echo ''
echo '--- 6. Connection Breakdown by Command ---'
${MYSQL} -e "SELECT command, COUNT(*) AS cnt FROM information_schema.processlist GROUP BY command ORDER BY cnt DESC;"
echo ''
echo '--- 7. Top Connection Sources (host + user) ---'
${MYSQL} <<'EOF'
SELECT
SUBSTRING_INDEX(host, ':', 1) AS app_host,
user,
COUNT(*) AS connections,
SUM(CASE WHEN command = 'Sleep' THEN 1 ELSE 0 END) AS sleeping
FROM information_schema.processlist
GROUP BY app_host, user
ORDER BY connections DESC
LIMIT 20;
EOF
echo ''
echo '--- 8. Long-Running Queries (> 30 seconds, non-sleeping) ---'
${MYSQL} <<'EOF'
SELECT id, user, host, time, state,
LEFT(IFNULL(info, '[NULL]'), 120) AS query
FROM information_schema.processlist
WHERE command != 'Sleep' AND time > 30
ORDER BY time DESC
LIMIT 10;
EOF
echo ''
echo '--- 9. Timeout Variables ---'
${MYSQL} -e "SHOW VARIABLES LIKE '%timeout%';" | grep -E 'wait_timeout|interactive_timeout|connect_timeout'
echo ''
echo '--- 10. Recommendations ---'
THREADS=$(${MYSQL} -e "SELECT VARIABLE_VALUE FROM performance_schema.global_status WHERE VARIABLE_NAME='Threads_connected';" 2>/dev/null)
MAXCONN=$(${MYSQL} -e "SELECT VARIABLE_VALUE FROM performance_schema.global_variables WHERE VARIABLE_NAME='max_connections';" 2>/dev/null)
SLEEPING=$(${MYSQL} -e "SELECT COUNT(*) FROM information_schema.processlist WHERE command='Sleep';" 2>/dev/null)
if [ -n "${THREADS}" ] && [ -n "${MAXCONN}" ]; then
PCT=$(( THREADS * 100 / MAXCONN ))
echo "Connected: ${THREADS} / ${MAXCONN} (${PCT}%)"
echo "Sleeping: ${SLEEPING}"
if [ "${PCT}" -ge 80 ]; then
echo ""
echo "WARNING: >80% of max_connections used. Recommended actions:"
echo " 1. SET GLOBAL max_connections = $((MAXCONN * 2));"
echo " 2. SET GLOBAL wait_timeout = 180;"
echo " 3. Kill sleeping connections: SELECT CONCAT('KILL ',id,';') FROM information_schema.processlist WHERE command='Sleep' AND time > 60;"
fi
else
echo "Could not retrieve connection counts — check MySQL credentials."
fi
echo ''
echo '========================================'
echo ' Diagnostic complete'
echo '========================================'

Error Medic Editorial
The Error Medic Editorial team consists of senior DevOps engineers, SREs, and database administrators with combined experience across high-traffic MySQL deployments on AWS RDS, Google Cloud SQL, and self-managed bare-metal clusters. Our guides are built from real incident postmortems, covering everything from connection pool misconfiguration to InnoDB crash recovery.
Sources
- https://dev.mysql.com/doc/refman/8.0/en/too-many-connections.html
- https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_max_connections
- https://dev.mysql.com/doc/refman/8.0/en/innodb-recovery.html
- https://stackoverflow.com/questions/1763306/mysql-error-1040-too-many-connections
- https://proxysql.com/documentation/main-runtime/
- https://dev.mysql.com/doc/refman/8.0/en/kill.html