Error Medic

Troubleshooting Redis Error: OOM command not allowed when used memory > 'maxmemory'

Fix the Redis 'OOM command not allowed when used memory > maxmemory' error. Learn how to configure maxmemory-policy, scale RAM, and optimize memory usage.

Key Takeaways
  • Root Cause 1: The Redis instance has reached its explicitly configured `maxmemory` limit and cannot accept new write operations.
  • Root Cause 2: An inappropriate `maxmemory-policy` (like `noeviction`) is set, preventing Redis from automatically deleting older keys to free up space.
  • Quick Fix: Dynamically change the eviction policy to `allkeys-lru` or `volatile-lru`, or increase the `maxmemory` allocation limit via `redis-cli`.
Fix Approaches Compared
| Method | When to Use | Time | Risk |
| --- | --- | --- | --- |
| Change eviction policy (`maxmemory-policy`) | Redis acts purely as a cache and older/less-used data can be safely discarded | Fast (< 5 mins) | Low (data loss is expected in caching scenarios) |
| Increase `maxmemory` limit | The dataset genuinely needs more RAM and the underlying host has available physical memory | Fast (< 5 mins) | Low (assuming the OS has sufficient free RAM) |
| Scale up server/instance | The host OS is completely out of memory and `maxmemory` cannot be safely increased locally | Slow (minutes to hours) | Medium (may require application downtime or failovers) |
| Optimize data structures & implement TTLs | Long-term fix to reduce the memory footprint per key and prevent unbounded growth | Slow (requires code changes) | Medium (requires application testing and deployment) |

Understanding the Error

The OOM command not allowed when used memory > 'maxmemory' error in Redis (often manifesting in application logs as Redis::CommandError: OOM command not allowed...) is a definitive signal that your Redis instance has exhausted its permitted memory allocation. Redis is fundamentally an in-memory data structure store; all keys and values must reside in Random Access Memory (RAM) for fast access.

When a Redis server is provisioned, administrators typically define a maxmemory directive within the redis.conf configuration file. This critical setting instructs the Redis process on the absolute maximum amount of RAM it is permitted to consume for data storage. Once this defined threshold is breached, Redis relies on the configured maxmemory-policy to determine the subsequent course of action.

If the eviction policy is set to noeviction (which serves as the default safety mechanism in many Redis distributions), or if a different policy attempts to evict keys but fails to reclaim sufficient memory, Redis transitions into a defensive, read-only state. It will actively reject any incoming commands that would result in further memory allocation. This encompasses commands such as SET, LPUSH, HSET, ZADD, and even EVAL scripts that generate new keys. Conversely, read-oriented commands like GET, LRANGE, and SMEMBERS will continue to function without interruption, allowing applications to read existing data but preventing state mutations.
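In application code, this failure surfaces as a server error reply whose message begins with `OOM`. As a minimal sketch (the helper name `is_redis_oom` is illustrative, not part of any client library), you can branch on that prefix so a full cache degrades a request instead of crashing it:

```python
def is_redis_oom(message: str) -> bool:
    """True when a Redis error reply is the maxmemory write rejection."""
    return message.startswith("OOM command not allowed")


# Example: skip the cache write instead of failing the whole request.
error = "OOM command not allowed when used memory > 'maxmemory'."
if is_redis_oom(error):
    print("cache full; serving the request without caching")
```

In practice the `message` would come from the exception your client library raises on a rejected write (for example, `redis.exceptions.ResponseError` in redis-py).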

Step 1: Diagnose the Current Memory Usage

Before executing immediate remediation steps, it is vital to diagnose the root cause of the memory exhaustion. Is the system experiencing an organic, gradual increase in data volume, a sudden surge in traffic, a runaway process creating orphaned keys, or a lack of proper Time-To-Live (TTL) expirations?

Connect to your affected Redis instance using the redis-cli tool and execute the INFO memory command. This will provide a comprehensive breakdown of memory utilization. Pay close attention to these specific metrics:

  • used_memory_human: The total amount of memory currently allocated by the Redis process for data and internal structures.
  • maxmemory_human: The hard limit configured for the instance.
  • maxmemory_policy: The currently active eviction strategy.
127.0.0.1:6379> INFO memory
# Memory
used_memory:1073741824
used_memory_human:1.00G
...
maxmemory:1073741824
maxmemory_human:1.00G
maxmemory_policy:noeviction

If the used_memory metric is approximately equal to or slightly exceeding the maxmemory value, and your active policy is noeviction, you have positively identified the immediate cause of the application failures.
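Because INFO memory output is plain `key:value` text, this check is easy to automate. A sketch (the field names are real INFO fields; the parser itself is illustrative):

```python
def parse_info(raw: str) -> dict:
    """Parse `INFO memory` output into a dict of string values."""
    fields = {}
    for line in raw.splitlines():
        if line.startswith("#") or ":" not in line:
            continue  # skip section headers and blank lines
        key, _, value = line.partition(":")
        fields[key] = value.strip()
    return fields


sample = """# Memory
used_memory:1073741824
maxmemory:1073741824
maxmemory_policy:noeviction"""

info = parse_info(sample)
usage = int(info["used_memory"]) / int(info["maxmemory"])
print(f"{usage:.0%} of maxmemory used, policy={info['maxmemory_policy']}")
```

A monitoring job could run this against `redis-cli INFO memory` and alert well before usage reaches 100%.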

Step 2: Implement Immediate Remediation

The appropriate immediate fix depends heavily on the operational role of this specific Redis instance. Is it functioning as an ephemeral cache or as a persistent primary datastore?

Approach A: Configure an Eviction Policy (For Caching Scenarios)

If the Redis instance is utilized strictly as a cache (e.g., storing rendered HTML fragments, API responses, or temporary computation results), the architectural expectation is that older, less relevant data can be discarded to accommodate new entries. In this scenario, altering the maxmemory-policy is the correct approach.

The most frequently utilized eviction policies include:

  • allkeys-lru: Evicts the least recently used (LRU) keys across the entire dataset, regardless of TTL settings. This is often the most effective generic caching policy.
  • volatile-lru: Evicts the least recently used keys, but only among those that have an explicit expiration (TTL) configured.
  • allkeys-lfu: Evicts the least frequently used (LFU) keys, which tracks access frequency rather than mere recency.
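The decision logic above can be condensed into a small heuristic. This is a sketch of this guide's recommendations only (the function and its arguments are illustrative, not a Redis API):

```python
def recommended_policy(role: str, everything_disposable: bool) -> str:
    """Map a workload description to a maxmemory-policy, per the guidance above."""
    if role == "cache":
        # Evict across all keys if everything is disposable; otherwise
        # restrict eviction to keys that already carry a TTL.
        return "allkeys-lru" if everything_disposable else "volatile-lru"
    # Primary datastore: reject writes rather than silently drop data.
    return "noeviction"


print(recommended_policy("cache", True))  # allkeys-lru
```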

You can dynamically update this policy without restarting the Redis service:

127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK

Crucial: To ensure this configuration survives a service restart or server reboot, persist the directive in your redis.conf file:

# /etc/redis/redis.conf
maxmemory-policy allkeys-lru
Approach B: Increase the Memory Limit (For Primary Datastore Scenarios)

If Redis is acting as a primary system of record—such as storing critical user session state, application job queues (like Sidekiq or Celery), or real-time leaderboards—data loss via eviction is typically unacceptable. Changing the eviction policy might corrupt application state. Instead, you must allocate a higher memory ceiling, provided the underlying host machine (or container limits in Kubernetes/Docker) possesses available physical RAM.

Dynamically update the maximum limit (e.g., expanding it to 4 Gigabytes):

127.0.0.1:6379> CONFIG SET maxmemory 4gb
OK

Again, synchronize this change with your persistent configuration:

# /etc/redis/redis.conf
maxmemory 4gb
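Before raising the limit, confirm the host can actually back it. Note that redis.conf distinguishes binary from decimal unit suffixes (`1gb` = 1024^3 bytes, while `1g` = 10^9 bytes); the converter below mirrors that convention (the helper itself is illustrative):

```python
# Unit suffixes as documented in redis.conf: 'kb'/'mb'/'gb' are powers of
# 1024, while 'k'/'m'/'g' are powers of 1000.
UNITS = {"k": 1000, "kb": 1024, "m": 1000**2, "mb": 1024**2,
         "g": 1000**3, "gb": 1024**3}


def to_bytes(value: str) -> int:
    """Convert a redis.conf-style memory value ('4gb', '512mb') to bytes."""
    value = value.strip().lower()
    for suffix in sorted(UNITS, key=len, reverse=True):  # try 'gb' before 'g'
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * UNITS[suffix]
    return int(value)  # a bare number is already bytes


print(to_bytes("4gb"))  # 4294967296
```

Comparing `to_bytes("4gb")` against the host's free physical memory (minus headroom for Redis overhead, forks for RDB/AOF rewrites, and other processes) tells you whether the new limit is safe.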

Step 3: Long-Term Architectural Optimizations

Continuously provisioning additional RAM is a cost-inefficient strategy that merely delays inevitable architectural bottlenecks. To ensure long-term stability, integrate these optimization practices:

  1. Enforce Expirations (TTL): Audit your application code to guarantee that volatile keys are created with explicit expiration times using EXPIRE key seconds or atomic commands like SET key value EX seconds. Orphaned keys without TTLs are the primary driver of creeping OOM situations.
  2. Utilize Memory-Efficient Data Structures: Redis is highly optimized for specific data types. Hashes (HSET) are exceptionally memory-efficient when they contain a small number of fields (below the configured `hash-max-listpack-entries` threshold, formerly `hash-max-ziplist-entries`). Instead of storing 100 separate string keys for the attributes of a user profile, store them as 100 fields within a single Redis Hash.
  3. Perform Routine Key Space Analysis: Proactively monitor the keyspace using built-in utilities like redis-cli --bigkeys to identify abnormally large individual keys or anomalous patterns consuming disproportionate memory. For granular insights, utilize tools to analyze an RDB snapshot offline to pinpoint memory waste without impacting production performance.
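Point 1 above is easiest to enforce through a single write path. A sketch, assuming a redis-py-style client exposing `set(key, value, ex=...)` (the wrapper name and default TTL are illustrative assumptions):

```python
DEFAULT_TTL_SECONDS = 3600  # assumed default; tune per workload


def cache_set(client, key, value, ttl=DEFAULT_TTL_SECONDS):
    """Write through one choke point so no key is ever created without a TTL."""
    if not ttl or ttl <= 0:
        raise ValueError(f"refusing to create {key!r} without an expiration")
    # Equivalent to the atomic command: SET key value EX ttl
    return client.set(key, value, ex=ttl)
```

Routing all cache writes through a wrapper like this turns "orphaned key without a TTL" from a silent memory leak into an immediate, visible error.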

Quick Reference Commands

# 1. Connect to Redis and check current memory statistics
redis-cli INFO memory

# 2. Verify current configuration limits and policies
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy

# 3. Dynamic Fix A: Change eviction policy to LRU (Recommended for Cache workloads)
redis-cli CONFIG SET maxmemory-policy allkeys-lru

# 4. Dynamic Fix B: Increase absolute memory limit to 4GB (Recommended for Datastore workloads)
redis-cli CONFIG SET maxmemory 4gb

# 5. Diagnostic: Scan for abnormally large keys causing memory pressure
redis-cli --bigkeys

Error Medic Editorial

Our team of seasoned Site Reliability Engineers (SREs) and DevOps architects provides actionable, production-tested solutions for complex infrastructure and database issues.
