When Redis Object Cache Silently Drops Keys — Diagnosing Memory Eviction on a WooCommerce Store


The Green Light That Lies

A client's WooCommerce store — around 1,200 products, 400 orders per day — had Redis object caching enabled for over a year. The Redis Object Cache plugin showed a green "Connected" status. The hit ratio hovered around 78%. Everything looked fine in the dashboard.

But the client kept reporting intermittent slowdowns. Product pages would occasionally take 6-8 seconds to load. Worse, a customer spotted stale pricing on a variable product — the price had been updated two days earlier but the old value was still appearing for some visitors. The kind of issue that quietly erodes trust and revenue.

I SSH'd in expecting a quick investigation. It turned into one of those sessions where each layer you peel back reveals another problem underneath.

Checking What "Connected" Actually Means

The Redis Object Cache plugin confirming a connection tells you almost nothing about Redis health. It is the equivalent of checking whether your car has fuel without looking at the engine temperature.

The real health check starts here:

redis-cli INFO stats | grep -E "keyspace_hits|keyspace_misses|evicted_keys"

On this server, the output was:

keyspace_hits:4821903
keyspace_misses:2917445
evicted_keys:1438291

That evicted_keys number was the red flag. Nearly 1.5 million keys had been thrown away by Redis because it ran out of memory. Every evicted key meant WordPress fell back to a database query — silently, with no error, no log entry, no warning in the admin panel.

The raw counters told a different story from the dashboard: 4.8 million hits against 2.9 million misses works out to an effective hit rate of roughly 62%, not the 78% the plugin reported — and every evicted key turned a would-be hit into a miss. WordPress was hammering the database on every one of those misses, which explained the intermittent slowdowns. Traffic spikes pushed more keys into Redis, more keys got evicted, more queries hit MariaDB, and the whole thing cascaded.
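You can compute the effective hit rate straight from the counters rather than trusting the dashboard. A quick sketch (the tr strips the carriage returns redis-cli includes in INFO output):

```shell
# Effective hit rate = hits / (hits + misses), from the cumulative counters.
redis-cli INFO stats | tr -d '\r' | awk -F: '
  /^keyspace_hits:/   { hits = $2 }
  /^keyspace_misses:/ { misses = $2 }
  END { printf "effective hit rate: %.1f%%\n", hits * 100 / (hits + misses) }
'
```

On the numbers above, that works out to roughly 62% — well below the figure the plugin showed.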

The 64MB Trap

Next I checked the memory configuration:

redis-cli INFO memory | grep -E "used_memory_human|maxmemory_human|maxmemory_policy"
used_memory_human:63.87M
maxmemory_human:64.00M
maxmemory_policy:allkeys-lru

There it was. Redis had been allocated 64MB — a default that is common on managed hosting and CloudPanel setups. For a blog with a dozen posts, 64MB is fine. For a WooCommerce store with 1,200 products, hundreds of variations, complex tax rules, and shipping zones, 64MB fills up in minutes.

The eviction policy was allkeys-lru (Least Recently Used), which sounds reasonable but has a subtle problem for WordPress. LRU evicts keys that haven't been accessed recently. But WordPress's access pattern isn't uniform — the alloptions key and WooCommerce session data get hit on every single request, while individual product cache keys get hit less frequently. Under LRU, a product page that hasn't been viewed in 30 minutes loses its cache, even though it might be the store's best seller viewed 200 times a day. The eviction algorithm doesn't account for frequency, only recency.
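You can see recency-only tracking at work for yourself: while an LRU policy is active, Redis reports how long each key has sat idle via OBJECT IDLETIME. A rough sketch for listing the keys next in line for eviction — the key pattern here is illustrative, adjust it to your plugin's naming:

```shell
# Print the ten longest-idle keys matching a pattern. Under allkeys-lru
# these are the next eviction candidates, no matter how often they are
# hit overall. The '*product*' pattern is illustrative only.
redis-cli --scan --pattern '*product*' | while read -r key; do
  printf '%s\t%s\n' "$(redis-cli OBJECT IDLETIME "$key")" "$key"
done | sort -rn | head -10
```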

The Multi-Site Key Collision

While running diagnostics, I noticed something else. This VPS hosted three WordPress sites — the WooCommerce store and two smaller brochure sites. I scanned the Redis keyspace:

redis-cli --scan --pattern "*" | head -50

All three sites were writing to the same Redis database (db 0) with no prefix differentiation. Keys from the brochure sites were competing for space with WooCommerce product data. Worse, I found evidence of key collisions — both the WooCommerce store and one brochure site used the same theme, and their transient keys had identical names.

This explained the stale pricing issue. When site B wrote a transient with the same key name as site A's cached product data, site A would read site B's value on the next request. The data wasn't just stale — it was from the wrong site entirely.

I confirmed by checking each site's wp-config.php:

grep -r "WP_REDIS" /home/*/htdocs/wp-config.php

None of them defined WP_REDIS_PREFIX or WP_REDIS_DATABASE. All three sites were sharing a single, undersized Redis namespace.

The alloptions Race Condition

There was one more issue hiding in the mix. The client mentioned that toggling WooCommerce settings — enabling a shipping method, changing a tax rate — sometimes didn't take effect for several minutes.

WordPress stores most of its options in a single cached key called alloptions. On every page load, WordPress fetches this key from Redis instead of running a SELECT query against wp_options. The problem is a known race condition: if two requests arrive simultaneously, one reads the old alloptions while the other writes the updated version. The stale read gets cached back to Redis, overwriting the fresh value.

With a local in-memory cache such as APCu, the damage is contained to a single server and tends to resolve quickly. With Redis, the stale value persists across all workers and all requests until it either expires or gets explicitly flushed.

I confirmed this was happening by watching Redis in real time during an option update:

redis-cli MONITOR | grep alloptions

Two SET commands arrived within 3 milliseconds of each other — the second one overwrote the first with older data.

The Fix

I addressed all three problems in sequence.

1. Increased Redis memory and switched eviction policy

In /etc/redis/redis.conf:

maxmemory 512mb
maxmemory-policy allkeys-lfu

The switch from allkeys-lru to allkeys-lfu (Least Frequently Used) is significant. Available since Redis 4.0, LFU tracks access frequency, not just recency. Hot keys like alloptions and WooCommerce session data that get hit on every request are retained, while genuinely unused keys are evicted first. For WordPress's skewed access pattern, LFU is a far better fit than LRU.
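LFU's frequency counter can also be tuned in redis.conf. The values below are Redis's own defaults and rarely need changing, but they are worth knowing about:

```
# lfu-log-factor controls how quickly the frequency counter saturates
# (higher = more hits needed to reach the maximum); lfu-decay-time is
# how many minutes of inactivity it takes for the counter to halve.
lfu-log-factor 10
lfu-decay-time 1
```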

I sized the memory at 512MB based on the working set. After a flush and warmup, used_memory stabilised at around 310MB — leaving headroom for traffic spikes.
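To keep an eye on that headroom over time, the usage percentage can be derived from INFO memory. A small sketch:

```shell
# Percentage of maxmemory currently in use; tr strips the trailing \r
# that redis-cli includes in INFO output lines.
redis-cli INFO memory | tr -d '\r' | awk -F: '
  /^used_memory:/ { used = $2 }
  /^maxmemory:/   { max = $2 }
  END { if (max > 0) printf "memory used: %d%%\n", used * 100 / max }
'
```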

sudo systemctl restart redis-server

2. Isolated each site's cache namespace

In each site's wp-config.php, I added unique prefixes and database numbers:

/* Site 1 - WooCommerce store */
define('WP_REDIS_DATABASE', 0);
define('WP_REDIS_PREFIX', 'woo_store:');

/* Site 2 - Brochure site */
define('WP_REDIS_DATABASE', 1);
define('WP_REDIS_PREFIX', 'brochure_a:');

/* Site 3 - Brochure site */
define('WP_REDIS_DATABASE', 2);
define('WP_REDIS_PREFIX', 'brochure_b:');

Using both WP_REDIS_DATABASE (separate logical databases) and WP_REDIS_PREFIX (key namespacing) provides belt-and-braces isolation. Either one alone would work, but using both together eliminates any possibility of key collision.

After adding the constants, I flushed each site's object cache:

wp cache flush --path=/home/store/htdocs
wp cache flush --path=/home/brochure-a/htdocs
wp cache flush --path=/home/brochure-b/htdocs
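With the prefixes in place, it is easy to verify that each logical database now contains only its own site's keys. A quick sanity check, assuming the prefix names defined above:

```shell
# Count keys per prefix in each logical database. After the change,
# db 0 should contain only woo_store:* keys, db 1 only brochure_a:*,
# and db 2 only brochure_b:*.
for db in 0 1 2; do
  echo "db $db:"
  redis-cli -n "$db" --scan --pattern '*' | awk -F: '{print $1}' | sort | uniq -c | sort -rn
done
```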

3. Switched from Predis to PhpRedis

The site was using the Predis PHP library as its Redis client. Predis is a pure-PHP implementation; PhpRedis is a compiled C extension that is faster, uses less memory, and avoids the per-request cost of loading a userland library.

sudo apt install php8.2-redis
sudo systemctl restart php8.2-fpm

Then in wp-config.php:

define('WP_REDIS_CLIENT', 'phpredis');

4. Added connection resilience settings

To prevent the cache layer from becoming a liability during Redis restarts or brief outages:

define('WP_REDIS_TIMEOUT', 1);
define('WP_REDIS_READ_TIMEOUT', 1);

With these settings, if Redis becomes unreachable, WordPress falls back to database queries within 1 second instead of hanging for the default 5-second timeout. On a store doing 400 orders a day, those 4 extra seconds per request during a Redis restart would queue up PHP-FPM workers and risk a cascading failure.

The Results

After the changes, I monitored for a week:

redis-cli INFO stats | grep evicted_keys
evicted_keys:0

Zero evictions. The hit ratio climbed from 78% to 96%. Average page load dropped from 1.8 seconds to 0.6 seconds. The intermittent 6-8 second spikes disappeared entirely. MariaDB CPU usage dropped by roughly 40% because Redis was actually doing its job now.

The stale pricing issue never recurred — key isolation eliminated the cross-site contamination, and LFU eviction ensured the alloptions key stayed in cache reliably.

Monitoring Redis Properly

I added a simple monitoring script to the server's crontab that alerts when new evictions occur or memory usage exceeds 80%. One subtlety: evicted_keys is cumulative since the last Redis restart, so the script records the previous reading in a state file and alerts only on the delta — otherwise it would fire on every run forever after the first eviction:

#!/bin/bash
# evicted_keys is cumulative since the last Redis restart, so compare
# against the value recorded on the previous run and alert on the delta.
STATE=/var/tmp/redis-evicted.last

EVICTED=$(redis-cli INFO stats | grep "^evicted_keys:" | cut -d: -f2 | tr -d '\r')
MEM_USED=$(redis-cli INFO memory | grep "^used_memory:" | cut -d: -f2 | tr -d '\r')
MEM_MAX=$(redis-cli INFO memory | grep "^maxmemory:" | cut -d: -f2 | tr -d '\r')

LAST=$(cat "$STATE" 2>/dev/null || echo 0)
echo "$EVICTED" > "$STATE"

if [ $((EVICTED - LAST)) -gt 0 ]; then
  echo "Redis eviction alert: $((EVICTED - LAST)) new evictions" | mail -s "Redis Alert" [email protected]
fi

if [ "$MEM_MAX" -gt 0 ]; then
  PERCENT=$((MEM_USED * 100 / MEM_MAX))
  if [ "$PERCENT" -gt 80 ]; then
    echo "Redis memory at ${PERCENT}%" | mail -s "Redis Memory Alert" [email protected]
  fi
fi

You can also check Redis health quickly via WP-CLI:

wp redis status

The key metrics to watch on an ongoing basis: evicted_keys should be zero or near-zero, hit_rate should be above 90%, and used_memory should have at least 20% headroom below maxmemory.

The Takeaway

Redis is one of those tools that works brilliantly when configured properly and fails silently when it isn't. The default settings — 64MB memory, no key prefix, LRU eviction — are inadequate for any WooCommerce store of meaningful size. And because the failure mode is "fall back to database queries without logging anything," you can run for months with a broken cache and never know it.

This is exactly the kind of silent degradation that proactive maintenance catches. Not a dramatic outage, not an error message — just a slow bleed of performance and data integrity that nobody notices until a customer sees the wrong price.

Need help with your Redis setup or WooCommerce performance? Check out my maintenance plans or read about how I handled a full-blown WooCommerce database meltdown when Redis wasn't in the picture at all.

Stop Firefighting. Start Maintaining.

I manage 70+ WordPress sites for UK agencies and businesses. Whether you need ongoing maintenance, emergency support, or a one-off performance fix — I can help.

View Maintenance Plans Get in Touch
