How I Traced a Data Leak Between Two WordPress Sites to a Redis Cache Key Collision
The Symptom: Another Client's Products in the Wrong Dashboard
A client running two WooCommerce stores on the same VPS flagged something strange. Their UK store was intermittently showing product data from their EU store in the admin dashboard. Category counts were wrong, widget content flickered between sites, and the WooCommerce status page occasionally displayed the wrong store's order totals.
At first glance it looked like a database issue — maybe a shared table prefix causing cross-contamination. But both sites had distinct databases. The real culprit was something I see regularly when managing multi-site servers: a single Redis instance serving multiple WordPress installations without proper key isolation.
The Investigation
Both sites were running the Redis Object Cache plugin (version 2.5.4) with PhpRedis on a CloudPanel server. Redis was installed once, listening on 127.0.0.1:6379, and both wp-config.php files had near-identical configurations:
// Site A: uk-store wp-config.php
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_CACHE', true);
// Site B: eu-store wp-config.php
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_CACHE', true);
No WP_CACHE_KEY_SALT. No WP_REDIS_PREFIX. No WP_REDIS_DATABASE. Both sites were writing to Redis database 0 with identical key structures.
I confirmed the collision by connecting to Redis directly. (Note that KEYS blocks the server while it scans the whole keyspace, so on a busy production instance SCAN is the safer choice — this instance was small enough not to matter.)
redis-cli
127.0.0.1:6379> SELECT 0
OK
127.0.0.1:6379> KEYS *options*
The output showed keys like wp_:options:alloptions — but there was only one copy. Whichever site last populated that cache key won the race. The other site then read stale or completely wrong data from the shared key.
To verify the scope of contamination, I checked how many keys existed versus how many there should be for two separate sites:
127.0.0.1:6379> DBSIZE
(integer) 4217
For two active WooCommerce stores, this number was far too low. There should have been roughly double that. Keys were being silently overwritten every time either site refreshed its cache.
Why This Happens
WordPress object caching generates Redis keys using the database table prefix — typically wp_. The key format looks like this:
wp_:options:alloptions
wp_:posts:42
wp_:term_meta:15
When two WordPress installations use the same table prefix (wp_) and connect to the same Redis database, their keys are identical. There is no built-in mechanism to distinguish which site owns which key.
This is not a bug in Redis or in the plugin. Redis is doing exactly what it is told to do. The problem is that nobody told it these were two different applications.
The symptoms are often intermittent because cache entries expire and get regenerated. Whichever site triggers the regeneration first writes its data to the shared key. The other site then reads it. During low traffic periods, you might not notice anything wrong. Under load, the flickering between datasets becomes obvious.
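The collision mechanism can be sketched without touching Redis at all. A minimal shell simulation — the `build_key` helper and its arguments are illustrative, not the plugin's actual code:

```shell
# Both sites build cache keys from the same table prefix, cache group, and
# key name, so the resulting Redis keys are byte-for-byte identical.
build_key() {
  # $1 = table prefix, $2 = cache group, $3 = key name
  printf '%s:%s:%s\n' "$1" "$2" "$3"
}

key_uk=$(build_key "wp_" "options" "alloptions")   # built by the UK store
key_eu=$(build_key "wp_" "options" "alloptions")   # built by the EU store

if [ "$key_uk" = "$key_eu" ]; then
  echo "collision: both sites map to $key_uk"
fi
```

Nothing in the key identifies which site wrote it, which is exactly why the last writer wins.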
The Fix
The fix involves two complementary changes: assigning a unique cache key prefix per site, and optionally separating them into different Redis databases.
Step 1: Add a Unique Cache Key Prefix
In each site's wp-config.php, add a WP_REDIS_PREFIX constant (or WP_CACHE_KEY_SALT if running an older version of the Redis Object Cache plugin):
// Site A: uk-store wp-config.php
define('WP_REDIS_PREFIX', 'uk-store');
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_CACHE', true);
// Site B: eu-store wp-config.php
define('WP_REDIS_PREFIX', 'eu-store');
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_CACHE', true);
The prefix should be short, human-readable, and unique per site. Avoid hashes or UUIDs — when you are debugging at 2am, uk-store:wp_:options:alloptions is far easier to parse than a8f3e...:wp_:options:alloptions.
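Extending the same sketch, a per-site prefix makes the keys diverge. Again, `build_key` is an illustration of the naming scheme, not the plugin's internals:

```shell
# With a per-site prefix prepended, the same group/key pair no longer collides.
build_key() {
  # $1 = site prefix, $2 = table prefix, $3 = cache group, $4 = key name
  printf '%s:%s:%s:%s\n' "$1" "$2" "$3" "$4"
}

key_uk=$(build_key "uk-store" "wp_" "options" "alloptions")
key_eu=$(build_key "eu-store" "wp_" "options" "alloptions")

if [ "$key_uk" != "$key_eu" ]; then
  echo "isolated: $key_uk vs $key_eu"
fi
```

Each site now owns a disjoint keyspace, so neither can overwrite the other's entries.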
Step 2: Assign Separate Redis Databases (Optional but Recommended)
Redis provides 16 logical databases by default (numbered 0–15). Assigning each site its own database adds a second layer of isolation and makes it easy to flush one site's cache without affecting others:
// Site A
define('WP_REDIS_DATABASE', 0);
// Site B
define('WP_REDIS_DATABASE', 1);
If you are managing more than 16 sites on a single server, you will need to rely solely on prefixes — or increase the databases directive in /etc/redis/redis.conf:
databases 32
Then restart Redis:
sudo systemctl restart redis-server
Step 3: Flush and Verify
After updating both wp-config.php files, flush the existing contaminated cache. Bear in mind that FLUSHALL clears every database on the instance — including anything other services may have stored there — so run it in a maintenance window if Redis is shared:
redis-cli FLUSHALL
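Once prefixes are in place, a targeted flush becomes possible instead of the blunt FLUSHALL. A sketch of the pattern-filtering step, using a hard-coded key list in place of a live key scan (the key names are illustrative):

```shell
# Filter a key listing down to one site's entries. In production the listing
# would come from: redis-cli --scan --pattern 'uk-store*'
printf '%s\n' \
  "uk-store:wp_:options:alloptions" \
  "eu-store:wp_:options:alloptions" \
  "uk-store:wp_:posts:42" \
  "eu-store:wp_:posts:42" \
| grep '^uk-store:'
```

Piping a live `redis-cli --scan` listing into `xargs redis-cli DEL` removes only the matched site's keys; with separate logical databases, `redis-cli -n 1 FLUSHDB` does the same per database.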
Then verify that each site is now writing keys with the correct prefix:
redis-cli
127.0.0.1:6379> KEYS uk-store*
127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> KEYS eu-store*
You should now see entirely separate keyspaces. Run a quick wp cache flush via WP-CLI on each site to confirm the object cache drop-in is working correctly:
wp cache flush --path=/var/www/uk-store
wp cache flush --path=/var/www/eu-store
Both should return Success: The cache was flushed. If you see an error about Redis not being reachable, double-check the WP_CACHE constant and ensure the object-cache.php drop-in is present in wp-content/.
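Checking for the drop-in across sites is easy to script. A sketch — the site paths are examples matching the ones used earlier, so adjust them to your layout:

```shell
# Report whether each site has the object-cache.php drop-in installed.
for site in /var/www/uk-store /var/www/eu-store; do
  if [ -f "$site/wp-content/object-cache.php" ]; then
    echo "$site: drop-in present"
  else
    echo "$site: drop-in MISSING"
  fi
done
```

A missing drop-in means WordPress silently falls back to the default non-persistent cache, so the site works but gains nothing from Redis.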
Preventing This on New Server Builds
I now include Redis prefix configuration in every server setup checklist. For anyone managing multiple WordPress sites on a single server, here is the minimum wp-config.php Redis block I use:
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_REDIS_DATABASE', 0); // increment per site
define('WP_REDIS_PREFIX', 'sitename'); // unique per site
define('WP_REDIS_TIMEOUT', 2);
define('WP_REDIS_READ_TIMEOUT', 2);
define('WP_CACHE', true);
The timeout values prevent PHP-FPM workers from hanging if Redis becomes unresponsive — a separate but related problem I have written about in the context of PHP-FPM worker exhaustion.
For servers using Object Cache Pro instead of the free Redis Object Cache plugin, the configuration is slightly different but the principle is the same:
define('WP_REDIS_CONFIG', [
'token' => 'your-licence-token',
'host' => '127.0.0.1',
'port' => 6379,
'database' => 0,
'prefix' => 'sitename',
'timeout' => 2.0,
'read_timeout' => 2.0,
]);
Monitoring for Collisions
If you suspect a collision but are not sure, the redis-cli MONITOR command shows every command hitting Redis in real time. MONITOR carries a measurable performance cost on busy instances, so do not leave it running longer than needed. Open two terminal sessions, load each site in a browser, and watch:
redis-cli MONITOR | grep -i "options:alloptions"
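The collision signature in the MONITOR stream is two different client ports writing the same key. The sample lines below mimic MONITOR's output format but are fabricated for illustration:

```shell
# Two SET commands for the same key from different source ports (i.e. from
# different PHP-FPM pools) are the giveaway. Sample log, not a real capture:
log='1700000000.000001 [0 127.0.0.1:52001] "SET" "wp_:options:alloptions" "..."
1700000000.000002 [0 127.0.0.1:52950] "SET" "wp_:options:alloptions" "..."'

writers=$(printf '%s\n' "$log" | grep -c 'options:alloptions')
echo "writes to the shared key: $writers"
```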
If you see writes to the same key from different PHP-FPM pools (check the source IP/port), you have a collision. You can also audit programmatically with WP-CLI:
wp redis info --path=/var/www/uk-store
This outputs connection details, hit rates, and the configured prefix — an easy way to verify each site's configuration without reading wp-config.php directly.
The Bigger Picture
This issue is particularly common on agency servers where new client sites get spun up from a template. The template has Redis enabled — great for performance — but nobody changes the prefix. It works fine with one site. The second site triggers the collision.
I have seen this on CloudPanel, cPanel/WHM, and RunCloud setups. It is not specific to any hosting panel. Any environment where a single Redis instance serves multiple WordPress installations is vulnerable unless explicitly configured otherwise.
The data leak is a real concern too. In this case it was product data leaking between two stores owned by the same client. But on a shared agency server, it could be one client's draft posts, private pages, or customer data appearing in another client's cache. That is a GDPR problem, not just a debugging inconvenience.
Stop Firefighting. Start Maintaining.
I manage 70+ WordPress sites for UK agencies and businesses. Whether you need ongoing maintenance, emergency support, or a one-off performance fix — I can help.
